The federal government manages about 640 million acres of land in the United States, including lands in national forests, grasslands, parks, refuges, reservoirs, and military bases and installations. Of the total federal lands, BLM and the Forest Service manage about 450 million acres for multiple uses, including grazing, timber harvest, recreation, minerals, water supply and quality, and wildlife habitat. BLM’s 12 state offices manage nearly 250 million acres in 12 western states, and the Forest Service’s 9 regional offices manage more than 190 million acres across the nation (see figs. 1 and 2). The majority of federal lands are located in the western half of the country.

The federal government has managed grazing on federal lands for more than 100 years. Following the passage of the Taylor Grazing Act of 1934, the Department of the Interior created the Division of Grazing, later renamed the Grazing Service, to administer provisions of the act. Subsequently, the Grazing Service was merged with the General Land Office to form BLM. The act was passed to stop degradation of public lands caused by overgrazing and soil deterioration; to provide for the orderly use, improvement, and development of public lands; and for other purposes. The act also provided for the issuance of permits and leases for these lands and set requirements for the distribution of funds received from grazing. The Forest Service managed grazing under its general authorities until 1950, when Congress enacted the Granger-Thye Act, specifically authorizing the Secretary of Agriculture to issue grazing permits on national forest lands and other lands under the department’s administration. Additional laws affecting grazing on both BLM and western Forest Service lands were enacted in the 1970s. BLM’s and the Forest Service’s range grazing programs administer livestock grazing for permittees.
Agency law enforcement assists when necessary—primarily to address grazing violations by nonpermittees that cannot be handled administratively. To provide access to grazing, the agencies divide their rangelands into allotments, which can vary in size from a few acres to hundreds of thousands of acres. Because of the land ownership patterns that occurred when the lands were settled, the allotments can be adjacent to private lands or intermingled with private lands. Under its authorities, BLM issues permits for grazing in allotments within its grazing districts and leases for grazing on BLM-administered lands outside grazing districts. To be eligible for a permit or lease on one of BLM’s allotments, ranchers, among other things, are required to own or control land or water, called a base property, to which preference for obtaining a permit or lease is attached. The Forest Service, which does not have grazing districts, uses permits to authorize grazing in its allotments. To be eligible for a permit under Forest Service policy, ranchers, among other things, must own base property and the livestock to be permitted. The agencies’ permits and leases specify the number and type of livestock allowed on the allotments, the time and duration of use for grazing, and special conditions or use restrictions. Agency field office staff conduct compliance inspections to help ensure that permittees are meeting the terms and conditions of their permits or leases. The agencies may modify permits or leases if range conditions are being degraded or suspend or cancel them if permit conditions are violated. With a few minor exceptions, permittees pay a grazing fee for the use of the federal land. The grazing fee BLM and the Forest Service charge in western states is based on a formula that was originally established by law to prevent economic disruption and harm to the western livestock industry, among other things. 
The formula expired after 7 years but was extended indefinitely by Executive Order 12548 and has been incorporated into the agencies’ regulations. The fee derived from the formula is generally lower than the fees charged by other agencies, states, and private ranchers. In grazing year 2016, BLM charged ranchers $2.11 per animal unit month for horses/cattle and $0.42 for sheep and goats; the Forest Service charged the same rates per head month. According to the National Agricultural Statistics Service, based on the average private grazing land lease rate per animal unit month, the commercial value of forage in western states ranged from $9 to $39 in grazing year 2016. As we found in September 2005, the total grazing fees generated by federal agencies amounted to less than one-sixth of the agencies’ expenditures to manage grazing in 2004. We found that BLM and the Forest Service use most of the grazing fee receipts for range protection and improvements and deposit some receipts to the Department of the Treasury’s general fund, with some receipts distributed to states and counties. See appendix II for additional information on grazing, permits, and fees for BLM and the Forest Service. Unauthorized grazing includes instances in which livestock owners graze on BLM or Forest Service allotments without a permit or lease, as well as instances in which those with permits or leases violate the terms and conditions of those documents, such as by grazing more livestock than allowed by permit, grazing in areas that are closed to livestock, or grazing during unauthorized times of the year.
It may be unintentional (non-willful) on the part of the livestock owner, such as when livestock stray through an unlatched gate into an area where they are not permitted to graze, or it may be intentional (willful or repeated willful), such as when a livestock owner purposefully grazes livestock in a manner that is not allowed by a permit or grazes livestock, once or multiple times, without obtaining a permit. Under their applicable regulations, BLM and the Forest Service may address unauthorized grazing by charging permittees penalties for unauthorized grazing; revising their permits; impounding livestock; or taking action that could lead to criminal penalties, most commonly for nonpermittees, as follows: BLM’s grazing regulations establish three levels of unauthorized grazing—non-willful, willful, and repeated willful—with progressively higher penalties for each level. The regulations require that BLM send out a written notice for every potential unauthorized grazing incident. Under certain circumstances, BLM can approve a nonmonetary settlement for non-willful unauthorized grazing. For willful and repeated willful incidents, in addition to the monetary penalties—the value of the forage consumed—the regulations specify that the offender shall be charged for any damages to the land and reasonable agency expenses incurred to resolve the violation, and BLM shall suspend or cancel all or portions of the grazing permit for repeated willful incidents. BLM may impound and dispose of livestock if the owner is unknown or the permittee fails to remove the livestock when ordered. BLM also has the authority to cite permittees and nonpermittees for grazing violations that subject them to criminal penalties. The Forest Service’s grazing regulations require the agency, except in certain circumstances, to determine a grazing use rate for unauthorized grazing.
The regulations define unauthorized grazing as (1) livestock not authorized by permit to graze upon the land, (2) an excess number of livestock grazed by permittees, or (3) permitted livestock grazed outside the permitted grazing season or allotment. Under the regulations, the Forest Service can cancel or suspend a permit if the permittee does not comply with provisions and requirements in the grazing permit or applicable regulations. The agency can impound and dispose of unauthorized livestock or livestock in excess of those authorized by a grazing permit if they are not removed from the area within the periods prescribed by regulation. The Forest Service also has the authority to cite permittees and nonpermittees for grazing violations that subject them to criminal penalties. In our December 1990 report on unauthorized grazing on BLM lands, we found that BLM had no systematic method for detecting unauthorized grazing, and when offenses were detected, penalties were rarely assessed. We made five recommendations to improve the effectiveness of BLM’s unauthorized grazing detection and deterrence efforts:

- Develop an unauthorized grazing detection strategy that will (1) establish detection as a workload measure and a reportable accomplishment for which managers are held accountable, (2) use visits to randomly selected allotments to provide systematic compliance coverage, and (3) target additional follow-up visits for those livestock operators who have a history of repeated violations.
- Either (1) ensure that penalties are assessed for all non-willful unauthorized grazing violations as provided for in BLM regulations or (2) amend BLM regulations to establish a procedure for the informal resolution of non-willful unauthorized grazing violations at the local level.
- Require that all unauthorized grazing incidents—including those now handled informally—be documented and made part of the permanent unauthorized grazing file.
- Ensure that field staff impose the penalties required under BLM regulations for willful and repeated willful unauthorized grazing.
- Develop a management information system to provide timely, reliable, and adequate information on such things as (1) the number of compliance visits conducted, (2) the number and level of violations identified, and (3) how each violation is resolved, including those resolved informally.

BLM agreed with the recommendations and implemented one of the five by developing an unauthorized grazing detection strategy. The agency took steps toward implementing some of the others, but did not fully implement the remaining four recommendations. The frequency and extent of unauthorized grazing on BLM and Forest Service lands are largely unknown because, according to agency officials, the agencies prefer to handle most incidents informally and do not record them. The agencies’ databases contained information on nearly 1,500 incidents of unauthorized grazing where formal action was taken by the agencies’ range program or law enforcement field staff for grazing years 2010 through 2014. Unauthorized grazing incidents were recorded in the range management databases when a penalty for unauthorized grazing was billed to a permittee by program staff and in the law enforcement databases when a formal report or notice was entered by a law enforcement officer. However, agency field staff told us that most incidents they identify are handled informally—their preferred practice—and are not recorded in their databases or consistently recorded in paper files. Agency field staff told us that unauthorized grazing can severely degrade the range under certain conditions, such as drought, and also told us of other effects, such as creating conflicts between the agencies’ staff, ranchers, and other stakeholders.
The agencies’ databases identified nearly 1,500 incidents of unauthorized grazing where formal action was taken by range program staff or by agency law enforcement officers for grazing years 2010 through 2014; BLM data identified a total of 859 incidents, and Forest Service data identified 618 incidents (see table 1). The agencies’ grazing program field staff generally handle unauthorized grazing by permittees through their administrative process, and law enforcement officers primarily handle unauthorized grazing by those without permits through warnings or criminal citations. Each agency has separate range management and law enforcement databases. For example, unauthorized grazing is recorded in BLM’s range management database when a formal action is taken to send a bill to a permittee for penalties—and in some cases charges for damage to the land or to recoup the administrative expenses of the agency—for incidents of unauthorized grazing. In some cases, BLM may include penalties for more than one incident of unauthorized grazing in one bill. The Forest Service’s range management database contains incidents where a formal action was taken to send a bill for penalties for unauthorized grazing incidents. The law enforcement databases of both agencies contain incidents where formal documentation, such as an incident report (record of observation), warning notice, or violation notice was prepared by a law enforcement officer and entered into the database. See appendix III for detailed information on the extent and frequency of unauthorized grazing formally reported in the agencies’ databases. The full extent and frequency of unauthorized grazing are unknown because most unauthorized grazing incidents identified by the agencies’ range program field staff are handled informally and are not recorded in their databases, according to agency officials. We found that these incidents were inconsistently documented in their paper files.
The databases do not include incidents that are informally resolved with telephone calls or by visits from the agency program staff to the permittees asking them to remove their livestock from areas where they are not permitted. Staff we interviewed from all 22 BLM and Forest Service field offices told us they prefer such informal resolutions, particularly for incidents that appear to be non-willful and involve a few head of livestock with no resource damage. Agency staff said that these types of incidents account for the majority of unauthorized grazing they encounter. According to these field staff, the informal resolution allows them to resolve the problem quickly and remain focused on higher-priority activities, such as preparing environmental analyses, while maintaining collaborative and cooperative relations with permittees, who field staff said are largely compliant with their permits. Agency field staff from both agencies told us that they maintain paper files for permittees that may contain notes on informally resolved unauthorized grazing incidents that are not included in the databases, or may record a telephone call to a permittee in their telephone log. However, they said that such information is not consistently recorded in the permittee files, in part because they do not consider recording such information a priority. As a result, the agencies do not have complete information on unauthorized grazing and therefore may not have the documentation needed to deal with any instances of repeat offenders appropriately. Federal internal control standards call for agencies to clearly document all transactions and other significant events in a manner that allows the documentation to be readily available for examination. This provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties, such as external auditors. 
Until the agencies require that all incidents of unauthorized grazing be recorded, including those incidents resolved informally, BLM and the Forest Service will not have a complete record of unauthorized grazing incidents for tracking patterns of any potential repeat offenders. Unauthorized grazing may create various effects, such as severely degrading rangelands under certain conditions. Joint BLM/Forest Service riparian area management guidance states that compliance monitoring of grazing is critical because just a few weeks of unauthorized grazing can set back years of progress in restoring riparian areas—such as the narrow bands of green adjoining rivers, streams, or springs. Agency field staff we interviewed from 17 out of the 22 offices told us that under certain circumstances, unauthorized grazing can be more damaging than permitted grazing, such as when livestock are allowed into closed riparian areas during times of low precipitation or drought or graze in pastures earlier than permitted in the spring when grass is first sprouting. Stakeholders told us that the loss of native grass through unauthorized overgrazing may allow invasive species such as cheatgrass to grow, creating a potential fire hazard, or may result in a loss of habitat for threatened species such as sage grouse. During our field visits, we observed locations where unauthorized grazing had resulted in severely damaged natural springs, overgrazed meadows, and trampled streambeds. Agency field staff provided photographs showing unauthorized grazing in protected habitat areas and the effects of overgrazing from unauthorized use (see figs. 3, 4, and 5). Agency staff and stakeholders told us that unauthorized grazing can strain relationships and cause conflicts among various groups. 
Various stakeholders, such as range protection advocates and others, told us that they often observe unauthorized livestock grazing on the agencies’ allotments in the course of their resource monitoring or other activities and notify agency field staff. They are frustrated when it appears that the agencies do not take action. Agency staff we interviewed from 15 out of the 22 field offices told us that they are not always able to confirm and take action on such reporting because it is not timely or lacks specificity, and many staff said that following up to confirm such reports takes them away from higher-priority responsibilities. Agency staff also told us that permittees get frustrated if they do not take prompt action to stop unauthorized grazing by others, such as nonpermittees, which can also lead to conflicts among ranchers, for example, if a nonpermittee’s stray livestock consume the forage on a permittee’s allotment through unauthorized grazing. According to a wild horse advocate we interviewed, the advocate had experienced threats from ranchers engaged in unauthorized grazing on the range while the advocate was working with BLM to protect and manage the horses. Agency field staff and stakeholders told us there are only a small number of confrontational ranchers who do not recognize the agencies’ authority to manage the range and engage in willful unauthorized grazing, but they are concerned that the problem will grow. Agency field staff we interviewed from 5 out of the 22 field offices told us that high-profile cases of intentional unauthorized grazing and related antigovernment protests can affect agency decision making regarding enforcement, and staff at 4 out of the 22 field offices told us that not taking enforcement action on violators is likely to encourage more unauthorized grazing. 
For example, staff at one Forest Service office in Oregon told us that they were prepared to suspend a rancher’s permit for repeated unauthorized grazing violations but decided not to because of the standoff by antigovernment activists at Malheur National Wildlife Refuge. Agency staff we interviewed from 6 of the 22 field offices told us that lack of support from higher-level managers for strong enforcement action does not incentivize field staff to act on unauthorized grazing and, in some cases, lowers staff morale. The leaders of two stakeholder groups, Western Watersheds Project and Public Employees for Environmental Responsibility, jointly wrote a letter to the Secretary of the Interior in 2015 to express concern about the lack of effective range management of BLM lands in Nevada because of what they characterized as higher-level pressure on local managers to accept ranchers’ demands when settling unauthorized grazing incidents; agency staff from three of the local offices we spoke with shared this concern. BLM responded to the stakeholders’ letter on behalf of the Secretary, stating that the agency is committed to collaborating with permittees to resolve problems that reflect the interests of affected communities while also ensuring that public lands are managed and conserved for the future. Agency field staff we interviewed from 14 out of the 22 offices told us they generally do not have safety concerns while performing their duties, or did not mention any such concerns, even with the potential for confrontational tactics by some ranchers. BLM and Forest Service law enforcement officials told us that the overall trend for assaults and threats to agency staff had been down in recent years, but they do not track assaults and threats specifically related to grazing incidents. 
However, BLM field staff in Southern Nevada were directed by the state office not to visit grazing allotments after an armed standoff with a rancher over the agency’s impoundment of his cattle for unauthorized grazing. At one BLM field office we visited in Northern Nevada, there was a protest site established across the street in response to the office’s efforts to enforce unauthorized grazing regulations (see fig. 6). Field staff told us that as a result of a statewide BLM assessment, the office upgraded its security to include video cameras, card key locks, and entrance barricades. Finally, when unauthorized grazing is not detected, or is identified but not formally acted on, no penalties can be billed, resulting in forgone revenue. The agencies track penalties for unauthorized grazing billed and collected but do not track those forgone. Based on information from the agencies’ databases, BLM and the Forest Service collected nearly $450,000 for unauthorized grazing in grazing years 2010 through 2014. BLM collected about $426,000 and has a balance due of about $8,000 for unauthorized grazing during that time frame. The Forest Service collected about $24,000 and reported no balance due for the same time frame. BLM and the Forest Service undertake similar efforts to detect and deter unauthorized grazing, such as conducting compliance inspections on grazing allotments and charging penalties for unauthorized grazing, but agency field staff told us that such efforts have limited effectiveness for various reasons. While it is the preferred practice of agency field staff to resolve incidental unauthorized grazing informally, BLM and Forest Service regulations do not provide agency staff with the flexibility to resolve incidents informally with no written notice of violation and no penalty for unauthorized grazing charged. BLM and the Forest Service have undertaken a number of similar efforts to detect and deter unauthorized grazing.
These include conducting compliance inspections, charging penalties for unauthorized grazing, issuing willful and repeated willful violations, modifying permits, and issuing criminal citations. However, BLM and Forest Service field staff we spoke with said that these efforts can have limited effectiveness in practice for various reasons, such as field staff being unavailable to conduct compliance inspections because of other priorities or the penalty for unauthorized grazing being lower than the current commercial value of forage. Field staff from both agencies told us that conducting compliance inspections is one of their more effective efforts for detecting and deterring unauthorized grazing. Specifically, staff we interviewed from 16 of the 22 agency offices said that compliance inspections are always or usually effective in detecting unauthorized grazing, and staff from 13 of the 22 said that such inspections are always or usually an effective deterrent. However, field office staff we spoke with told us that they have a limited number of knowledgeable staff—in part because of significant staff turnover, including transfers and retirements—administering vast acres of rangeland, and growing workloads that require multitasking and spending significant time in the office. In addition, grazing allotments are often in remote locations that can take hours to access by vehicle, horseback, or hiking. As a result, they said that compliance inspections are not a top priority and some allotments are seldom visited, which may diminish inspections’ deterrent effect. The number of field range staff available to conduct compliance inspections declined for both agencies from 2010 to 2014—from 1,829 to 1,795 for BLM and from 443 to 399 for the Forest Service. On average, each BLM range staff member is responsible for approximately 85,000 acres, and each Forest Service range staff member is responsible for approximately 255,000 acres. 
At one BLM field office in Utah, field staff told us that 2 range staff are responsible for 2 million acres and that competing work priorities often keep these staff in the office rather than out in the field. Many field staff said they focus inspections on areas with a history of compliance issues but that some unauthorized grazing likely goes undetected. Agency field staff—primarily those from the Forest Service—told us that penalties for unauthorized grazing are too low under current agency policy to act as an effective deterrent. Field staff we interviewed from 6 out of the 9 Forest Service offices and 4 out of the 13 BLM offices said that penalties for unauthorized grazing are rarely or never an effective deterrent. As a result, some told us that there are permittees who view the penalties for unauthorized grazing as a cost of doing business because paying the penalties is cheaper than seeking forage elsewhere. For example, Forest Service staff at one field location told us that they are reluctant to send a bill for penalties for unauthorized grazing because it shows how low the penalty is and may encourage additional unauthorized grazing. We found that for grazing years 2008 through 2014, the Forest Service penalty for unauthorized grazing was $2.51 or less per head month, which was substantially less than BLM’s penalty for unauthorized grazing. The Forest Service calculates this penalty using the same formula that it and BLM use each year to calculate the permitted grazing fee. The formula for the permitted fee has a preset base value of $1.23 and other input values, such as the prices of private forage and beef cattle, which can vary annually. To calculate its penalty for unauthorized grazing using this formula, the Forest Service applies a higher preset base value of $3.80 rather than $1.23. (For more detailed information on the formula and calculation, see app. II.) 
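The fee and penalty calculations described above can be sketched in code. The formula shape below (a preset base value multiplied by a combination of the forage, beef cattle, and production-cost indexes, with a $1.35 floor on the permitted fee) is our reading of the statutory fee formula the report references, not a definitive implementation, and every index value is hypothetical, chosen only to illustrate how the $1.23 and $3.80 base values interact with the same inputs.

```python
# Hedged sketch of the grazing fee formula described above; the exact form
# (fee = base * (FVI + BCPI - PPI) / 100, with a $1.35 floor) is an
# assumption, and all index values below are hypothetical.

def grazing_fee(base, fvi, bcpi, ppi, floor=None):
    """Per-AUM fee from a preset base value and three annual indexes:
    fvi (private forage lease rates), bcpi (beef cattle prices), and
    ppi (prices paid for production inputs)."""
    fee = base * (fvi + bcpi - ppi) / 100
    return fee if floor is None else max(fee, floor)

# With the same hypothetical indexes, the two preset base values give:
fvi, bcpi, ppi = 110, 95, 105
permitted = grazing_fee(1.23, fvi, bcpi, ppi, floor=1.35)  # 1.23 -> floored
penalty = grazing_fee(3.80, fvi, bcpi, ppi)
print(f"permitted fee: ${permitted:.2f}/AUM, penalty rate: ${penalty:.2f}")
```

Because the indexes can move against each other, the same formula can also produce a very low or even negative penalty in a depressed year, which matches the report's observation that the Forest Service held its penalty at $2.24 when the calculation failed.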
For grazing years 2009 through 2012, the Forest Service’s unauthorized grazing penalty formula calculation would have resulted in a negative number or a number lower than the permitted grazing fee. To address this situation, a Forest Service official told us that the agency decided to hold the penalty for unauthorized grazing at $2.24 per head month until the formula calculation resulted in a higher penalty. In contrast, as shown in table 2, the BLM penalty for non-willful unauthorized grazing—based on commercial forage rates in each state—ranged from $8 to $33.50 per animal unit month for grazing years 2008 through 2014, and BLM doubled the penalty for willful incidents and tripled it for repeated willful incidents. In addition, with higher-level offenses (willful and repeated willful), BLM regulations require unauthorized grazing bills to also include “all reasonable expenses incurred by the United States in detecting, investigating, resolving violations, and livestock impoundment costs.” Compared to BLM’s penalties, the Forest Service penalty for unauthorized grazing is less likely to be a deterrent for unauthorized grazing, and the differing penalty structures result in inconsistency between the two federal agencies. As we noted in March 2003, penalties generally should be designed in such a way as to serve as a deterrent for unauthorized activities. Forest Service regulations incorporate Office of Management and Budget guidance, which directs that a fair market value be obtained for all services and resources provided to the public through establishment of a system of reasonable fee charges. By adopting a penalty structure for unauthorized grazing use that is, similar to BLM’s, based on the current commercial value of forage (a fair market value), the Forest Service’s penalty for unauthorized grazing can better serve as a deterrent to such grazing and be consistent with BLM’s penalty.
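BLM's tiered penalty structure described above (the state's commercial forage rate for non-willful incidents, doubled for willful and tripled for repeated willful, with agency expenses added at the willful levels) can be sketched as follows. The forage rate, AUM count, and expense figures are hypothetical, chosen only to show the arithmetic.

```python
# Illustrative sketch of BLM's tiered penalties as described above; the
# dollar figures are hypothetical, not actual rates from the report's table 2.

MULTIPLIERS = {"non-willful": 1, "willful": 2, "repeated willful": 3}

def blm_penalty(forage_rate, aums, level, agency_expenses=0.0):
    """Bill amount for `aums` animal unit months at the state's commercial
    forage rate, scaled by violation level. Reasonable agency expenses
    (and any land damages, not modeled here) are recoverable only for
    willful and repeated willful incidents."""
    bill = forage_rate * aums * MULTIPLIERS[level]
    if level != "non-willful":
        bill += agency_expenses
    return bill

# 10 AUMs of unauthorized use at a hypothetical $15/AUM forage rate:
print(blm_penalty(15.0, 10, "non-willful"))             # 150.0
print(blm_penalty(15.0, 10, "willful", 200.0))          # 500.0
print(blm_penalty(15.0, 10, "repeated willful", 200.0)) # 650.0
```

Even this rough arithmetic shows why BLM's state-rate-based bills (here $150 and up for 10 AUMs) can substantially exceed a Forest Service charge of $2.51 or less per head month for comparable use.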
The Forest Service recognized that its formula for calculating its penalty for unauthorized grazing was problematic in grazing year 2009 when the formula produced a negative value. A Forest Service official told us that the agency is considering options for revising the penalty as part of its ongoing update of grazing guidance, but the update has not been completed because of higher priorities. The Forest Service does not have a time frame for when the penalty for unauthorized grazing will be revised, according to agency officials. Until the Forest Service revises its penalty for unauthorized grazing to reflect current forage rates, similar to BLM’s, the penalty has limited value as a deterrent to unauthorized grazing. BLM field staff generally told us that willful and repeated willful unauthorized grazing incidents are rare; most unauthorized grazing is incidental and non-willful. However, staff we interviewed from 3 of the 13 BLM field offices who had encountered willful and repeated willful unauthorized grazing incidents said that such violations are difficult to support because staff must prove that the unauthorized grazing was the fault of the livestock owner and show that a record of prior willful violations existed for repeat offenses, per agency regulations and policy. As mentioned previously, because BLM staff generally prefer informal resolution for most incidents of unauthorized grazing, there may not be a paper trail documenting repeated incidents. In some offices this was exacerbated by staff turnover. Specifically, field staff we interviewed from 7 of the 22 offices told us that institutional knowledge is lost when staff depart who are familiar with the extent and circumstances of unauthorized grazing that was resolved informally. As a result, BLM staff told us that they generally only pursue willful or repeated willful violations for the most egregious, long-term cases of unauthorized grazing. 
Agency regulations also direct BLM staff to collect reasonable agency expenses for resolving willful and repeated willful incidents, but field staff told us that they have discretion in determining what is reasonable and therefore may not charge violators for agency expenses. For example, field staff said that they may agree to waive the expenses if they were insignificant or to make it less likely that the permittee will appeal the decision. Our review of willful and repeated willful unauthorized grazing incidents in BLM’s grazing program database from grazing years 2010 through 2014 found that the administrative expenses were billed to violators in 98 out of 164, or 60 percent, of such incidents. We reviewed the paper file documentation for BLM’s 24 willful and 3 repeated willful unauthorized grazing cases in grazing year 2014, and found that in most cases field staff had documented how they determined the appropriate penalties and expenses to bill. Agency staff and cattlemen’s association representatives told us that the agencies’ policies for modifying permits, such as reducing the number of permitted livestock for an allotment or suspending or canceling the permits, are likely to be the greatest deterrent to unauthorized grazing, in part because they directly affect the permittees’ livelihoods. Field staff we interviewed from 18 of the 22 offices said that permit modifications are always or usually an effective deterrent. In practice, field staff from 19 of the 22 said that they generally view this as a last resort penalty and seldom modify, suspend, or cancel permits for unauthorized grazing in part because the warning is usually sufficient to obtain compliance. In one example, Forest Service staff at an office in Nevada said they had canceled only one permit, for a permittee with a particularly long record of persistent unauthorized grazing. 
Staff said that a warning about the potential for permit action is generally enough to achieve immediate compliance in almost all detected unauthorized grazing cases involving permittees. According to agency field staff, misdemeanor criminal citations are primarily issued to nonpermittees for unauthorized grazing and can be an effective deterrent. However, law enforcement officers and program staff we interviewed from 5 of the 22 offices told us that federal attorneys may choose not to prosecute citations or the courts may lower the penalties, which may diminish the effectiveness of this deterrent. For example, a Forest Service law enforcement officer in Utah said that circuit courts typically lower penalties to a couple hundred dollars or less, which is below the cost of buying forage elsewhere. Furthermore, law enforcement officers and program staff we interviewed from 7 of the 22 offices told us that when on patrol the officers are generally focused on higher priorities, such as public safety. In addition, staff from 7 of the 22 offices we interviewed said that the officers usually do not have knowledge of permit conditions and therefore do not know when livestock should or should not be in a certain location.

BLM and Forest Service regulations do not provide field staff of either agency with the flexibility to follow their preferred practice of informally resolving unauthorized grazing incidents with no written notice of violation and no penalty for unauthorized grazing. We recommended in 1990 that BLM either ensure that all penalties are assessed for non-willful unauthorized grazing, as provided for in its regulations, or amend its regulations to establish a procedure for informal resolution. The agency amended its regulations to add the option for nonmonetary resolution of certain non-willful incidents, but the amendment did not remove the requirement for a written notice of violation. 
Forest Service regulations do not specifically require a written notice of violation but require that a penalty be determined; nonmonetary resolution is not an option. As a result, informal resolution with no written notice and no penalty—the preferred practice for field staff in dealing with unauthorized grazing—is not allowed for under either agency’s regulations. While not provided for under the regulations, most agency field staff told us that informal resolution is the most effective way to achieve the objective of quickly resolving non-willful unauthorized grazing with minimal conflict, and is the most efficient use of their time given multiple higher-priority responsibilities. As discussed in federal internal control standards, program operations are effective and efficient in achieving agency objectives when they produce the intended results and minimize the waste of resources. Management is responsible for designing the policies and procedures to fit an entity’s circumstances and building them in as an integral part of the entity’s operations. BLM and Forest Service officials stated that handling incidental unauthorized grazing informally is necessary and effective because they have limited staff and permittees tend to be largely compliant. However, the agencies have not established in regulations procedures for such informal resolution or alternatively taken steps to ensure that staff comply with existing regulations as written. By amending the regulations to establish procedures for the informal resolution of violations of the grazing regulations at the local level, agency management could achieve the objective of quickly resolving incidental unauthorized grazing with minimal conflict, in a manner consistent with its regulations and with the most efficient use of the agency’s resources. 
Alternatively, rather than amending their existing regulations to match their practices, the agencies could change their practices to comply with their existing regulations. BLM officials told us that the agency has faced challenges in revising its grazing regulations, including the incorporation of our 1990 recommendations; the most recent revision was enjoined by the court from implementation in 2006 after it was challenged by interest groups. The Code of Federal Regulations currently contains the enjoined regulations; agency officials plan to replace these regulations with the regulations that were in effect prior to the court’s action but have not set a date for completing the process. Furthermore, BLM has not updated its Unauthorized Grazing Use Handbook since 1987—in part because of the enjoined regulations— and it contains guidance that differs in some cases from the existing regulations. For example, the handbook does not reference the option of nonmonetary settlement for certain non-willful unauthorized grazing incidents that is contained in the regulations. In addition, the handbook description of penalties differs from that in the regulations for willful violations—the regulations state that the rate is twice the value of forage consumed, while the handbook states that the rate is three times the value of forage consumed. Furthermore, the regulations state that the value of damages to public lands shall be included in settlement for willful and repeated willful violations, and the handbook states generally that the value of damages “must be charged,” without specifying which violations must incur the charge. As a result, staff using the handbook may not be consistently following the regulations. Federal internal control standards call for agency management to periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. 
Without revising the agency's grazing guidance to make it consistent with the grazing regulations, BLM does not have reasonable assurance that its staff consistently apply the grazing regulations.

BLM and the Forest Service face the daunting task of effectively managing grazing on millions of acres of remote rangeland with a limited number of field staff who have multiple responsibilities and competing priorities. Given the large number of acres and permits managed under the agencies' programs, the number of unauthorized grazing incidents that are formally reported is relatively small, and agency field staff most often consider the reportedly larger number of informally resolved incidents (which are not recorded in any database or consistently documented in paper case files) to be incidental and quickly remedied with minimal impact on range resources. By amending the regulations to establish procedures for the informal resolution of non-willful violations of the grazing regulations at the local level, agency management could achieve the objective of quickly resolving incidental unauthorized grazing with minimal conflict, in a manner consistent with its regulations and with the most efficient use of the agency's resources. Alternatively, rather than amending their existing regulations to match their practices, the agencies could change their practices to comply with their existing regulations. While it may be reasonable for the agencies to handle incidental unauthorized grazing informally, given their limited staff and a largely compliant pool of permittees, it is important that each agency's practices accurately reflect its grazing regulations to ensure clarity and consistency in application for staff and permittees. 
Furthermore, without recording the incidents of unauthorized grazing that are informally resolved, neither agency has complete information on the extent and frequency of unauthorized grazing for tracking patterns of any potential repeat offenders. In addition, until BLM revises its grazing guidance to make it consistent with the grazing regulations, the agency does not have reasonable assurance that its staff consistently apply the regulations. Finally, until the Forest Service revises its unauthorized grazing penalty structure to reflect the current value of forage, similar to BLM's, the deterrent effect of the penalty will be limited, and some ranchers will continue to view the penalty as a cost of doing business.

To improve the effectiveness of BLM's efforts to track and deter unauthorized grazing, we recommend that the Secretary of the Interior direct the Director of BLM to take the following three actions:

- amend the regulations on unauthorized grazing use—43 C.F.R. Subpart 4150 (2005)—to establish a procedure for the informal resolution of violations at the local level, or follow the existing regulations by sending a notice of unauthorized use for each potential violation as provided by 43 C.F.R. § 4150.2(a) (2005);
- record all incidents of unauthorized grazing, including those resolved informally; and
- revise the agency's Unauthorized Grazing Use Handbook to make it consistent with 43 C.F.R. pt. 4100 (2005).

To improve the effectiveness of the Forest Service's efforts to track and deter unauthorized grazing, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following three actions:

- amend the regulations on range management—36 C.F.R. pt. 222—to provide for nonmonetary settlement when the unauthorized or excess grazing is non-willful and incidental, or follow the existing regulations by determining and charging a grazing use penalty for all unauthorized and excess use when it is identified as provided by 36 C.F.R. § 222.50(a) and (h);
- record all incidents of unauthorized grazing, including those resolved informally; and
- adopt an unauthorized grazing penalty structure that is based, similar to BLM's, on the current commercial value of forage.

We provided the Departments of Agriculture and the Interior with a draft of this report for their review and comment. In its written comments, reproduced in appendix IV, the Forest Service generally concurred with our findings and recommendations. In its comments, the Forest Service stated that it has taken preliminary steps toward updating its guidance to field units, including guidance for unauthorized grazing penalties similar to BLM's. In its written comments, reproduced in appendix V, the Department of the Interior generally concurred with our findings and recommendations. In its comments, the Department of the Interior stated that it will revise its guidance to better describe procedures for following existing regulations, to provide procedures for documenting and recording all unauthorized grazing incidents, and to ensure that its guidance is consistent with its regulations. The Department of the Interior also provided technical comments that were incorporated, as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Agriculture and the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. 
Our objectives were to (1) describe what is known about the frequency and extent of unauthorized grazing, and its effects, and (2) examine the Bureau of Land Management’s (BLM) and the U.S. Forest Service’s efforts to detect, deter, and resolve unauthorized grazing. To describe the frequency and extent of unauthorized grazing, we analyzed the agencies’ unauthorized grazing data, and to describe the effects of such grazing, we reviewed documentation, interviewed agency officials and stakeholder group representatives, and conducted site visits at agency field office locations. We collected data from BLM’s and the Forest Service’s range management, financial, and law enforcement databases on the frequency and extent of unauthorized grazing for grazing years 2010 through 2014, the most recent and complete data available at the time of our review. We also collected information on grazing acres, usage, and permits, which came from different years depending on what was the most recently available at the time of our request. For BLM, we obtained range management data from its Rangeland Administration System; financial data on unauthorized grazing bills from its Collection and Billing System; and law enforcement data from its Incident Management, Analysis, and Reporting System. For the Forest Service, we obtained range management and billing data from its INFRA system and law enforcement data from its Law Enforcement and Investigations Management Attainment Reporting System. We assessed the data provided by the agencies based on our review of database system documentation and discussions with agency database stewards and found the data to be sufficiently reliable for our purposes. We conducted in-person and telephone interviews with staff in 22 of the 218 agency field office locations in eight western states where most such grazing had occurred. 
We selected the 22 offices from among the agency field offices that had the highest numbers of unauthorized grazing incidents or that had been recommended by stakeholders. From the 22 selected offices, we conducted site visits to 6 offices located in Nevada and Wyoming to interview agency range management and law enforcement staff about the extent of unauthorized grazing and the agencies’ policies and practices for addressing it, as well as to review paper case files and observe the effects of unauthorized grazing on federal lands. We also conducted telephone interviews with staff in 16 of the 22 BLM and Forest Service field locations in California, Colorado, Idaho, Nevada, New Mexico, Oregon, and Utah. Our interview results are not generalizable to all agency field office locations and grazing lands and instead are illustrative cases of the office locations reporting the highest numbers of unauthorized grazing incidents. Tables 3 and 4 provide more information about the agency field office locations where we conducted interviews. To obtain the views of interested stakeholders, we conducted interviews with representatives of 11 stakeholder groups, including telephone interviews with cattlemen’s association representatives in California, Colorado, Nevada, New Mexico, and Oregon. We also conducted telephone interviews with representatives of other stakeholders, including Public Employees for Environmental Responsibility, Forest Service Employees for Environmental Ethics, Western Watersheds Project, Wildlands Defense, and others, such as a wild horse advocate. We selected these groups based on information provided by agency officials or other stakeholder groups involved in grazing issues; in one case, we spoke with a stakeholder who contacted us after learning of our review. 
We qualitatively analyzed agency and stakeholder interviews for common themes and patterns to describe how the agencies address unauthorized grazing and the effectiveness of these policies and practices. We coded interviews using qualitative data analysis software that allows organization and analysis of information from a variety of sources. Our coding process involved one independent coder putting information into initial categories and a second independent coder verifying that initial work. The coders discussed and resolved any discrepancies in coding. To examine the agencies’ efforts to detect, deter, and resolve unauthorized grazing, we analyzed federal laws to identify agency requirements for addressing such grazing as well as the agencies’ regulations, policies, and practices. We qualitatively analyzed information obtained in agency and stakeholder interviews for common themes and patterns to describe how the agencies address unauthorized grazing and the effectiveness of their efforts. We compared the agencies’ policies to their practices in the field, compared the policies’ objectives with their outcomes, and assessed the internal controls for the policies and practices. We also compared the agencies’ policies and practices to our recommendations in our December 1990 report to evaluate whether those recommendations have made or could make improvements in the detection and deterrence of unauthorized grazing. We conducted this performance audit from May 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
This appendix provides detailed information on grazing permits, leases, fees, and penalties on lands managed by the Bureau of Land Management (BLM), within the Department of the Interior, and the U.S. Forest Service, within the Department of Agriculture. Specifically, the information includes acres available for grazing on lands the agencies manage, the animal unit months (AUM) approved for grazing, and the AUMs billed for BLM and the Forest Service; BLM and Forest Service permits and leases by size; and information on BLM and Forest Service grazing fees for permitted grazing and penalties for unauthorized grazing. The agencies are in two different departments and their grazing programs are covered by different laws and regulations. Therefore, the agencies maintain their own databases and, in some cases, track different data elements. As a result, consistent information was not always available from the two agencies, and in some cases the information provided was from different years depending on what was the most recently available at the time of our request. This section provides an overview of the most recent information available at the time of our review on grazing that occurred on BLM and Forest Service lands. The acres of BLM and Forest Service land available for grazing each year can change, depending on the results of environmental assessments conducted on grazing allotments, and the amount of grazing that is allowed each year can change, depending on annual assessments of forage and range conditions. Both agencies measure the number of acres of their lands available for grazing by allotment each year, but the two agencies use different terms to measure the amount of grazing. BLM calls this amount active or authorized, and the Forest Service calls this amount permitted. 
Similarly, BLM refers to the amount of grazing that it bills for annually—which can vary from the amount it authorizes because of range or climate conditions—as billed, and the Forest Service refers to this amount of grazing as authorized. We use “AUMs approved” to refer to the amounts of grazing authorized by BLM and permitted by the Forest Service and “AUMs billed” to refer to the amount of grazing for which BLM billed ranchers and the amount of grazing authorized each year on Forest Service lands. Table 5 shows the acres and AUMs approved as of January 2016 and AUMs grazed for BLM’s field offices in fiscal year 2014, the most recent year available. Table 6 shows the acres of grazing available, approved AUMs, and billed AUMs in grazing year 2015 for Forest Service administrative offices and grasslands. The data on acres include acres in active and vacant allotments but not in allotments that have been closed that are not available for grazing. The data on AUMs include data that the Forest Service calls head months. Unlike BLM, the Forest Service uses two methods to tally the amount of grazing that occurs—AUMs and head months. The agency uses AUM to refer to the amount of forage consumed by different types of livestock, while it uses the term head months to refer to the number of livestock (head) that are grazed and that are subject to billing. We used the Forest Service head month data because they are equivalent to the BLM’s data on billed AUMs, but we used AUM to simplify the comparison with BLM’s grazing data. Because the number of AUMs per permit or lease can vary greatly, the number of AUMs controlled by permittees or lessees also varies greatly. Tables 7 through 9 show the number of BLM and Forest Service permits and leases, and AUMs, by permit size. Multiple permits or leases may be contained on a single allotment, just as one permit or lease may span multiple allotments. 
In addition, several ranchers may share one permit or lease, just as one rancher may possess multiple permits or leases; therefore, the number of permits and leases does not necessarily correlate to the total number of ranchers. Table 7 shows the size of BLM permits and leases, using approved AUMs as of December 2015. The data do not include permits and leases with less than two AUMs. Table 8 shows Forest Service permits for cattle for regions with lands in western states (regions 1 through 6). The data do not include horses or other livestock and do not include permits with fewer than two AUMs of grazing for cattle. Forest Service sheep permits are shown in table 9. For the purposes of conversion, five sheep equal one AUM. In addition to the sheep, an insignificant number of horses are included in the data because, in some cases, permittees may keep a horse for herding the sheep. Historically, BLM and Forest Service permitted grazing fees were established to achieve different objectives—to recover administrative expenses or to reflect livestock prices, respectively—but the agencies began using the same approach to setting fees in 1969. Over the years, the agencies, as well as outside entities, have conducted numerous studies attempting to establish a permitted grazing fee that meets the objectives of multiple parties. As of March 2016, the permitted grazing fee for BLM and the Forest Service in 16 western states is based on a formula which incorporates factors that take into account ranchers’ ability to pay and was established in 1978 based on studies conducted in the 1960s and 1970s. In 2016, the permitted grazing fee for lands managed by BLM and the Forest Service in 16 western states was $2.11 per AUM—or the amount of forage needed to sustain a cow and her calf for 30 days. 
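The head-month and per-AUM billing conventions described above reduce to simple arithmetic. The following is a minimal Python sketch, not agency software; the permit figures in the example are hypothetical, while the five-sheep-per-AUM conversion and the $2.11 fee come from the report.

```python
def sheep_head_months_to_aums(head_months):
    """Convert sheep head months to AUMs; five sheep equal one AUM."""
    return head_months / 5

def permitted_grazing_bill(aums_billed, fee_per_aum=2.11):
    """Bill for permitted grazing; $2.11 was the 2016 fee per AUM
    in the 16 western states."""
    return round(aums_billed * fee_per_aum, 2)

# Hypothetical permit: 500 sheep head months = 100 AUMs, billed $211.00.
aums = sheep_head_months_to_aums(500)
bill = permitted_grazing_bill(aums)
```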
This permitted grazing fee is set annually according to a formula that was established in the Public Rangelands Improvement Act of 1978, extended indefinitely by Executive Order 12,548, and incorporated into the agencies' regulations. The formula is as follows:

Fee = $1.23 x (FVI + BCPI - PPI) / 100

where:

$1.23 = the base value, or the difference between the costs of conducting ranching business on private lands, including any grazing fees charged, and on public lands, not including grazing fees. The costs were computed in a 1966 study that included 10,000 ranching businesses in the western states.

FVI = Forage Value Index, or the weighted average estimate of the annual rental charge per head per month for pasturing cattle on private rangelands in 11 western states (Arizona, California, Colorado, Idaho, Montana, New Mexico, Nevada, Oregon, Utah, Washington, and Wyoming) divided by $3.65 per head month (the private grazing land lease rate for the base period of 1964-68) and multiplied by 100.

BCPI = Beef Cattle Price Index, or the weighted average annual selling price for beef cattle (excluding calves) in the 11 western states divided by $22.04 per hundredweight (the beef cattle price per hundred pounds for the base period of 1964-68) and multiplied by 100.

PPI = Prices Paid Index, for selected components from the Department of Agriculture's National Agricultural Statistics Service's Index of Prices Paid by Farmers for Goods and Services, adjusted by different weights (in parentheses) to reflect livestock production costs in the western states: fuels and energy (14.5), farm and motor supplies (12.0), autos and trucks (4.5), tractors and self-propelled machinery (4.5), other machinery (12.0), building and fencing materials (14.5), interest (6.0), farm wage rates (14.0), and farm services (cash rent) (18.0).

The Public Rangelands Improvement Act of 1978 limited the annual increase or decrease in the resulting fee to 25 percent. 
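Read literally, the formula and its limits are straightforward to compute. Below is a minimal Python sketch, not the agencies' implementation, of the permitted fee with the 25 percent cap and the $1.35 floor set by Executive Order 12,548, alongside the Forest Service's penalty variant, which uses the same indexes with a $3.80 base but no cap or floor. The index values used in the example are hypothetical.

```python
def permitted_fee(fvi, bcpi, ppi, prior_year_fee):
    """Permitted grazing fee per AUM under the 1978 act's formula."""
    fee = 1.23 * (fvi + bcpi - ppi) / 100
    # The act limits the annual increase or decrease to 25 percent.
    fee = max(min(fee, prior_year_fee * 1.25), prior_year_fee * 0.75)
    # Executive Order 12,548 sets a floor of $1.35 per AUM.
    return round(max(fee, 1.35), 2)

def forest_service_penalty(fvi, bcpi, ppi):
    """Forest Service unauthorized grazing penalty: same indexes but a
    $3.80 base, with no 25 percent cap and no floor, so the result can
    be negative when PPI outpaces FVI + BCPI (as happened in 2009)."""
    return round(3.80 * (fvi + bcpi - ppi) / 100, 2)
```

With each index at its base-period level of 100, the raw fee equals the $1.23 base and the $1.35 floor binds; the penalty variant, lacking a floor, goes negative whenever PPI exceeds FVI + BCPI, which is the situation the Forest Service encountered in grazing year 2009.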
It also established the fee formula for a 7-year trial period and required that the effects of the fee be evaluated at the end of that period. Although the permitted grazing fee formula under the act expired in 1986, the use of the fee formula was extended indefinitely by Executive Order 12,548 and incorporated into the agencies' regulations. The executive order requires the Secretaries of the Interior and Agriculture to establish permitted grazing fees according to the act's formula, including the 25 percent limit on increases or decreases in the fee. In addition, the order established that the permitted grazing fee should not be lower than $1.35 per AUM. To calculate its penalty for unauthorized grazing, the Forest Service uses the same formula as for the permitted fee but replaces the base value of $1.23 with a higher base value of $3.80. In addition, the Forest Service does not apply the 25 percent limit on the annual increase or decrease in the penalty and does not set a lower limit on the penalty as it does with the permitted fee formula (see table 10). In contrast, BLM bases its penalties for unauthorized grazing on the state-by-state commercial value of forage. According to the National Agricultural Statistics Service, based on the average private grazing land lease rate per AUM, the state-by-state commercial value of forage in western states ranged from $9 to $39 in grazing year 2016.

This appendix provides detailed information on the extent and frequency of unauthorized grazing incidents and charges recorded in the Bureau of Land Management's (BLM) and the U.S. Forest Service's range management and law enforcement databases, for grazing years 2010 through 2014. BLM, within the Department of the Interior, and the U.S. Forest Service, within the Department of Agriculture, are in two different departments, and their grazing programs are covered by different laws and regulations. 
Therefore, the agencies maintain their own databases and, in some cases, track different data elements. As a result, consistent information was not always available from the two agencies. BLM's range management database contained records of 433 unauthorized grazing incidents that occurred in grazing years 2010 through 2014 and were settled and billed by December 28, 2015 (the date the data were queried) (see table 11). Incidents not billed by December 28, 2015, are not included, nor are incidents that were resolved nonmonetarily. The number of incidents ranged from 76 in Idaho to 5 in Arizona. The bills identified for the 433 incidents in BLM's range management database included 466 charges for different types of unauthorized grazing: non-willful (unintentional), willful (intentional), and repeated willful, each of which is charged at a different rate (see table 12). The total number of charges (466) exceeds the total number of incidents settled and billed (433) because each bill can include charges for more than one type of unauthorized grazing and for more than 1 grazing year. Non-willful unauthorized grazing was the most common type in grazing years 2010 through 2014, accounting for 299—or 64 percent—of the charges recorded; willful unauthorized grazing accounted for 31 percent of the total, and repeated willful for 5 percent. BLM's unauthorized grazing bills included charges for unauthorized grazing penalties; administrative charges for the costs of the agency's response; and other charges, fees, and interest. As of March 1, 2015, BLM had billed about $441,000 for unauthorized grazing charges in grazing years 2010 through 2014 (see table 13). BLM had collected about $426,000 of that amount; after adjustments, about $8,000 of the charges remained due. BLM's range management database contained records of nearly 53,000 grazing compliance inspections performed by agency field staff during grazing years 2010 through 2014 (see table 14). 
Of the nearly 53,000 inspections, about 1,500—or 3 percent—identified possible noncompliance. Possible noncompliance means that noncompliance was suspected but not yet confirmed by the individual completing the compliance inspection and was identified for further investigation. Therefore, some inspections recorded as a finding of possible noncompliance may not, upon further investigation, have resulted in a finding of a violation. BLM's law enforcement database contained records of 426 incidents where formal documentation, such as an incident report (record of observation), warning notice, or violation notice, was prepared by a law enforcement officer and entered into the database in grazing years 2010 through 2014 (see table 15). The number of incidents ranged from 71 in Wyoming to 17 in Arizona and Utah. From grazing years 2010 through 2014, the year with the most incidents recorded in the law enforcement database was 2013; 123 incidents were recorded, or nearly 30 percent of the 426 total incidents. According to agency officials, some of the data may include incidents that were miscoded as grazing related when entered into the law enforcement database, and a small proportion of the incidents include violations of grazing permits other than unauthorized grazing, such as supplementing the existing forage with additional livestock feed. The Forest Service's range management database contained records of 190 unauthorized grazing incidents in grazing years 2010 through 2014 (see table 16). The number of incidents is based on the number of bills issued and also includes some unauthorized grazing incidents confirmed by Forest Service field offices as having occurred where no bill was issued. Additional incidents may have occurred that were not billed and were not entered in the Forest Service database. The number of incidents ranged from 65 in the Southwestern Region to 2 in the Southern Region. 
The 190 incidents identified primarily by bills in the Forest Service's range management database included charges for two types of unauthorized grazing incidents: excess use (by a permittee) and unauthorized use (by a nonpermittee) (see table 17). Excess use by permittees was the most common incident type in grazing years 2010 through 2014, accounting for 173—or 91 percent—of the incidents recorded; unauthorized use accounted for 9 percent of the total. The Forest Service's unauthorized grazing bills included charges for excess use and unauthorized use. The Forest Service collected a total of about $24,000 from these charges in grazing years 2010 through 2014: nearly $18,000 from excess use by permittees and about $6,000 from unauthorized use by nonpermittees (see table 18). The amount collected includes credits used by livestock owners to pay excess or unauthorized use charges. The Forest Service's law enforcement database contained records of 428 incidents where formal documentation, such as an incident report (record of observation), warning notice, or violation notice, was prepared by a law enforcement officer and entered into the database in grazing years 2010 through 2014 (see table 19). The number of incidents ranged from 102 in the Intermountain Region to 24 in the Pacific Northwest and Eastern Regions. From grazing years 2010 through 2014, the year with the most unauthorized grazing incidents recorded in the Forest Service's law enforcement database was 2013; 100 incidents were recorded, or about 23 percent of the 428 total incidents (see table 20).

In addition to the contact named above, Jeffery D. Malcolm (Assistant Director), Brad C. Dobbins, Karen (Jack) Granberg, and Katherine M. Killebrew made key contributions to this report. Important contributions were also made by Kevin S. Bray, Martin (Greg) Campbell, Elizabeth Martinez, Alana R. Miller, and Cynthia M. Saunders.
|
BLM, within the Department of the Interior, and the U.S. Forest Service, within the Department of Agriculture, are responsible for managing most of the nation's public rangelands. Ranchers must obtain permits or leases from the agencies to graze livestock on federal lands. Unauthorized grazing may take various forms, such as grazing more livestock than permitted or grazing without a permit. GAO was asked to examine unauthorized grazing. This report (1) describes what is known about the frequency and extent of unauthorized grazing and its effects, and (2) examines the agencies' efforts to detect, deter, and resolve unauthorized grazing. GAO analyzed 5 years of the most recent data available on incidents where the agencies had taken formal action on unauthorized grazing (grazing years 2010 through 2014); examined federal laws and agency regulations, policies, and practices; and interviewed, by telephone or during site visits, officials in a nongeneralizable sample of 22 agency field offices in eight western states where most unauthorized grazing had occurred.

The frequency and extent of unauthorized grazing on Bureau of Land Management (BLM) and U.S. Forest Service lands are largely unknown because, according to agency officials, the agencies prefer to handle most incidents informally (e.g., with a telephone call) and do not record them. The agencies' databases contained information on nearly 1,500 incidents of unauthorized grazing where formal action was taken by the agencies' range program or law enforcement staff for grazing years 2010 through 2014 (March 1 to February 28). Unauthorized grazing incidents were recorded in the agencies' databases when the agencies billed a penalty for unauthorized grazing or prepared a law enforcement report. However, agency staff told GAO that they handle most incidents informally—their preferred practice—and do not record them in databases or consistently in paper files because, in part, they do not consider it a priority.
As a result, the agencies have incomplete information on the extent of unauthorized grazing. Federal internal control standards call for clear documentation of all transactions and other significant events. Until the agencies require that all incidents of unauthorized grazing be recorded, including those incidents resolved informally, BLM and the Forest Service will not have a complete record of unauthorized grazing incidents with which to identify any potential pattern of violations. GAO found that the agencies' preferred practice of informally resolving unauthorized grazing is not provided for under agency regulations. Specifically, the regulations do not provide the flexibility to resolve incidents informally without a written notice of violation (in the case of BLM) and without charging unauthorized grazing penalties (in the case of the Forest Service). Most agency staff told GAO that informal resolution is the most effective way to resolve non-willful unauthorized grazing (e.g., when livestock stray outside of their permitted area and graze in an unauthorized area). As discussed in federal internal control standards, program operations are effective and efficient in achieving agency objectives when they produce the intended results and minimize the waste of resources. By amending their regulations to establish a procedure for the informal resolution of minor infractions, the agencies could achieve the objective of efficiently resolving such incidents with minimal conflict within their regulatory authority. Alternatively, rather than amending their existing regulations to match their practices, the agencies could change their practices to comply with their existing regulations. In addition, BLM and the Forest Service undertake similar efforts to detect and deter unauthorized grazing, such as conducting compliance inspections and assessing penalties for unauthorized grazing, but agency staff said that such efforts have limited effectiveness.
For example, most of the Forest Service staff GAO interviewed said that unauthorized grazing penalties are too low to act as an effective deterrent. Under current policy, the Forest Service's unauthorized grazing penalty formula calculated a negative number or a number less than the permitted grazing fee for grazing years 2009 through 2012. By adopting an unauthorized grazing penalty structure that is, like BLM's, based on the current price of private forage, the Forest Service could make its penalty a more effective deterrent to such grazing. GAO is making six recommendations, including that the agencies take actions to record all incidents of unauthorized grazing, that they amend regulations to reflect their practices for resolving such incidents or comply with their regulations, and that the Forest Service revise its unauthorized grazing penalty structure. The agencies generally agreed with GAO's findings and recommendations.
|
USPS is an independent establishment of the executive branch mandated by law to provide postal services to “bind the nation together through the personal, educational, literary, and business correspondence of the people.” Established by the Postal Reorganization Act of 1970, USPS is a vital part of the nation’s communications network, delivering more than 200 billion pieces of mail each year. USPS is required to provide “prompt, reliable, and efficient services to patrons in all areas” and “postal services to all communities,” including “a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining.” In determining all policies for postal services, USPS is mandated to “give the highest consideration to the requirement for the most expeditious collection, transportation, and delivery of important letter mail.” Also, in selecting modes of transportation, USPS is mandated to “give highest consideration to the prompt and economical delivery of all mail.” More generally, USPS is mandated to provide adequate and efficient postal services that meet the needs of different categories of mail and mail users. USPS has designated improving service as one of its four goals in its Strategic Transformation Plan. USPS’s strategy to improve service is to “provide timely, reliable delivery, and improved customer service across all access points.” Specifically, USPS plans to improve the quality of postal services by continuing to focus on the end-to-end service performance of all mail. The quality of mail delivery service has many dimensions, including the delivery of mail to the correct address within a time frame that meets standards USPS has established for timely delivery. USPS also plans to ensure that postal products and services meet customer expectations and that all customer services and forms of access are responsive, consistent, and easy to use. 
USPS has long recognized the importance of customer satisfaction and measures the satisfaction of its residential and business customers on a quarterly basis. USPS reports that its customer satisfaction measurement, which is conducted by the Gallup Organization, provides actionable information to USPS managers by identifying opportunities to improve overall customer satisfaction. In addition to gauging overall customer satisfaction, USPS measures customer satisfaction related to specific postal functions such as mail delivery and retail service. As USPS recognizes, dissatisfied customers can seek and find alternatives to using the mail. USPS faces growing competition from electronic alternatives to mailed communications and payments as well as private delivery companies. In this challenging environment, establishing and maintaining consistently high levels of delivery service are critical to success. Recognizing the importance of the timely delivery of mail, USPS has integrated performance targets and results for some types of mail into its performance management system. This system is used to establish pay-for-performance incentives for postal management employees. As we have reported, high-performing organizations use effective performance management systems as a strategic tool to drive change and achieve desired results. Among the key practices used is aligning individual performance expectations with organizational goals by seeking to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. Further, high-performing organizations often must fundamentally change their cultures so that they are more results oriented, customer focused, and collaborative in nature. As we have reported, the benefit of collecting performance information is only fully realized when this information is actually used by managers to make decisions oriented toward improving results.
Performance information can be used to identify problems and take corrective action; develop strategy and allocate resources; recognize and reward performance; and identify and share effective approaches. Practices that can contribute to greater use of performance information include demonstrating management commitment; aligning agencywide goals, objectives, and measures; improving the usefulness of performance information; developing capacity to use performance information; and communicating performance information clearly and effectively. Some USPS standards for timely mail delivery are inadequate because of limited usefulness and transparency. In general, these standards have not kept up with changes in the way that USPS and mailers prepare and process mail for delivery. Outdated standards are unsuitable as benchmarks for setting realistic expectations for timely mail delivery, measuring delivery performance, or improving service, oversight, and accountability. According to USPS, service standards represent the level of service that USPS strives to provide to customers. These standards are considered to be one of the primary operational goals, or benchmarks against which service performance is to be compared in measurement systems. USPS has established standards for the timely delivery of each type of mail; these specify the maximum number of days for “on-time” delivery based on the time of day, the location at which USPS receives the mail, and the mail’s final destination. For example, USPS standards for 1-day delivery require the mail to be received by a specified cutoff time on the day that the mail is accepted, which varies depending on geographic location and where the mail is deposited (e.g., in a collection box, at a post office, or at a mail processing facility). In most cases, 1-day mail deposited before the cutoff time is considered to be delivered on time if it is delivered on the next delivery day, which generally excludes Sundays and holidays. 
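As a rough illustration of the 1-day rule described above, the sketch below checks whether a mail piece counts as delivered on time. The 5:00 p.m. cutoff and the holiday set are placeholder assumptions for illustration only, not actual USPS values, which vary by geographic location and deposit point.

```python
from datetime import date, time, timedelta

# Placeholder assumptions, not actual USPS values:
CUTOFF = time(17, 0)           # hypothetical 5:00 p.m. collection cutoff
HOLIDAYS = {date(2006, 7, 4)}  # hypothetical holiday set

def is_delivery_day(d):
    """Delivery days generally exclude Sundays and holidays."""
    return d.weekday() != 6 and d not in HOLIDAYS

def next_delivery_day(d):
    """First delivery day after date d."""
    d += timedelta(days=1)
    while not is_delivery_day(d):
        d += timedelta(days=1)
    return d

def on_time_1day(accepted_date, accepted_time, delivered_date):
    """A 1-day piece deposited before the cutoff is due on the next
    delivery day; deposited after the cutoff, the clock starts on the
    next delivery day instead."""
    if accepted_time <= CUTOFF:
        start = accepted_date
    else:
        start = next_delivery_day(accepted_date)
    return delivered_date <= next_delivery_day(start)
```

For instance, a piece deposited on a Saturday before the cutoff would be due the following Monday, since Sunday is not a delivery day.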
USPS delivery standards vary according to the priority of delivery. Express Mail has the highest priority, followed by Priority Mail, other First-Class Mail, Periodicals, Package Services (e.g., packages sent via Parcel Post), and Standard Mail. Postal officials, including the Postmaster General, told us that differences in postage rates for different types of mail reflect differences in delivery standards and priority. The Postmaster General noted that variability in the delivery standards and timing of delivery is built into USPS’s pricing structure. He noted that lower-priced mail with lower delivery priority receives more variable delivery; this includes mail such as Standard Mail which receives discounts for presorting by ZIP Code and destination entry that is generally closer to where the mail is delivered. For example, USPS can defer the handling of Standard Mail as it moves through its mail processing, transportation, and delivery networks. Thus, some pieces of a large mailing of Standard Mail may be delivered faster than others. The Postmaster General explained that this variability of delivery is consistent with the relatively low rates afforded to mailers of Standard Mail, who pay lower rates than mailers of First-Class Mail. In addition, standards for types of mail within each class can vary. For example, Parcel Select, a type of Package Service, has a faster delivery standard than other Package Services because it is made up of bulk shipments of packages entered into USPS’s system close to the final destination. Delivery standards for each class and type of mail are summarized in table 1 and described in greater detail in appendix II. Some USPS delivery standards lack usefulness—notably, the delivery standards for Standard Mail, Periodicals, and most Package Services mail—because they have not been systematically updated in many years and do not reflect USPS’s operations or intended service. 
These standards are loosely based on distance and have tended to remain static despite changes in USPS networks, operations, and operational priorities. The delivery standards for Standard Mail are outdated. Although delivery standards are supposed to represent the level of delivery service USPS strives to provide to customers, differences between delivery standards and operational policies and practices for delivery service are evident for Standard Mail. For example, USPS operational policies state that Standard Mail entered at the delivery unit, where carriers pick up mail for final delivery, should be delivered in 2 days, whereas the standards call for such delivery in 3 days. Also, depending on mail preparation, such as presorting and destination entry, mail can be delivered faster than the standard. These differences can impede clear communication to mailers concerned with setting realistic expectations for when Standard Mail will be delivered and determining how to maximize the value of their mail. Correctly anticipating when advertising mail will be delivered is important to business planning and profitability. For example:

Local retailers, ranging from department stores to restaurants, need realistic expectations as to when advertising mail will be delivered in order to effectively promote sales and plan for the appropriate level of staffing and inventory. To maximize customer response, retailers send advertising mail so that it will be received shortly before a sale—soon enough for potential customers to plan to shop during the sale, but not so early that they will forget about the sale. Also, if the advertising is delivered far in advance of a weekly sale, it can generate demand that is difficult to meet with available resources.

Catalog companies also need realistic expectations about when catalogs will be delivered in order to plan for call center staffing and inventory.
Thus, reliable and predictable delivery of advertising mail helps businesses efficiently schedule staff and inventory to respond to fluctuations in demand. Anticipating the level of inventory has become more important over time with the trend toward just-in-time inventory that helps minimize storage and financing costs. However, the delivery standards for Standard Mail are not adequate for advertisers to set realistic expectations for mail delivery, in part because these standards do not reflect some operational policies and practices that can lead to mail being delivered faster or slower than the standards call for. Substantial changes have occurred in how mailers prepare Standard Mail and how USPS processes it, but these changes are not reflected in the standards. Today, most Standard Mail is presorted and entered into the postal system close to its destination. The degree of presorting and destination entry alters the amount of handling it receives by USPS and potentially speeds or slows delivery. For example:

Presorting: Beginning in 1979, USPS provided discounts to mailers who reduce USPS’s processing costs by presorting their Standard Mail to the level of carrier delivery routes—discounts extended in 1981 to Standard Mail presorted to the level of individual ZIP Codes. In fiscal year 2005, most Standard Mail was presorted by carrier routes (35 percent) or by individual ZIP Codes or ZIP Codes starting with the first three digits (57 percent). Mail that is presorted by carrier route can move through USPS’s system faster than mail that is presorted by groups of ZIP Codes because it does not need as much handling by USPS. However, the delivery standards for Standard Mail do not take presorting into account.
Destination entry: Starting in 1991, USPS gave destination entry discounts for mailers that deliver their Standard Mail to a postal facility that generally is closer to the mail’s destination, such as the delivery unit facility where carriers pick up their mail or the local mail processing center that forwards mail to these facilities. Mail that is entered at a destination facility is delivered faster than other Standard Mail because it avoids some USPS handling and USPS assigns a low priority to handling Standard Mail. However, the impact of destination entry is not reflected in the delivery standards. For example, the delivery standards continue to call for delivering all Standard Mail in 3 days or more, whereas the Postal Operations Manual states that Standard Mail that mailers enter at delivery units should be delivered in 2 days. USPS also works with mailers to deliver their Standard Mail within a range of dates that they request. Advertising mailers can request that their advertising be delivered within this range—known as the “in home” dates. As mentioned earlier, predictable delivery helps advertisers to plan their resources and inventory. Requesting “in home” dates may result in delivery that is faster or slower than the standard. The Postal Operations Manual states that in such cases, delivery units should attempt to meet the “in home” dates rather than the delivery standards. According to USPS, its delivery standards are supposed to be the benchmark against which delivery performance is compared and to reflect the level of service that USPS strives to provide. In this case, however, the delivery standards for Standard Mail would not be a suitable benchmark for measuring delivery performance, because they do not reflect USPS operations.
USPS provided mailers with guidelines in 2000 that recognized that Standard Mail can be delivered faster than the standard, depending on its level of presorting and on whether the mailers deliver it closer to its destination. The guidelines presented a table for the speed of Standard Mail delivery depending on how the mail was presorted and where it entered the mail processing network. However, USPS did not consider these guidelines to be part of its delivery standards for Standard Mail, and according to USPS, these guidelines are now obsolete. Nevertheless, USPS officials told us that USPS continues to maintain internal guidelines for the desired delivery speed for Standard Mail, depending on its level of presorting and where it enters the postal network. In 1992, 1997, and 1999, various committees composed of USPS officials and mailers recommended that delivery standards be improved for Standard Mail and other types of mail. In 1999, a working group of USPS officials and mailers recommended that the delivery standards for Standard Mail be updated to reflect how it is presorted and where the mail enters the postal system. USPS did not implement these 1999 recommendations and offered no explanation of why it did not. Then, when we met with the Postmaster General in June 2006, he told us that it would be difficult for USPS to update its standards to reflect the wide variety of differences in mail preparation and processing, and that doing so might have an impact on the rates for some types of mail, to which he believes the mailers would object. In contrast, the Association for Postal Commerce (PostCom), a major mailer group, wrote the following to us in March 2006: “It is PostCom’s belief that the development and publication of service standards based on existing USPS operations and networks is a critical first step toward the development of any service performance measurement system.
There is no barrier to moving forward with defining service standards for all classes of mail.” PostCom noted it actively supported the efforts of the 1999 working group, and said its recommendations—which included calling for standards based on existing mail processing and transportation environments, which for bulk mail would also reflect mail preparation and entry point—“largely still apply.” Because outdated delivery standards are an impediment to measuring and improving delivery performance, updating these standards could help increase the value of Standard Mail to businesses that mail advertising. As previously noted, understanding when Standard Mail will be delivered helps mailers send this mail so it will be delivered at what they consider to be the optimum time and helps them to plan for staff and inventory. In addition, updating the delivery standards for Standard Mail would provide an appropriate benchmark for measuring Standard Mail delivery performance. For some of the same reasons as Standard Mail, delivery standards are likewise outdated for most Package Services mail. Delivery standards for most Package Services also date to the 1970s and are generally distance-based. These standards are predicated on USPS’s national network of Bulk Mail Centers (BMCs) that accept and handle packages. USPS told us that the delivery standards for Package Services “are changed infrequently since the BMC network has not been appreciably altered since its inception in the 1970s.” Since the 1970s, USPS has implemented many changes regarding the handling of packages, including discounts for presorting Package Services items to the carrier route or ZIP Code, as well as discounts for destination entry. However, these changes have not been reflected in changes to the Package Services standards. A noteworthy exception involves useful delivery standards that USPS created for a specific type of Package Services mail called Parcel Select, when it was introduced in 1999.
These standards were updated in 2002. USPS’s standards for Parcel Select differentiate speed of delivery by point of entry, e.g., 1 day for entry at the destination delivery facility or 2 days for entry at the mail processing center that forwards the parcels to the delivery facility. These standards were intended to provide an appropriate benchmark for delivery performance measurement in order to facilitate efforts to improve the delivery performance for this mail. USPS subsequently collaborated with officials of the Parcel Shippers Association (PSA) to implement delivery performance measurement for Parcel Select against these standards, and the results are factored into individual pay-for-performance incentives for many USPS managers. Both USPS and PSA officials told us that incorporating delivery performance results into these incentives—which was possible due to useful performance standards and measures—was a primary reason why on-time delivery performance has improved for Parcel Select. They said that as a result of improved delivery performance, Parcel Select has been able to maintain its viability as a low-cost alternative for lightweight packages within the competitive packages market. In this regard, we have also reported that both establishing and maintaining consistently high levels of delivery service are critical to USPS’s success in an increasingly competitive marketplace. Further, we have noted that USPS had lost Parcel Post business to private carriers, who had come to dominate the profitable business-to-business segment of the market because they offered cheaper and faster service. Parcel Select provides destination entry discounts for bulk mailings of Parcel Post. Most of Parcel Select’s volume is tendered to USPS by a handful of third-party consolidators who receive packages from multiple companies and consolidate their volume to enable cost-effective destination entry.
By entering parcels closer to their destination, the consolidators speed delivery and narrow the delivery window. However, prior to measuring and improving the delivery performance of Parcel Select, mailers considered Parcel Select to be a low-cost service with a reputation for low quality delivery. The delivery performance data has been used to identify delivery problems in a timely manner, such as problems in timely delivery of Parcel Select in specific geographic areas, so that corrective action could be taken to maintain and improve delivery performance. USPS actions to improve the performance of Parcel Select are consistent with practices we have reported are used by high-performing organizations: using performance information and performance management systems to become more results oriented, customer focused, and collaborative in nature; identify problems and take corrective action; and improve effectiveness and achieve desired results. As with Standard Mail and most Package Services, delivery standards are outdated for Periodicals that are delivered outside the local area from which they are mailed. The distance-based concept for Periodicals standards has remained the same since the 1980s and does not reflect mailers presorting mail by carrier route or ZIP Code or destination entry of mail at destination facilities. Like Standard Mail, USPS told us that the Periodicals delivery standards are meant to represent the maximum service standard targets for mail that is not presorted. However, the impact of presorting has not been incorporated into the Periodicals delivery standards. In contrast, to USPS’s credit, it has updated its 1-day delivery standards for Periodicals delivered within the local area where they are mailed. Further, it generally updates the standards at the same time for Periodicals and First-Class Mail that originate and destinate in the same local area so that the scope of 1-day delivery remains the same for both types of mail.
Looking forward, USPS plans to change the way its mail processing and transportation networks handle Periodicals mail this summer, which USPS officials said will lead to changes in some Periodicals delivery standards so that they reflect current operations. They said that Periodicals that are moved via ground transportation, which make up a majority of all Periodicals volume, will be combined with First-Class Mail. As a result, these Periodicals should receive comparable handling and faster delivery times than is currently the case. According to Periodicals mailers, inconsistent delivery performance that does not meet customer expectations causes renewal rates to decline and leads to customer service calls that are costly to handle. According to USPS officials, implementation of these planned changes to postal operations and standards can be expected to result in updating many of the specific standards for Periodicals mailed between specific pairs of ZIP Codes. Some of the specific delivery standards for Priority Mail may also need to be updated because they do not reflect USPS’s operations. According to the Deputy Postmaster General, some Priority Mail delivery standards call for on-time delivery of Priority Mail in 2 days, but it is often physically impossible for USPS to meet these standards when that requires moving the mail across the country. As we reported in 1993, officials of the Postal Inspection Service questioned whether Priority Mail could be delivered everywhere within the continental United States within 2 days, which was then the delivery standard. USPS has since established 3-day delivery standards for some Priority Mail, but these standards cover less than 5 percent of Priority Mail volume. USPS officials told us that USPS may make changes to some of the specific Priority Mail standards for mail sent between specific pairs of ZIP Codes so that the standards reflect USPS operations. 
USPS has updated its standards for First-Class Mail over the years with the intent of reflecting its operations. However, questions have been raised in PRC proceedings and advisory opinions about some of the changes. By way of background, when USPS decides on a change in the nature of postal services that will generally affect service on a nationwide or substantially nationwide basis, USPS is required by law to submit a proposal, within a reasonable time frame prior to its effective date, to PRC requesting an advisory opinion on the change. In 1989, USPS submitted a proposal to PRC for an advisory opinion that involved a national realignment of the delivery standards for First-Class Mail. This realignment involved downgrading the delivery standards for an estimated 10 to 25 percent of First-Class Mail volume, so that these standards would reflect actual operations or planned changes to operations. In general, these delivery standards were proposed to be downgraded by reducing the size of 1-day delivery areas, thereby downgrading some mail to 2-day service, and likewise reducing the scope of 2-day delivery, thereby downgrading some mail to 3-day service. USPS also stated that it would make changes to its operations, including moving some First-Class Mail by truck instead of by air, and that it expected to provide more reliable service as a result. PRC advised against adoption of USPS’s proposed national realignment, explaining that its review suggested the realignment may be an excessive reaction to what may be localized problems on a limited scale. PRC questioned if the proposed realignment could bring about significant improvement in delivery service commensurate with its effect on mail users. 
However, PRC agreed that existing delivery standards could not be met in certain areas, such as the New York City metropolitan area, and on that basis, said that some specific localized changes to the service standards to correct anomalies and major problem areas would be a sensible path for USPS to pursue. USPS proceeded to implement a national realignment to its First-Class Mail standards from 1990 to 1992. In 2000 and 2001, USPS again changed many of its First-Class Mail standards in a manner that USPS said would have a nationwide impact on service, including downgrading some standards from 2 days to 3 days in the western United States and upgrading other standards. USPS reported that these changes were intended to provide consistent and timely delivery service for 2-day and 3-day mail. USPS also reported that the changes reflected a general trend toward making 2-day zones more contiguous, more consistent with the “reasonable reach” of surface transportation from each originating mail processing facility, and potentially less dependent on air transportation—which had lacked reliability. USPS did not seek a PRC opinion on these changes in the year before implementation. After a lengthy proceeding regarding the 2000 and 2001 changes, PRC issued an advisory report earlier this year that suggested that USPS reconsider its First-Class Mail standards, stating that the service resulting from the realignment cannot be said to be sufficient to meet the needs of postal patrons in all areas as required by law and that USPS did not consistently adhere to the statutory requirement to give highest consideration to expeditious transportation of important letter mail. PRC urged USPS to give more effective public notice about First-Class Mail delivery standards, such as through Web-site postings and collection box labels. 
More generally, PRC also urged USPS to actively engage the public in major policy decisions and fully inform the public about matters of direct interest that affect USPS operations. PRC said that USPS, as a government monopoly, has a positive obligation to learn the needs and desires of its customers and to structure its products to meet them where doing so is not inconsistent with reasonably feasible and efficient operations. In February 2006, USPS sought a PRC advisory opinion, which is pending, in connection with USPS’s realignment of its mail processing and transportation networks. USPS is currently planning and implementing a nationwide realignment of its mail processing and transportation networks. According to USPS, its long-term operational needs will be met best if its mail processing network evolves into one in which excess capacity is reduced and redundant operations and transportation are eliminated. USPS stated that it is not proposing to change the long-standing delivery standard ranges for any particular mail class; however, any changes to delivery standards that affect the expected delivery times from origin to destination between particular 3-digit ZIP Code pairs will be made incrementally as USPS implements changes to its networks. USPS also stated that the overall magnitude and scope of potential service standard upgrades and downgrades for any particular mail class cannot be known until numerous feasibility reviews have been conducted and operational changes implemented over the next several years. However, USPS stated that it expected that changes to its delivery standards are likely to be most pronounced for First-Class Mail and Priority Mail. USPS has also made changes to its delivery standards for Express Mail to reflect changes in operations. Similar to the delivery standards for First-Class Mail, those for Express Mail were discussed in a PRC proceeding after USPS implemented changes to them.
In April 2001, USPS reduced the scope of the overnight delivery network for Express Mail sent on Saturdays and the eve of holidays. According to USPS, it had contracted with FedEx to provide more reliable air transportation for Express Mail; but, because FedEx provided no service on Saturday or Sunday nights and some federal holidays, USPS changed its delivery plans for mail pieces accepted on Saturdays and the eve of holidays. Earlier this year, PRC issued an advisory report that found the changes to the Express Mail network had affected service on a substantially nationwide basis in 2001. PRC criticized the lack of public notice before the changes were made, but unlike its advisory opinions on changes to First-Class Mail standards, did not criticize the changes that USPS made to its Express Mail standards. Over the past year, the House and Senate have passed postal reform legislation that would clarify USPS’s delivery standards. The House-passed legislation would require USPS to annually report its delivery standards for most types of mail and the level of delivery service provided in terms of speed and reliability. The Senate-passed legislation included more detailed requirements regarding delivery service standards. This bill would require USPS to establish “modern service standards” within 1 year after the bill is enacted. 
These standards would have four statutory objectives:

(1) to enhance the value of postal services to both senders and recipients;

(2) to preserve regular and effective access to postal services in all communities, including those in rural areas or where post offices are not self-sustaining;

(3) to reasonably assure USPS customers of the reliability, speed, and frequency of mail delivery that is consistent with reasonable rates and best business practices; and

(4) to provide a system of objective external performance measurements for each market-dominant product (e.g., mail covered by the postal monopoly) as a basis for measuring USPS’s performance.

In addition, USPS would be required to take into account eight statutory factors in establishing or revising its standards:

(1) the actual level of service that USPS customers receive under any service guidelines previously established by USPS or service standards established under the new statutory system;

(2) the degree of customer satisfaction with USPS’s performance in the acceptance, processing, and delivery of mail;

(3) the needs of USPS customers, including those with physical impairments;

(4) mail volume and revenues projected for future years;

(5) the projected growth in the number of addresses USPS will be required to serve in future years;

(6) the current and projected future costs of serving USPS customers;

(7) the effect of changes in technology, demographics, and population distribution on the efficient and reliable operation of the postal delivery system; and

(8) the policies of Title 39 (i.e., the postal laws) and such other factors as USPS determines appropriate.

Like the House-passed bill, the Senate-passed bill would require USPS to annually report on the speed and reliability of delivery of most types of mail.
In explaining the rationale for these requirements regarding delivery standards and service, sponsors of the Senate bill stated that the new standards would improve service, be used by USPS to establish performance goals, and continue to ensure daily delivery to every address, thereby preserving universal service. A Senate committee report on an earlier version of these requirements stated that they were intended to ensure that the service USPS provides is consistent with the statutory definition of universal service, as well as to preserve and enhance the value of postal products. In this regard, the report expressed concern that USPS may be tempted to erode service quality in an effort to cut costs, and stated that the reporting requirements would provide information to enable the postal regulator and all interested parties to evaluate the provision of service, with the service standards serving as a benchmark for measuring USPS’s performance. Although USPS has recently provided information related to its delivery standards in ongoing PRC proceedings, USPS has not made all of this information easily accessible to all business mailers and the public. As a result, some customers are hindered from making informed decisions about different mailing options with varying rates and service, as well as from assessing USPS’s delivery performance. Although USPS does have a CD-ROM with information about its delivery standards that is freely available to those who are aware of its existence, information about how to order the CD-ROM is not easily accessible on its Web site. The CD-ROM contains delivery standards for some types of mail, such as Standard Mail and Periodicals, that are not available on its Web site. Looking forward, USPS has the opportunity to further expand the accessibility of information on its delivery standards, much as USPS has done to improve the transparency of its financial information in recent years.
For example, in an ongoing PRC proceeding, USPS provided new narrative summaries that explain its detailed standards; these summaries are posted on the PRC Web site, but not on the USPS Web site. USPS’s delivery performance measurement and reporting is inadequate—in part because its delivery performance information is incomplete, since representative measures of delivery performance do not cover most mail, and in part because its reporting of this delivery performance information is deficient (see table 2). USPS tracks some mail pieces for diagnostic purposes, and plans to have more data available as it deploys automated equipment to sort flat-sized mail into the order it is delivered. However, a number of impediments have limited USPS’s ability to track mail. The diagnostic data are not representative and do not amount to delivery performance measurement. Although USPS recently added a section on domestic delivery performance to its Web site, the section does not provide complete performance information for some types of mail. Without complete information, USPS and mailers are unable to diagnose delivery problems so that corrective action can be implemented. In addition, stakeholders cannot understand how well USPS is fulfilling its basic mission, nor can they understand delivery performance results and trends. Deficiencies in measurement and reporting also impair oversight and accountability by PRC and Congress. USPS has not established a complete set of quantitative measures for delivery performance, largely because its delivery performance measurement covers less than one-fifth of its total mail volume—that is, only Express Mail and parts of First-Class Mail, Priority Mail, Package Services, and International Mail. USPS does not measure delivery performance for the remaining volume, which includes Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services.
In addition, the External First-Class Measurement System (EXFC) is limited to single-piece First-Class Mail deposited in collection boxes in selected areas of the country (see fig. 1). Thus, as USPS has reported, EXFC is not a systemwide measurement of all First-Class Mail performance. USPS has stated that it has strong business and operational reasons for using this EXFC methodology and that the areas selected for testing ensure coverage of its highest-volume areas. These reasons include EXFC covering areas from which most First-Class Mail originates and destinates, the ability of EXFC to provide results for specific geographic areas, and practical advantages for collecting data from fewer areas of the nation. Similarly, delivery performance data for Priority Mail are limited because they cover only Priority Mail volume entered at post offices and other retail facilities, and for which mailers purchase Delivery Confirmation Service. Such mail constitutes only 4 percent of all Priority Mail volume. According to USPS officials, USPS expects the volume of this Priority Mail to increase, which would increase the scope of delivery performance measurement. They said that this measure, which replaced the former Priority End-to-End (PETE) measurement system at the beginning of fiscal year 2006, covers all types of Priority Mail, including letters, flat-sized mail, and parcels. However, USPS officials also told us that USPS cannot currently measure the delivery performance for bulk quantities of Priority Mail with Delivery Confirmation, such as business mailings of merchandise, because USPS does not have accurate data on when the mail entered its system. On the positive side, USPS has implemented delivery performance measurement for Parcel Select and some types of International Mail, both of which operate in a highly competitive marketplace. It has used this measurement to establish targets and identify opportunities to improve service.
Although these products are a small fraction of mail volume, USPS has developed delivery performance measures to address customer needs for timely delivery. Highlights for measurement of major types of mail are listed in table 3. As a result of the measurement gaps listed above, measurement is not sufficiently complete to understand how well USPS is achieving the following:

performing its statutory mission of providing prompt and reliable service to patrons in all areas, including prompt delivery of all mail;

delivering mail with different delivery standards, which helps fulfill the requirement that USPS provide mail service to meet the needs of different categories of mail and mail users;

providing expeditious handling of important letter mail, such as bills and statements sent via First-Class Mail;

fulfilling its statutory requirement to provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining; and

identifying delivery problems, understanding the causes, and improving performance.

The lack of any representative delivery performance data for most mail volume increases the financial risk to USPS, which faces increasing competition. If mailers are not satisfied with USPS’s delivery service, they could take their business elsewhere. For example, Standard Mail and bulk First-Class Mail are the largest segments not measured, collectively accounting for close to three-quarters of mail volume and half of mail revenues. Standard Mail is USPS’s key growth product, but it must compete against multiple advertising media in a dynamic and highly competitive marketplace. Bulk First-Class Mail covers a significant share of USPS’s overhead costs—including maintaining the retail and delivery networks—but is vulnerable to electronic communications and payment alternatives.
In addition, USPS does not have representative delivery performance measures for Periodicals, which help USPS fulfill its statutory mandate to provide postal services to “bind the nation together” through business, educational, and literary correspondence; and for Package Services, such as Parcel Post, which provides the public with a low-cost option for sending packages. Incomplete information also impedes USPS’s potential for holding its managers accountable for delivery performance of all types of mail and for balancing increasing financial pressures with the need to maintain quality delivery service. Because delivery performance is measured for only some types of mail, and individual performance incentives are linked to the results, some mailers are concerned that, in practice, this may skew delivery priorities and performance so that timely delivery is more important for the mail whose performance is measured than for mail whose performance is not measured. For example, as we have reported, soon after USPS implemented its EXFC measurement system for First-Class Mail deposited into collection boxes, USPS increased its emphasis on timely First-Class Mail service. USPS managers at the local post office level were instructed to concentrate on particular activities that could improve EXFC scores, and more emphasis was placed on picking up mail from collection boxes on schedule. Conversely, measurement gaps may impede effective collaborative efforts with mailers to quickly identify and resolve delivery problems, because both USPS officials and mailers have limited information for diagnostic purposes. In addition, measurement gaps impede the ability of external stakeholders, including Congress and PRC, to monitor accountability and exercise oversight. Measurement gaps cause PRC to consider proposed postal rates without adequate information on the actual value of the service provided for each class of mail, which PRC by law must consider when recommending postal rates.
In addition, PRC is hindered in considering USPS’s proposals for changes in the nature of postal services that are nationwide or substantially nationwide in scope, including the ongoing proceeding related to USPS’s network realignment. USPS’s limited performance measurement also affects USPS’s reporting of its delivery performance and does not provide adequate transparency so that customers can understand performance results and trends. Although USPS recently made additional delivery performance information available on its Web site, it still does not communicate its delivery performance for all of its major types of mail, particularly those covered by its statutory monopoly to deliver letter mail. The main gap in USPS’s reporting of delivery performance results, as shown in table 4, continues to be for mail entered in bulk quantities, including Standard Mail and bulk First-Class Mail, which collectively constitute most of USPS’s mail volume and revenues. USPS also does not report delivery performance results for Periodicals and most Package Services. As previously discussed, USPS generally does not collect information on delivery performance results for these types of mail. USPS’s reporting of delivery performance information has not adequately met information needs for congressional oversight purposes. Notably, USPS’s practices for reporting delivery performance information in its annual Comprehensive Statement on Postal Operations fall short of the longstanding statutory requirement for “data on the speed and reliability of service provided for the various classes of mail and types of mail service.” This requirement was enacted due to “the need for effective oversight of postal operations to ensure that the postal services provided the public shall continue at an effective level and at reasonable rates.” Specifically, USPS has not included data on the speed and reliability of any entire class of mail in its annual Comprehensive Statement on Postal Operations. 
Instead, USPS has presented only national EXFC data, even though it collected data on timely delivery performance for all Express Mail, as well as some Priority Mail. The 2005 Comprehensive Statement on Postal Operations stated “while Express Mail and Priority Mail performance is tracked and has improved during the past 5 years, because these products are competitive, the data was considered proprietary and not published.” However, USPS reached an agreement with the PRC’s Office of Consumer Advocate last year to end this restriction and recently began reporting some delivery performance data on a newly created page on its Web site for some Express Mail, Priority Mail, First-Class Mail, and Package Services. Moreover, USPS’s reporting practices under the Government Performance and Results Act (GPRA) of 1993 have provided less and less performance information for oversight purposes. USPS’s latest GPRA report, which was included in its 2005 Comprehensive Statement on Postal Operations, provided delivery performance targets (also referred to as performance goals) and results only for First-Class Mail measured by EXFC at the national level, with little accompanying explanation. For example, USPS reported that 87 percent of 3-day EXFC mail was delivered on time in fiscal year 2005, which did not meet its GPRA target of 90 percent, but USPS did not explain, as required by GPRA, why this specific target was not met. USPS also did not explain whether it considers the 90-percent goal—which remains unchanged for fiscal year 2006—impractical or infeasible, or, alternatively, what plans USPS has for achieving this goal. USPS’s reporting of delivery performance information on its Web site has recently improved but is still incomplete because it does not include performance results for all major types of mail.
In April 2006, USPS posted delivery performance information on a newly created page of its Web site, including selected results for the timely delivery of some Express Mail, Priority Mail, First-Class Mail, and Package Services. This information is oriented to members of the general public who make decisions on how to mail parcels and other items that can be sent using different types of mail. To facilitate such use, the information is linked to USPS’s Postage Rate Calculator and is accompanied by brief summaries of the applicable delivery standards for each type of mail. The new information addresses USPS’s written agreement with PRC’s Office of the Consumer Advocate in the 2005 rate case, which was implemented after further discussions between the two parties. USPS’s recent disclosures are a good step toward providing easily accessible information on delivery performance results on its Web site for key types of mail used by the public. The information on delivery performance results, however, did not cover major types of mail that are not measured—Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services. Further, the information provided to the public was limited. First, performance results covered only the most recent quarter, although results for some types of mail have varied by 7 percentage points or more from one quarter to another within the same fiscal year. Second, only partial information was provided for Priority Mail and Package Services. For example, the results for Priority Mail covered only 4 percent of total Priority Mail volume. This limited scope of measurement was not disclosed on USPS’s Web site. Without more complete reporting of delivery performance information, Congress and the American public do not have adequate information to determine how well USPS is accomplishing its mission of providing prompt and reliable delivery services. 
For the future, a possible model to enhance the completeness and usefulness of USPS’s reporting of delivery performance information would be to provide some information similar to the financial information that USPS already provides on its Web site. In the financial area, USPS has instituted a dedicated USPS Web page that has links to its financial reports, related reports and data, and timely disclosure of important developments. USPS also improved the quarterly financial reports that provide explanations for results and trends, as well as its financial outlook. USPS has made slow and inadequate progress in modernizing its delivery standards and in implementing delivery performance measurement for all major types of mail. USPS’s limited progress has left major gaps in each of these areas, despite numerous recommendations for improvements that have been made in these areas over the years, including those by USPS-mailer task forces and working groups, as well as some USPS initiatives to develop delivery performance measurement. Without management commitment and effective collaboration with mailers, it will be difficult for USPS to overcome technical challenges and achieve progress and results that are in the interest of both USPS and its customers in today’s competitive marketplace. Some of USPS’s and the mailers’ collaboration efforts over the years have resulted in successes, but key recommendations from these efforts have yet to be realized. A broad cross section of mailer groups and mailers who met with us shared their concerns about delivery standards and related information; delivery performance measurement and reporting; and implications of delivery performance information and gaps in this area. They expressed frustration with the slow pace of USPS’s progress in improving delivery performance information.
As one mailers’ association recently wrote, “We do not expect the USPS to move tomorrow to the ultimate service performance measurement system, but the total lethargy to take any step forward is unacceptable.” The association also wrote that “the Postal Service’s lack of clockwork-like predictability is the number one reason for repeated industry calls for standards and measurements.” Many recommendations for improving performance information were made by committees that comprised USPS and mailers, as noted in table 5 below. Some notable examples include the 1992 Competitive Services Task Force, the 1997 Blue Ribbon Committee, and the 1999 follow-up effort by a USPS-mailer working group. We asked USPS what actions, if any, it had taken on the 1999 recommendations, but we did not receive a response. Multiple impediments have contributed to USPS’s slow progress toward implementing delivery performance measurement for all major types of mail. The most important impediment is the lack of management commitment and effective collaboration with the mailing industry to follow up on recommendations for improvements and to resolve issues between USPS and mailers. Additional impediments include technological limitations, limited mailer participation in providing information needed to facilitate performance measurement, data quality deficiencies, and costs. USPS has not provided management commitment and effectively collaborated with mailers to develop delivery performance measures for all major types of mail. To achieve effective collaboration, it is necessary to build consensus among diverse mailers with different information needs, as well as between mailers and USPS. Such a challenge requires leadership and an effective process for follow-up, particularly given the complexity of measurement issues and the time frame that likely will be required to overcome longstanding issues.
Based on our discussions with mailers and postal officials, some of the commitment and collaboration challenges have included the following: USPS has lacked commitment to implementing delivery performance measurement and reporting for all major types of mail; in particular, as some mailers told us, USPS has tended to resist greater transparency, oversight, and accountability. A USPS senior vice president told us that USPS had no plans for implementing additional measures of delivery performance. A second USPS senior vice president explained that although some pieces of mail may be tracked as automated equipment reads barcodes on the mail, enabling more information for management and diagnostic purposes, these pieces are unrepresentative, and USPS has no plans for using mail tracking data to develop representative measures of delivery performance. As for major types of mail that are not measured, USPS has publicly reported that it has no system in place for measuring service performance for Standard Mail on a systemwide basis and currently has no plans for the development of such a system. Similarly, USPS officials told us that it has no plans to develop representative measures of delivery performance for bulk First-Class Mail, which, after Standard Mail, is the second-largest volume of mail that is not measured. Further, USPS stated in its Strategic Transformation Plan that it would be prepared to extend performance measurement and reporting to additional mail classes as it achieves high levels of delivery service performance. A USPS vice president told us that USPS agreed in 2005 to begin reporting delivery performance results on its Web site for Express Mail and Priority Mail because USPS had already improved delivery performance for these types of mail to high levels, and therefore the results could help USPS promote these types of mail.
This statement contrasts with a general performance principle that a major use, if not the major use, of regularly collected outcome information should be by program managers themselves to improve the effectiveness of their programs. As we have reported, the benefit of collecting performance information is only fully realized when this information is actually used by managers to make decisions oriented toward improving results. Although many groups have issued recommendations to USPS, follow-through on key recommendations did not occur. USPS often did not officially respond to the recommendations at the time they were made and did not implement the recommendations, so it was not clear whether USPS agreed with them or intended to implement them. Moreover, once a group completed its report with recommendations to USPS, it disbanded, which limited the continuity that otherwise could have been helpful for follow-up. Effective collaboration has been impeded by USPS’s resistance to sharing some diagnostic data it collected with mailers. In general, USPS has maintained that delivery performance data below the national level are proprietary, such as data on performance related to any particular mail processing facility or transportation segment. Therefore, according to USPS, it should not be required to publicly disclose these data in PRC proceedings in response to requests by any interested party. However, voluntarily sharing diagnostic delivery performance information with mailers experiencing delivery problems could be useful for both USPS and mailers to collaboratively develop an understanding of whether the problems are limited to particular mailings or are systemic—resulting from specific USPS operational problems. Such an understanding can help in identifying the cause of delivery problems and in implementing corrective action.
Although USPS representatives may communicate with mailers about these problems, the mailers told us they often lack sufficient timely and actionable data on delivery problems. They have called for USPS to share more aggregate delivery performance information. The absence of management commitment and effective collaboration matters for the future because give-and-take by both USPS and mailers will be required to achieve consensus on designing measurement systems that meet different information needs, finding ways to cover the associated USPS costs, increasing mailer participation in providing information needed to facilitate performance measurement, and overcoming remaining impediments to implementing valid measurement systems. In this regard, we are encouraged that USPS has engaged in collaborative efforts to improve performance measurement for Parcel Select, starting with the Deputy Postmaster General reaching out to the Parcel Shippers Association (PSA), which represents major Parcel Select mailers, and offering to engage in collaborative efforts. The Deputy Postmaster General assigned responsibility to a single manager for follow-up. USPS followed through by reaching consensus on standards, performance measurement, and the sharing of aggregate data, which required actions by both USPS and mailers to successfully implement. According to PSA officials, the standards, measures, and performance incentives have led to a marked improvement in delivery performance for Parcel Select; and, as a result, USPS has been able to maintain its viability within the competitive package services market. The USPS official with responsibility in this area made similar comments. In addition, USPS recently proposed requiring mailers to barcode some Parcel Select items; if this increases barcoding, it will facilitate delivery performance measurement.
USPS’s Parcel Select provides a successful model for updating the delivery standards for other types of mail, implementing delivery performance measurement, and holding USPS accountable for results. Similarly, USPS worked with other stakeholders to implement delivery performance measurement for Global Express Mail, which is managed by an international organization called the Express Mail Service (EMS) Cooperative. Timely delivery of EMS items, including Global Express Mail, has reportedly improved since delivery standards and measurement were implemented. Several other impediments have limited the development of delivery performance measures for all major types of mail. Two key impediments involve limitations in technology, which limited USPS’s ability to track mail from entry to delivery; and limited mailer participation in providing information needed to facilitate performance measurements, which limited the representativeness of the performance data collected. In addition, data quality deficiencies and cost concerns have impeded progress. Technological limitations. USPS has not fully implemented technology that, when completed, will enable it to track barcoded mail through its mail processing and transportation networks and that could play a part in measuring performance. Although some implementation, such as upgrading barcodes for individual mail pieces and mail containers, is under way, full implementation will take years. According to the Deputy Postmaster General, USPS expects to make substantial progress in resolving these technological limitations over the next 5 years. For example, near the end of this decade, USPS is planning to install new automated equipment to sort flat-sized mail, such as large envelopes and catalogs, into the order it is delivered, which promises to greatly expand the automatic scanning of barcodes on mail pieces.
More generally, USPS officials said that USPS is working toward tracking mailings from acceptance (which they said will depend on mailers providing accurate data) through USPS’s mail processing and transportation networks. Such information is a step toward additional delivery performance measurement. In the interim, however, major gaps remain in USPS’s ability to track most types of mail. Limited mailer participation. Mailer participation is low in applying unique barcodes to mail pieces for tracking purposes, which means that the tracking data cannot be considered representative of overall performance. Using USPS’s Confirm Service, mailers can apply unique barcodes to Standard Mail, First-Class Mail, and Periodicals, when the mail is letter or flat-sized and can be sorted on USPS automation equipment. Although these types of mail constitute most of the total mail volume, less than 2 percent of total mail volume is tracked by the Confirm program. Participation in Confirm is limited, in part because its use is voluntary, mailers must pay a fee to participate, and mailers also incur additional expenses related to their participation, such as for mail preparation. Although USPS officials expect mailer participation to increase as improved technology is implemented, they expect participation to continue to be unrepresentative, with some mailers more likely to participate than others. They explained that Confirm will continue to be of greatest interest to large mailers with well-developed capabilities to use tracking data. These mailers include large companies that track bills and remittance mail and large advertisers that track mailed catalogs in order to efficiently schedule staff and inventory. Another factor in low participation is the mailers’ continuing use of non-USPS delivery performance measurements that they have established themselves or paid third parties to conduct, such as “seeding” their mailings with mail sent to persons who report when it is received.
As long as a nonrandom group of mailers participates in Confirm—which is likely to be the case for the foreseeable future—the aggregate results will not be representative as a measure of overall systemwide performance. Thus, the main options for obtaining representative results for any given type of mail (such as bulk First-Class Mail) would appear to be (1) obtaining sufficient participation by all mailers who send that type of mail or (2) obtaining information on mail that is sent by a representative sample of mailers. For either option, USPS, mailer groups, and mailers would need to collaborate to achieve the level of mailer participation necessary to generate representative performance data that could be useful to all parties. Data quality. According to USPS, data quality deficiencies have been another problem in measuring delivery performance, because USPS has no way to determine when it receives bulk mail, such as Standard Mail and Periodicals, which is commonly referred to as obtaining a valid “start the clock” time. At present, USPS relies on mailer-provided information submitted with each mailing, which USPS officials told us does not always include accurate information on when and where the mail was submitted. Based on their experience, USPS officials do not consider mailer-provided information to be sufficiently accurate for measuring delivery performance. The issue of inaccurate data has persisted for years despite repeated efforts by working groups composed of USPS and mailer representatives. In this regard, USPS officials told us that resolving this issue would likely entail additional costs for mailers, which they said mailers have not been willing to pay; however, some mailers disagree with this view. On the positive side, the USPS Senior Vice President for Intelligent Mail and Address Quality told us that USPS has initiatives under way that should help ameliorate data quality deficiencies.
Costs. Senior USPS officials told us that currently, it would be too costly for USPS to create new representative performance measures for any major type of mail. They said that given current technology, USPS would incur substantial costs to implement delivery performance measurement for all major types of mail if USPS were to use barcodes to track every mail piece from when it enters the postal system to when it is delivered. A senior USPS official told us that delivery performance measurement for all mail—which would have involved tracking more than 210 billion pieces of mail in fiscal year 2005—would cost hundreds of millions of dollars and expressed doubt that mailers would want to pay those additional costs even in return for performance data. In this regard, sampling approaches could be used to obtain representative data on delivery performance that would likely be much less costly than seeking to measure delivery performance for every piece of mail. A related cost issue is how USPS would recover the associated measurement costs from mailers and the impact of this decision on mailer participation that would be needed for USPS to measure delivery performance. As the Confirm program illustrates, a fee-based program creates a disincentive for mailers to participate. In contrast, USPS chose to build its tracking costs into the rate base for Parcel Select, so that the costs would be shared by all Parcel Select mailers. USPS officials told us they had rejected this approach for other types of mail for several reasons, including the uncertain benefits to USPS and mailers’ preference for lower rates, particularly for mailers who would not wish to pay the costs associated with collecting delivery performance data. However, some major mailer groups disagree with USPS’s view that mailers’ unwillingness to cover costs is a key impediment to implementing representative measures of delivery performance for all major types of mail.
The Mailers Council, a coalition of over 50 major mailing associations, corporations, and nonprofit organizations, told us that its members would be willing to pay additional USPS costs, within reason, for delivery performance measurement, stating that such costs would be small compared to total postal costs. Until USPS commits to developing additional representative measures of delivery performance for all major types of mail, considers various approaches for measuring that performance, discusses their usefulness and feasibility with mailers, and estimates the associated costs, it will be difficult to get beyond USPS's assertion that measurement is cost-prohibitive and mailers' assertions that the costs could be relatively low and that they would be willing to bear them. Although USPS plans to improve its service performance, it has no current plans to implement additional representative measures of delivery performance. USPS states in its latest Strategic Transformation Plan that it plans to improve the quality of postal services by continuing to focus on the end-to-end service performance of all mail. Further, it states that "customers expect timely, reliable mail service, and the Postal Service has delivered. Under the 2002 Transformation Plan, the Postal Service successfully improved service performance across all product lines." We acknowledge and agree with USPS's emphasis on improved service performance. However, we do not know whether service has improved across all product lines, nor does USPS, because as we noted earlier, USPS does not collect or provide the representative delivery performance information that would be needed to support this statement. USPS has information from various operational data systems, but this information does not amount to delivery performance measurement.
Gaps in delivery performance measurement information are hindering USPS and mailers in identifying opportunities to improve service across all product lines, and in effectively addressing those opportunities, because the gaps make it difficult to determine whether problems are specific to a particular mailer or systemic to USPS's mail processing and transportation networks. Without complete delivery performance information that is regularly reported, stakeholders must rely on the publicly available information that USPS chooses to provide, which often highlights only positive results. For example, in discussing its strategy for providing timely, reliable end-to-end delivery service, the Strategic Transformation Plan states that "customer satisfaction scores have never been higher." Although customer satisfaction information is valuable and useful to USPS and other organizations that provide products and services, it does not measure delivery performance. USPS's currently available delivery performance information does not provide sufficient context to determine (1) actual delivery performance results for all of its product lines, (2) how performance is changing over time through the assessment of trend information, and (3) whether USPS's delivery performance is competitive. Timeliness is a critical factor in today's competitive business environment, where many companies operate with just-in-time inventories and rely on timely delivery to meet their needs. It is likely to become even more important in the future. Thus, reliable delivery performance information reported in a timely manner is critical for high-performing organizations to be successful in this environment.
USPS’s Strategic Transformation Plan discusses strategies for providing timely, reliable mail delivery, which include plans to improve the quantity and accuracy of service performance information collected through passive scanning and improved start-the-clock information, provide customers with information about their own mailings, and create better diagnostic data so that bottlenecks can be eliminated throughout the system. These are all positive steps needed to improve delivery performance information. However, the Plan falls short of committing to developing end-to-end delivery performance information that could be used to measure how well USPS is achieving its strategy of improving service performance across all product lines. Further, the Plan does not discuss what delivery performance information USPS plans to report publicly. Pending legislation does address what delivery performance information Congress would like to see USPS report in the future. However, USPS could demonstrate that it wants to provide leadership in this area by not waiting for the legislation to be enacted. Instead, USPS could clearly commit to developing representative end-to-end delivery performance measures for all of its product lines. USPS could also take the lead in collaborating with mailers to implement such performance measures. As we previously stated, effective collaboration with mailers is needed to resolve the impediments that hinder progress in this area, such as data quality issues involving how to improve the accuracy of start-the-clock information. Concerns about cost could be addressed by exploring options such as sampling in collaboration with the mailers to determine how best to measure delivery performance at much less cost than attempting to track every mail piece. Such collaboration would also allow the parties to determine their information needs, explore cost trade-offs associated with various options, and resolve associated data quality issues. 
In its letter to us, PostCom noted that delivery performance measurement could be implemented in many ways that would not be costly. PostCom said that measurement costs could be affected by multiple factors, such as whether all mail pieces or a sample are tracked; whether tracking is to the point of delivery or to the last automated scan plus a "predicted" time for delivery; whether data are collected automatically by equipment in a passive scan or by other methods requiring USPS employees to scan mail; and whether USPS technology developments will be used exclusively to measure performance or primarily for processing the mail. We recognize that it will take time to resolve impediments to implementing additional delivery performance measures. However, USPS's leadership, commitment, and effective collaboration with mailers are critical elements in implementing a complete set of delivery performance measures that will enable USPS and its customers to understand the quality of delivery services, identify opportunities for improvement, and track progress in achieving timely delivery. USPS delivery standards are not as useful and transparent as they should be. Standards for key types of mail—including Standard Mail, USPS's main growth product—are largely static and do not fully reflect current operations. Thus, they cannot be used to set realistic expectations for mail delivery, to establish benchmarks for measuring performance, or to hold individuals accountable through pay-for-performance incentives tied to measurable results. USPS's delivery performance measurement and reporting are not complete, because they do not cover key types of mail—including Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services. Further, despite recent disclosures on its Web site for some types of mail, USPS's reporting remains limited and has fallen short of statutory requirements to include specified delivery performance information.
Because of gaps in delivery performance measurement and reporting, stakeholders, including the Congress, cannot understand how well USPS is fulfilling its basic mission, nor can they understand delivery performance results and trends. As a result, USPS and mailers are hindered in identifying and diagnosing delivery problems so that corrective action can be implemented. This situation increases the financial risk to USPS, which faces increasing competition. If mailers are not satisfied with USPS's delivery service, they could take their business elsewhere. Prospects for progress continue to be uncertain, in part because USPS has not committed itself to modernizing its delivery standards or developing representative performance measures for all major types of mail. USPS management commitment and more effective collaboration with mailers will be critical for resolving impediments to delivery performance measurement and reporting. Give-and-take by both parties will be required to achieve consensus on designing measurement systems that meet different information needs, increasing mailer participation in providing information needed to facilitate performance measurement, addressing data deficiencies, finding ways to cover the associated costs, and overcoming impediments. To facilitate greater progress in developing complete delivery performance information, we recommend that the Postmaster General take the following four actions:

1. modernize delivery standards for all major types of mail so that they reflect USPS operations and can be used as benchmarks for understanding and measuring delivery performance;

2. provide a clear commitment in USPS's Comprehensive Statement on Postal Operations to develop a complete set of delivery performance measures for each major type of mail that is representative of overall delivery performance;

3. implement representative delivery performance measures for all major types of mail by providing more effective collaboration with mailers and others to ensure effective working relationships, follow-through, accountability, and results; and

4. improve the transparency of delivery performance standards, measures, and results by publicly disclosing more information, including in its Comprehensive Statement on Postal Operations and other annual performance reports to Congress, as well as providing easily accessible information on its Web site.

USPS provided comments on a draft of this report in a letter from the Postmaster General dated July 14, 2006. These comments are summarized below and included as appendix III. In addition, the Postmaster General provided oral comments in a meeting on June 26, 2006, with suggestions for further clarifying information, which were incorporated where appropriate. USPS's letter recognized that its delivery performance measurement and reporting are not complete and provided detailed information about its ongoing and planned efforts to ultimately measure service performance and provide transparency for all classes of mail. USPS stated that it intends to lead the efforts required to reach this goal by working collaboratively with others in the mailing industry. USPS's letter further stated that ultimately, "the core issue is service—and according to all indicators, we are succeeding in our goal of continuous service improvement. We are not satisfied with maintaining the status quo." USPS stated that although it recognizes the desire for aggregate service performance results for all mail categories, it believes that it serves mailers best by focusing first on providing service measurement and diagnostics to individual customers and then looking to provide aggregate results.
Regarding the draft report’s findings related to service standards, USPS disagreed that some of its delivery standards are outdated and stated that its service standards are modern and up-to-date. USPS did not directly comment on three of our four recommendations. On our fourth recommendation concerning improving the transparency of delivery performance standards, measures, and results, USPS commented that its service standards should be more visible and stated that it is exploring making information related to its service standards available through additional channels, including its Web site. We are encouraged by USPS’s commitment to ultimately measure service performance and provide transparency for all classes of mail and its intention to take the lead in working with mailers to achieve this goal. Further, we recognize in our report USPS’s ongoing efforts to implement technology that will track mail throughout USPS’s mail processing system, which is a step toward improved delivery performance measurement. We also agree, as we noted in our report, that mailer participation is necessary to generate representative delivery performance measures for all mail categories. USPS’s letter details many ongoing and planned efforts necessary to improve performance measurement, as well as specific actions that USPS calls on mailers to take to enable its vision of measurement. We agree with USPS’s emphasis on improving service, but we continue to have questions about whether USPS’s efforts will result in representative delivery performance measures for all major types of mail. For most major types of mail, USPS’s vision of service performance measurement is generally limited to tracking mail through its mail processing and transportation networks, which is not the same as measuring end-to-end delivery performance against USPS delivery standards. 
Considering USPS’s lack of commitment to implementing a complete set of delivery performance measures, as well as the lack of timeframes in USPS’s letter, we also have questions about how long it will take to achieve this goal. We recognize that it will take time to implement many of the ongoing and planned initiatives described in USPS’s letter. Thus, USPS’s sustained leadership is critical to ensure that effective collaboration with mailers takes place so that USPS implements and reports on representative delivery performance measures for all major types of mail. We also believe that USPS should establish specific timeframes so that timely progress can be made in this area. USPS’s letter states that it will first provide individual mailers with delivery information before working to provide aggregate delivery performance information, stating that aggregate information on average performance may be irrelevant to mailers. We do not believe that these are mutually exclusive goals that have to be addressed sequentially, because both aggregate and individual performance information have benefits that would meet varying needs of different postal stakeholders. We recognize and agree that mailers want to have performance information related to their own mailings to determine the status of their mail as it moves through USPS’s system. However, appropriate aggregate information is needed to put mailer-specific information into context so that USPS and mailers can understand whether any delivery problems that occur are specific to particular mailers or reflect systemic issues within USPS’s processing and transportation networks. Appropriate aggregate information may need to be more specific than the average performance for a general type of mail, so that comparisons can take geographic and other variations in performance into account and thereby provide useful diagnostic information to USPS and mailers. 
USPS has recognized this principle in its EXFC measure of First-Class Mail deposited into collection boxes, which provides aggregate data that can be broken down by geographic area, delivery standard (e.g., results for 1-day, 2-day, and 3-day mail), and other subgroups of this mail. Moreover, USPS’s diagnostic data is not representative and does not amount to delivery performance measurement. USPS’s letter does not fully recognize the critical importance of aggregate delivery performance measurement for accountability purposes, by parties both inside and outside USPS. As USPS’s letter demonstrates, where USPS has delivery performance measures, it can report on how well it is achieving one of its primary goals to improve delivery services. However, USPS is not in a position to make such assessments for more than four-fifths of its mail volume, because it does not measure and report its delivery performance for most types of mail. USPS’s letter also states that “we share the mutual goal of complete network transparency to provide mailers with a comprehensive view of the service they receive.” Our view of transparency is broader than providing mailers with data on their own mail. As a federal government entity with a monopoly on some delivery services, USPS is accountable to the American public, Congress, PRC, USPS’s Board of Governors, and postal customers for the delivery services it provides. However, as noted earlier, stakeholders cannot understand how well USPS is fulfilling its basic mission due to gaps in delivery performance measurement and reporting, nor can they understand delivery performance results and trends. USPS’s letter does not address what actions USPS plans to take to improve the transparency of publicly available delivery performance information. Without sufficient transparency, oversight and accountability are limited. We disagree with USPS’s comments that its service standards are modern and up-to-date. 
Consistent with the input we received from numerous mailers, we believe that these standards do not work for mailers or for USPS. As we noted in our report, some of USPS's delivery standards, including those for Standard Mail, some Periodicals, and most Package Services, do not reflect changes in how mail is prepared and delivered. These standards are unsuitable as benchmarks for setting realistic expectations for timely mail delivery, for measuring delivery performance, or for improving service, oversight, and accountability. Specific comments in the USPS letter were organized into the following six sections: (1) "Focus on Service," (2) "Service Performance Results," (3) "Some Areas of Concern," (4) "Modern Service Standards," (5) "Measurement Systems and Diagnostic Tools," and (6) "Customer Collaboration and Reporting." These comments are summarized below with our analysis. Focus on Service: USPS commented that one of its primary goals in its Strategic Transformation Plan 2006-2010 is to improve service. USPS said this goal is supported by strategies that include a "balanced scorecard" that uses service performance metrics to support personal and unit accountability. Goals for these metrics, which include delivery performance measures as well as operational indicators that USPS said are critical to on-time service performance, are incorporated into USPS's pay-for-performance incentives for its managers. We agree with USPS's focus on improving service and holding its managers accountable for results. Our draft report noted that USPS had recognized the importance of the timely delivery of mail and integrated performance targets and results for some types of mail into its performance management system.
However, USPS has not yet achieved its aim of a "balanced scorecard" for delivery performance because its delivery performance measures cover less than one-fifth of mail volume and do not cover Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services mail. This gap impedes USPS's potential for holding its managers accountable for the delivery performance of all types of mail and for balancing increasing financial pressures with the need to maintain quality delivery service. Service Performance Results: USPS stated that its focus on service has resulted in "record performance across all mail categories," adding that its measurement systems for First-Class Mail, Priority Mail, and Express Mail show that USPS had met or exceeded the performance targets it set for them. However, we do not know whether service has improved across all mail categories, nor does USPS, because as we noted earlier, USPS does not collect or provide the representative delivery performance information that would be needed to support this statement. Further, in fiscal year 2005, USPS did not achieve record delivery performance for all categories of mail that it measured, and it did not meet all of the delivery performance targets it had set. For example, the 2005 Annual Performance Report, included within the 2005 Comprehensive Statement on Postal Operations, reported that on-time performance for First-Class Mail with a 3-day delivery standard, as measured by EXFC, was 87 percent in fiscal year 2005, down 2 percentage points from the previous fiscal year and falling short of USPS's goal of 90 percent. On-time delivery scores for Priority Mail also declined over the same period. With respect to reporting on its delivery performance, USPS commented in its letter that it has posted delivery performance results on its Web site, including for some of its competitive products.
As our draft report stated, USPS improved its reporting of delivery performance results by starting to post information on its Web site in April 2006, including selected results for the past quarter for the timely delivery of some Express Mail, Priority Mail, First-Class Mail, and Package Services. We stated that USPS's recent disclosures are a good step toward providing easily accessible information on delivery performance results on its Web site for key types of mail used by the public. However, we also found that this information is incomplete because it does not include delivery performance results for all major types of mail: some major types of mail are not measured at all, and for the mail that is measured, the Web site provides only limited results and does not fully disclose the limited scope of this measurement. We continue to believe that without more complete reporting of delivery performance information, Congress and the American public do not have adequate information to determine how well USPS is accomplishing its mission of providing prompt and reliable delivery services. Some Areas of Concern: USPS stated that our draft report did not fully consider some important issues related to performance measurement. USPS commented that although our draft report did discuss data quality issues, it had not accounted for some relevant factors, including the completeness, accuracy, and validity of mailer information submitted when mail is entered. However, our draft report included a discussion of the major impediments that have contributed to USPS's slow progress toward implementing delivery performance measures for all major types of mail, including impediments relating to the quality of mailer information submitted when mail is accepted into USPS's system, which is needed for "start the clock" delivery information.
Our draft report provided USPS's view that mailers do not provide accurate information on their mailings that would be needed to "start the clock" for delivery performance measurement and noted that this issue has persisted despite repeated efforts by USPS-mailer committees. In discussing measurement issues, USPS further commented that the mailing industry must embrace changes such as improved address quality and increased presort accuracy. We believe that although these outcomes would facilitate USPS handling of mail, they should not be a reason to delay measurement of delivery performance. Other federal entities routinely set performance goals and measure results for important activities that are partly outside their control, and they use the results to work with their partners to improve performance. On another matter, USPS stated that our report's discussion of USPS attempts to measure performance did not account for complexities unique to Standard Mail and Periodicals. USPS also stated that its experience has demonstrated that it is particularly difficult to design a broad and effective measurement system for Standard Mail and Periodicals, explaining that its previous attempts were unsuccessful for reasons including a lack of information on the acceptance of this mail into USPS's system and complexities relating to different types of mail preparation and entry. We disagree that our draft report did not adequately account for these complexities, and we believe USPS can address them to successfully implement delivery performance measures for Standard Mail and Periodicals. As noted above, our draft report discussed issues in obtaining information needed to "start the clock" on delivery performance measurement. We also recognized that Standard Mail and Periodicals have complexities in mail preparation and entry that USPS should incorporate into its delivery performance standards so that they can serve as suitable benchmarks for measurement.
Further, our draft report provided a detailed discussion of attempts to measure performance by task forces and working groups composed of USPS and mailer representatives, who were well versed in the complexities of Standard Mail and Periodicals. These groups repeatedly recommended that USPS measure the delivery performance of Standard Mail and Periodicals, including the 1997 recommendations of the Blue Ribbon Panel and the 1999 recommendations of a follow-up USPS/mailer working group that were made years after USPS's short-lived attempt to measure the delivery performance of Standard Mail and Periodicals. The 1999 recommendations stated that USPS should implement performance measurement for Standard Mail, Periodicals, and other classes of mail in a manner that would provide aggregate performance data with breakdowns according to delivery standards, which for bulk mail such as Standard Mail and Periodicals would reflect how the mail is prepared and the type of postal facility where it enters USPS's system. The working group asked USPS to begin working on implementing these recommendations immediately. As we concluded, gaps in performance measurement mean that stakeholders cannot understand how well USPS is fulfilling its basic mission, nor can they understand results and trends—a situation that also increases the financial risk to USPS, which faces increasing competition. Modern Service Standards: USPS stated that our draft report did not fully acknowledge its long history of establishing and revising delivery standards. We disagree because our report provides a detailed history of delivery standards, noting that USPS has updated its standards for some mail, such as First-Class Mail and Parcel Select.
Our draft report also stated that delivery standards are outdated for several types of mail, including Standard Mail, some Periodicals, and most Package Services, because they have not been updated in many years to reflect significant changes in the way mail is prepared and delivered. In addition, USPS commented that the concept of modernized delivery standards may, for some, denote upgrading service levels, warning that upgrading service would result in increased costs and prices. However, our draft report does not discuss whether service needs to be upgraded; it focuses instead on the need for USPS delivery standards to reflect current USPS operations, including presorting and destination entry. Measurement Systems and Diagnostic Tools: USPS commented that the description of USPS performance measurement systems in our draft report was incomplete and unintentionally misleading. USPS commented that the draft report overlooked "the fact" that EXFC, which measures First-Class Mail deposited into collection boxes, is reflective of delivery performance for all First-Class Mail, including bulk First-Class Mail. USPS stated that bulk First-Class Mail is handled in the same manner as collection box mail. USPS's comment about EXFC is contradicted by years of USPS reporting, including in its annual Comprehensive Statement on Postal Operations and its quarterly press releases, that "EXFC is not a systemwide measure of all First-Class Mail performance." USPS has repeatedly used this statement in response to a recommendation made in a report issued in 2000 by the USPS Office of Inspector General, which also found that EXFC does not consider the delivery performance of bulk First-Class Mail. Customer Collaboration and Reporting: USPS commented that many of its service measurement systems and diagnostic tools were designed jointly or in collaboration with its customers.
Our draft report discusses USPS's many collaborative efforts with mailers, but, as noted previously, our concern is that USPS has not implemented key recommendations that have been made since the early 1990s by numerous USPS/mailer committees. Further, our work found that a lack of sustained management commitment and of effective collaboration with the mailing industry to follow through on recommendations for improvements and to resolve issues is an overall theme that helps explain the slow progress in developing and implementing methods of measuring delivery performance. Thus, while we are encouraged that USPS presented several initiatives to develop the ability to track mail through its mail processing and transportation networks, as outlined in our report and our analysis of USPS's comment letter, we continue to believe that greater progress is needed in implementing representative measures of end-to-end delivery performance. We are sending copies of this report to the Ranking Minority Member of the Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Minority Member of the House Committee on Government Reform; Rep. John M. McHugh; Rep. Danny K. Davis; the Chairman of the USPS Board of Governors; the Postmaster General; the Chairman of the Postal Rate Commission; the USPS Inspector General; and other interested parties. We also will provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at [email protected] or by telephone at (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to assess (1) the delivery standards for the timely delivery of mail that the U.S.
Postal Service (USPS) has established, (2) the delivery performance information on the timely delivery of mail that USPS measures and reports, and (3) the progress USPS has made in improving its delivery performance information. We assessed USPS's delivery standards, measures, and reporting using the concepts of completeness, transparency, and usefulness (see table 6). To inform this assessment, we identified applicable laws related to USPS's mission, ratemaking, and reporting, as well as statutes and practices used by high-performing organizations related to delivery standards, measurement, and reporting, including practices identified through our past work. The basis of our assessment is described in greater detail in table 6. To address the first objective, assessing the delivery standards USPS has established, we obtained information from USPS on its delivery standards for the timely delivery of mail. This information consisted of USPS's narrative description of its standards; documentation of its standards included in the Domestic Mail Manual and related policies included in the Postal Operations Manual; and written responses provided to us by USPS. We also obtained material on delivery standards that USPS provided in Postal Rate Commission (PRC) proceedings and that was posted to the PRC Web site. These proceedings included postal rate cases and "nature of service" proceedings that considered USPS proposals expected to have an effect on the nature of postal services on a nationwide or substantially nationwide basis. We reviewed publicly available material that USPS reported on its delivery standards, which was posted on the USPS Web site, including the section of the USPS Web site devoted to the Mailers' Technical Advisory Committee (MTAC).
Our assessment of USPS’s delivery standards was also informed by the views of mailing organizations, mailers, PRC, and PRC’s Office of the Consumer Advocate (OCA), which is charged with representing the interests of the general public and the views of other postal stakeholders. Some of these views were provided in written material issued by the stakeholders, including material provided directly to us, material provided in PRC proceedings, and articles in the trade press. Other views were provided to us in interviews we conducted with these organizations. To address the second objective, delivery performance information USPS measures and reports, we obtained documentation and related written material on USPS’s delivery performance measurement systems, which included the External First-Class Measurement System (EXFC), the Product Tracking System (PTS), the now-discontinued Priority End-to-End System (PETE), and other measurement systems for international mail. We obtained documentation on the data collection procedures and internal controls for these systems and obtained detailed explanations of these systems in interviews with USPS officials. In addition, we obtained publicly available information on these systems from USPS reports, material that USPS provided PRC in past rate cases, and published articles about these systems. We conducted a limited data reliability assessment of EXFC, PTS, and PETE. Our assessment was informed by obtaining the views of USPS officials, mailing groups, mailers, and other stakeholders, both in writing and in interviews. To address the third objective, assessing the progress USPS has made in improving its delivery performance information, we obtained information from a variety of sources on the progress USPS has made and its opportunities for improving delivery performance information. We obtained information on the history of studies that recommended USPS improve its delivery standards, measurement, and/or reporting. 
These studies included joint USPS-mailer committees, some of which were ad hoc efforts and some of which were sponsored by MTAC. Information on these studies included written reports by the committees, documentation on these groups provided to us by USPS and mailers, and interviews of USPS, mailer committees, and mailers. More generally, we obtained the views of USPS officials, mailing groups, mailers, and other stakeholders on USPS’s progress and remaining opportunities in this area, both in writing and in interviews. We requested comments on a draft of this report from USPS; these are reproduced in appendix III. We conducted our review from August 2005 to July 2006 in accordance with generally accepted government auditing standards. Appendix II: USPS Delivery Standards (explanation of delivery standards and available information). These standards have not been systematically changed since their inception in the 1970s. As an “approximate overview,” the number of days is loosely based on the number of postal zones that mail must travel, which in turn are loosely based on a mileage radius to the destinating Sectional Center Facility (SCF). Delivery is usually 3 days for mail within the same SCF, depending on the size of the Intra-SCF area; all other, non-Intra-SCF destinations are 4 days or greater. While the 3- to 10-day range outlines the official USPS standards, USPS sometimes does have independent “programs,” or “guidelines,” outside of the Service Standards, which attempt to facilitate the delivery of Standard Mail (sometimes directly in concert with mailers). In some cases, these time frames are more ambitious or differ from the official Service Standards. For example, the Postal Operations Manual (POM) specifies that some Standard Mail is to be delivered 2 delivery days after it is entered into the postal system.
This applies to mailer-prepared carrier-route presort mail that mailers dropship to delivery units (including post offices, branches, and stations) where letter carriers pick up their mail for delivery. Delivery units should make every effort to adhere to mailer-requested, in-home delivery dates. Mail should not be delivered earlier than the date the mailer has requested. If delivery units receive Standard Mail with a mailer-requested delivery date later than the USPS-scheduled delivery day, the USPS-scheduled date should be changed to match the last requested in-home delivery date, to comply with the mailer’s request. If delivery units receive Standard Mail with a mailer-requested delivery date that has already passed, the decision regarding delivery or disposition of this mail (including disposal without delivery) must be consistent with the current national policy on this subject. If Standard Mail is mixed with a higher class of mail (e.g., First-Class Mail) in USPS’s mail processing system in such a manner that it loses its identity, it must be considered upgraded and treated as the higher class of mail. Technically, such commingled items do not become the higher mail class. Rather, USPS applies this policy so that such pieces need not be re-isolated and “extracted” from the higher mail class and then re-entered with their “correct” mail class, a process that could slow down delivery and provide worse service than was originally intended (although re-segregating such commingled mail by mail class is always an option, if operationally feasible). There are no prohibitions against USPS management agreements below the national level that accelerate the delivery expectations for any Standard Mail relative to national policy. Delivery standards are 3-digit-to-3-digit ZIP Code based.
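The 3-digit-to-3-digit ZIP Code basis for these standards means, in effect, that each origin-destination pair of ZIP Code prefixes maps to a number of delivery days. As a purely illustrative sketch, with invented function names and sample values (actual service standards are maintained in much larger USPS directories), such a lookup might be modeled as:

```python
# Illustrative sketch of a 3-digit-to-3-digit ZIP Code standard lookup.
# The table entries and day values below are invented for illustration;
# they are not actual USPS service standards.

def zip3(zip_code: str) -> str:
    """Return the 3-digit prefix of a 5-digit ZIP Code."""
    return zip_code[:3]

# Keyed by (origin prefix, destination prefix) -> delivery days.
SAMPLE_STANDARDS = {
    ("100", "100"): 1,  # hypothetical intra-SCF overnight pair
    ("100", "606"): 3,
    ("100", "941"): 3,
}

def delivery_standard(origin: str, destination: str, default: int = 3) -> int:
    """Look up the delivery standard, in days, for a ZIP Code pair."""
    return SAMPLE_STANDARDS.get((zip3(origin), zip3(destination)), default)

print(delivery_standard("10001", "10022"))  # -> 1
print(delivery_standard("10001", "60614"))  # -> 3
```

The default of 3 days and the sample table entries are assumptions for this example only.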
Periodicals mail is a “preferential” product that travels normally by surface to all valid ZIP Codes. The standard range of 1 to 7 days is loosely equivalent to the eight Postal Zones (which are also based on a mileage radius), minus 1, as shown in Table 8. In accordance with policies adopted in 1990 after the conclusion of a PRC proceeding that began in 1989, the 1-day delivery area should normally be adjusted to be the same as the overnight area for First-Class Mail, with exceptions subject to regional and headquarters concurrence. The 2- to 3-day standards can be as fast as First-Class Mail but are not usually intended to be faster. Nearly all of the Service Standard pairs meet this “Mail Class Hierarchy” guideline. The concept for these standards has not changed since the 1980s. Newly activated ZIP Codes (or ZIP Code areas that have been revised due to an Area Mail Processing Plan implementation) are “cloned” to have the same Periodicals delivery standards as the other originating or destinating ZIPs served out of the same processing plant. Package Services: 2 to 9 days to all valid ZIP Codes within the contiguous 48 states. There are no established Package Services delivery standards to Alaska, Hawaii, or offshore destinations (e.g., Guam, Puerto Rico, Virgin Islands). The delivery standards are 3-digit-to-3-digit ZIP Code based. Package Services mail is a product that travels normally by surface to all ZIP Codes. The standards are therefore predicated on the Bulk Mail Center (BMC) network. Normally, the standards would change only if an Area Mail Processing (AMP) Plan resulted in the origin or destination ZIP Code moving to within a new BMC area because the gaining facility was located in a different BMC area than the previous facility. The concept for Package Services service standards has remained constant since the 1970s.
Newly activated ZIP Codes (or revised ZIP Code areas due to an AMP Plan implementation) are “cloned” to have the same Package Services service standards as the other originating or destinating ZIPs served out of the same BMC or Auxiliary Service Facility. Parcel Select comprises Parcel Post items that are mailed in bulk quantities; are entered by mailers at USPS facilities, including Destination Bulk Mail Centers (DBMCs), Destination Sectional Center Facilities (DSCFs), or Destination Delivery Units (DDUs); and meet other rules for mail preparation and entry. The delivery standards are 1 day for DDU entry by 4 p.m.; 2 days for DSCF entry by 3 p.m.; and 2 to 3 days (generally 2 days) for DBMC entry by 3 p.m. The 2-day versus 3-day standard for DBMC entry is based on the Parcel Post standard for the 3-digit ZIP where the DBMC is physically located and the destination 3-digit ZIP of the parcel. These standards were determined as part of the Parcel Select product creation. Originally, all BMC entry was 3-day; the change to mostly 2-day standards was made in 2002. Delivery standards have existed for Priority Mail since its inception, when it essentially replaced Air Mail in the late 1970s. The standards currently range from 1 to 3 days to all valid ZIP Codes. However, Priority Mail is primarily a product that is targeted for delivery within 2 days. (Over 93 percent of Priority ZIP Code pairs currently have either a 1-day or 2-day standard.) These standards are determined on a case-by-case basis, depending on processing times and available transportation. Priority Mail service standards are usually equal to, or faster than, First-Class Mail standards to/from the same domestic ZIP Code pairs. Newly activated ZIP Codes (or revised ZIP Code areas due to an Area Mail Processing Plan implementation) are cloned to have the same Priority Mail service standards as the other originating or destinating ZIPs served out of the same processing plant.
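The Parcel Select entry-point rules above (1 day for DDU entry by 4 p.m., 2 days for DSCF entry by 3 p.m., and generally 2 days for DBMC entry by 3 p.m.) lend themselves to a simple conditional lookup. The sketch below is a hypothetical illustration, not USPS code; in particular, treating an item entered after the critical acceptance time as taking one additional day is an assumption made for this example:

```python
# Illustrative sketch of the Parcel Select delivery standards described
# above. The mapping of entry points to days and critical acceptance
# times follows the text; adding a day for entry after the critical
# time is an assumption for this example.

CRITICAL_HOUR = {"DDU": 16, "DSCF": 15, "DBMC": 15}  # 4 p.m. and 3 p.m.
BASE_DAYS = {"DDU": 1, "DSCF": 2, "DBMC": 2}  # DBMC can also be 3 days

def parcel_select_days(entry_point: str, entry_hour: int) -> int:
    """Return an assumed delivery standard, in days, for a Parcel Select entry."""
    days = BASE_DAYS[entry_point]
    if entry_hour > CRITICAL_HOUR[entry_point]:
        days += 1  # entered after the critical time (assumption)
    return days

print(parcel_select_days("DDU", 15))   # entered by 4 p.m. -> 1 day
print(parcel_select_days("DSCF", 16))  # entered after 3 p.m. -> 3 days
```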
First-Class Mail other than Priority Mail: 1 to 3 days, depending on the 3-digit ZIP Code of acceptance and the destination address. Standards do not vary by shape, size, or weight. The same standard applies to all mail originating or destinating in the same 3-digit ZIP Code area. USPS policies for First-Class Mail Service Standards are as follows: 1-day (Overnight) Delivery Standard: Overnight delivery standards must include all of the intra-SCF area. Other areas may be considered for overnight delivery if significant business/mail volume relationships exist and they are within the reasonable reach of surface transportation. 2-Day Delivery Standard: Two-day delivery standards must include all areas that currently have an overnight standard but will not, as proposed, be in the new overnight area. Two-day delivery standards must also include all SCFs within the home state and nearby states that are within the reasonable reach of surface transportation (as defined by the USPS Office of Transportation and International Services). In addition, 2-day delivery standards may include other 3-digit areas outside of the reach of surface transportation if significant business/mail volume relationships exist and if dependable and timely air transportation is available. 3-Day Delivery Standard: Three-day delivery standards should include all remaining destinations. Service standard changes reflecting the new overnight definition were implemented in 1990 to 1992. In 2000 to 2001, in order to increase the 2-day reach but make it achievable at a consistently appropriate level, USPS expanded the 2-day reach to include non-overnight offices that were as far away as a 12-hour drive from the originating “parent” Processing and Distribution Center (P&DC) to the destinating Area Distribution Center (ADC) via surface transportation.
At the same time, USPS determined that the existing commercial air transportation network had deteriorated and had become too unreliable for maintaining the 2-day service standard for First-Class Mail beyond the reasonable reach of surface transportation. Accordingly, USPS changed the service standards for this mail from 2 days to 3 days. Although this deterioration and resulting unreliability of commercial air service made it infeasible for USPS to continue to apply the 2-day standard to destinations beyond the reasonable reach of surface transportation, the overall number of origin-destination pairs with 2-day standards increased in 2000-01 because of the adoption of the 12-hour drive time definition. Express Mail: overnight and second-day service to designated areas and post offices, supported by a money-back guarantee. Next-day Service provides overnight service to designated 3-digit and 5-digit ZIP Code delivery areas, facilities, or locations, based on the time of acceptance and available service-response air and surface transportation. Second-day Service is offered for areas not on the next-day network, including any 3-digit or 5-digit ZIP Code destination not listed in the Express Mail Next Day Service directory, but may not be available at or between all post offices or at all times of deposit. Second Delivery Day is not a distinct service but applies to mailings to those ZIP Codes where postal delivery is not provided on Sundays or federal holidays, and delivery is guaranteed on the next regular delivery day. This typically applies only to mailings made on Friday to a destination that lacks Sunday/holiday delivery. In that case, the piece is guaranteed for delivery on the next regular delivery day, which is a Monday, or Tuesday if Monday is a federal holiday. Unlike most other types of mail, Express Mail service may involve delivery on Sundays.
At the point of sale, each customer is notified of the specific service standard for the mailed item. This standard is based on information in an electronic and/or hardcopy directory containing detailed information about local and destination ZIP Code acceptance and delivery capabilities. The clerk who accepts the mail annotates the customer receipt to indicate whether the mailed item was accepted for next- or second-day delivery. Further, customers can obtain the guaranteed delivery commitments for some individual pieces of mail through the USPS Web site by entering their originating and destinating 5-digit ZIP Codes. USPS and its overseas delivery partners establish delivery standards in conjunction with international organizations including the Universal Postal Union and the International Post Corporation. Global Express Mail Guaranteed: 2 to 3 days with date-certain shipping to over 200 countries. Global Express Mail: 3 to 5 days to over 190 countries with date-certain shipping to selected countries. Global Priority Mail includes single-piece mail under 4 pounds sent from the United States to over 50 countries. International Priority Air Mail includes mailings of items under 4 pounds, virtually worldwide, sent in bulk quantities at lower rates than Global Priority Mail. Global Air Mail letters: 4 to 7 days, including 5 days to Europe; 4 days to Canada; and 1 to 3 days for transit within the United States. Global Economy Mail letters: 4 to 6 weeks. Global Air Mail parcels: 4 to 10 days to virtually all countries. Global Economy Mail parcels: 4 to 6 weeks. Prepaid business reply postcards and letters to virtually all countries. Delivery times are expressed as the number of delivery days after acceptance of the mail, which generally does not include Sundays or holidays. Additional information is available at http://www.usps.com/global/sendpackages.htm and http://www.usps.com/global/sendmail.htm. In addition to the individual named above, Teresa Anderson, Cynthia Daffron, Tamera L. Dorland, Kathy Gilhooly, Brandon Haller, Kenneth E.
John, Catherine S. Kim, Karen O’Conor, Jacqueline M. Nowicki, and Edda Emmanuelli-Perez made key contributions to this report.
|
U.S. Postal Service (USPS) delivery performance standards and results, which are central to its mission of providing universal postal service, have been a long-standing concern for mailers and Congress. Standards are essential to set realistic expectations for delivery performance and organize activities accordingly. Timely and reliable reporting of results is essential for management, oversight, and accountability purposes. GAO was asked to assess (1) USPS's delivery performance standards for timely mail delivery, (2) delivery performance information that USPS collects and reports on timely mail delivery, and (3) progress made to improve delivery performance information. USPS has delivery standards for its major types of mail, but some have not been updated in a number of years to reflect changes in how mail is prepared and delivered. These outdated standards are unsuitable as benchmarks for setting realistic expectations for timely mail delivery, measuring delivery performance, or improving service, oversight, and accountability. USPS plans corrective action to update some standards. Also, some delivery standards are not easily accessible, which impedes mailers from obtaining information to make informed decisions. USPS does not measure and report its delivery performance for most types of mail. Therefore, transparency with regard to its overall performance in timely mail delivery is limited. Representative measures cover less than one-fifth of mail volume and do not include Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services. Despite recent disclosures on its Web site, USPS's reporting is more limited than the scope of measurement. Without sufficient transparency, it is difficult for USPS and its customers to identify and address delivery problems, and for Congress, the Postal Rate Commission, and others to hold management accountable for results and conduct independent oversight.
Progress to improve delivery performance information has been slow and inadequate despite numerous USPS and mailer efforts. Some impediments to progress include USPS's lack of continued management commitment and follow-through on recommendations made by joint USPS/mailer committees, as well as technology limitations, data quality deficiencies, limited mailer participation in providing needed performance data, and costs. Although USPS has initiatives to improve service and better track mail through its mail processing system, USPS has no current plans to implement and report on additional representative measures of delivery performance. USPS's leadership and effective collaboration with mailers are critical to implementing a complete set of delivery performance measures.
|
In the three years since its creation, DHS has realized some successes among its various acquisition organizations in opening communication through its strategic sourcing and small business programs. Both efforts have involved every principal organization in DHS, along with strong involvement from the CPO, and both have yielded positive results. DHS’ disparate acquisition organizations quickly collaborated on leveraging spending for various goods and services, without losing focus on small businesses. This use of strategic sourcing—formulating purchasing strategies to meet departmentwide requirements for specific commodities, such as office supplies, boats, energy, and weapons—helped DHS leverage its buying power, with savings expected to grow. At the time of our March 2005 review, DHS had reported approximately $14 million in savings across the department. We also found that the small business program, whose reach is felt across DHS, was off to a good start. In fiscal year 2004, DHS reported that 35 percent of its prime contract dollars went to small businesses, exceeding its goal of 23 percent. Representatives have been designated in each DHS procurement office to help ensure that small businesses have opportunities to compete for DHS’ contract dollars. However, some officials responsible for carrying out strategic sourcing initiatives have found it challenging to balance those duties with the demands and responsibilities of their full-time positions within DHS. Officials told us that strategic sourcing meetings and activities sometimes stall because participants must shift attention to their full-time positions. Our prior work on strategic sourcing shows that leading commercial companies often establish full-time commodity managers to more effectively manage commodities.
Commodity managers help define requirements with internal clients, negotiate with potential vendors, and resolve performance or other issues arising after a contract is awarded, and they can help maintain consistency, stability, and a long-term strategic focus. DHS continues to face challenges in creating a unified, accountable acquisition organization due to policies that create ambiguity as to accountability for acquisition decisions, inadequate staffing to conduct department-wide oversight, and heavy reliance on interagency contracting in the Office of Procurement Operations, which is responsible for a large portion of DHS’ contracting activity. Achieving a unified and integrated acquisition system is hampered because an October 2004 policy directive relies on a system of dual accountability between the CPO and the heads of the department’s principal organizations. Although the CPO has been delegated the responsibility to manage, administer, and oversee all acquisition activity across DHS, in practice, performance of these activities is spread throughout the department, reducing accountability for acquisition decisions. This system of dual accountability results in unclear working relationships between the CPO and heads of DHS’ principal organizations. For example, the policy leaves unclear how the CPO and the director of Immigration and Customs Enforcement are to share responsibility for recruiting and selecting key acquisition officials, preparing performance ratings for the top manager of the contracting office, and providing appropriate resources to support CPO initiatives. The policy also leaves unclear what enforcement authority the CPO has to ensure that initiatives are carried out because heads of principal organizations are only required to “consider” the allocation of resources to meet procurement staffing levels in accordance with the CPO’s analysis.
Agreements had not been developed on how the resources to train, develop, and certify acquisition professionals in the principal organizations would be identified or funded. While the October 2004 policy directive emphasizes the need for a unified, integrated acquisition organization, achievement of this goal is further hampered because the directive does not apply to the U.S. Coast Guard and U.S. Secret Service. The Coast Guard is one of the largest organizations within DHS, with obligations accounting for about $2.2 billion in fiscal year 2005, nearly 18 percent of the department’s total. The directive maintains that these two organizations are exempted from the directive by statute. We disagreed with this conclusion, as we are not aware of any explicit statutory exemption that would prevent the application of the DHS acquisition directive to either organization. We raised the question of statutory exemption with the DHS General Counsel, who shared our assessment concerning the explicit statutory exemptions. He viewed the applicability of the management directive as a policy matter. DHS’ goal of achieving a unified, integrated acquisition organization is in part dependent on its ability to provide effective oversight of component activities. We reported in March 2005 that the CPO lacked sufficient staff to ensure compliance with DHS’ acquisition oversight regulations and policies. To a great extent, the various acquisition organizations within the department were still operating in a disparate manner, with oversight of acquisition activities left primarily up to each individual organization. In December 2005, DHS implemented a department-wide management directive that establishes policies and procedures for acquisition oversight. The CPO has issued guidance providing a framework for the oversight program and, according to DHS officials, as of May 2006, five staff were assigned to oversight responsibilities.
We have ongoing work in this area and will be reporting on the department’s progress in the near future. The challenge DHS faces overseeing its various components’ contracting activities is significant. For example, in May 2004 we reported that TSA had not developed an acquisition infrastructure, including organization, policies, people, and information that would facilitate successful management and execution of its acquisition activities. The development of those areas could help ensure that TSA acquires quality goods and services at reasonable prices, and makes informed decisions about acquisition strategy. To support the DHS organizations that lacked their own procurement support, the department created the Office of Procurement Operations. In 2005, we found that, because this office lacked sufficient contracting staff, it had turned extensively to interagency contracting to fulfill its responsibilities. At the time of our review, we found that this office had transferred almost 90 percent of its obligations to other federal agencies through interagency agreements in fiscal year 2004. For example, DHS had transferred $12 million to the Department of the Interior’s National Business Center to obtain contractor operations and maintenance services at the Plum Island Animal Disease Center. Interior charged DHS $62,000 for this assistance. We found that the Office of Procurement Operations lacked adequate internal controls to provide oversight of its interagency contracting activity. For example, it did not track the fees it was paying to other agencies for contracting assistance. Since our report was issued, the office has added staff and somewhat reduced its reliance on interagency contracting. Recently, DHS officials told us that the office has increased its staffing level from 42 to 120 employees, with plans to hire additional staff. 
As reported by DHS, the Office of Procurement Operations’ obligations transferred to other agencies had decreased to 72 percent in fiscal year 2005. To protect its major, complex investments, DHS has put in place a review process that adopts many acquisition best practices—proven methods, processes, techniques, and activities—to help the department reduce risk and increase the chances for successful investment outcomes in terms of cost, schedule, and performance. One best practice is a knowledge-based approach to developing new products and technologies pioneered by successful commercial companies, which emphasizes that program managers need to provide sufficient knowledge about important aspects of their programs at key points in the acquisition process, so senior leaders are able to make well-informed investment decisions before an acquisition moves forward. While DHS’ framework includes key tenets of this approach, in March 2005 we reported that it did not require two critical management reviews. The first would help ensure that resources match customer needs before any funds are invested. The second would help ensure that the design for the product performs as expected prior to moving into production. We also found that some critical information is not addressed in DHS’ investment review policy or the guidance provided to program managers. In other cases, it is made optional. For example, before a program is approved to start, DHS policy requires program managers to identify an acquisition’s key performance requirements and to have technical solutions in place. This information is then used to form cost and schedule estimates for the product’s development to ensure that a match exists between requirements and resources. However, DHS policy does not require that cost and schedule estimates for the acquisition be based on knowledge from preliminary designs.
Further, while DHS policy requires program managers to identify and resolve critical operational issues before proceeding to production, initial reviews—such as the system and subsystem review—are not mandatory. In addition, while the review process adopts other important acquisition management practices, such as requiring program managers to submit acquisition plans and project management plans, a key practice—contractor tracking and oversight—is not fully incorporated. We have cited the need for increased contractor tracking and oversight for several large DHS programs. While many of DHS’ major investments use commercial, off-the-shelf products that do not require the same level of review as a complex, developmental investment would, DHS is investing in a number of major, complex systems, such as TSA’s Secure Flight program and the Coast Guard’s Deepwater program, that incorporate new technology. Our work on these two systems highlights the need for improved oversight of contractors and greater adherence to a best practices approach to management review. Two examples follow. We reported in February 2006 that TSA, in developing and managing its Secure Flight program, had not conducted critical activities in accordance with best practices for large-scale information technology programs. Program officials stated that they used a rapid development method that was intended to enable them to develop the program more quickly. However, as a result of this approach, the development process has been ad hoc, with project activities conducted out of sequence. TSA officials have acknowledged that they have not followed a disciplined life cycle approach in developing Secure Flight, and stated that they are currently rebaselining the program to follow their standard systems development life cycle process, including defining system requirements. TSA officials also told us they are taking steps to strengthen contractor oversight for the Secure Flight program.
For example, the program is using one of TSA’s support contractors to help track contractors’ progress in the areas of cost, schedule, and performance, and the number of TSA staff with oversight responsibilities for Secure Flight contracts has been increased. TSA reports it has identified contract management as a key risk factor associated with the development and implementation of Secure Flight. The Coast Guard’s ability to meet its responsibilities depends on the capability of its deepwater fleet, which consists of aircraft and vessels of various sizes and capabilities. In 2002, the Coast Guard began a major acquisition program to replace or modernize these assets, known as the Deepwater program. Deepwater is currently estimated to cost $24 billion. We have reported that the Coast Guard’s acquisition strategy of relying on a prime contractor (“system integrator”) to identify and deliver the assets needed carries substantial risks. We found that well into the contract’s second year, key components for managing the program and overseeing the system integrator’s performance had not been effectively implemented. As we recently observed, the Coast Guard has made progress in addressing our recommendations, but there are aspects of the Deepwater program that will require continued attention. The program continues to face a degree of underlying risk, in part because of the unique, system-of-systems approach with the contractor acting as overall integrator, and in part because it is so heavily tied to precise year-to-year funding requirements over the next two decades. Further, a project of this magnitude will likely continue to experience other concerns and challenges beyond those that have emerged so far. It will be important for Coast Guard managers to carefully monitor contractor performance and to continue addressing program management concerns as they arise.
In closing, I believe that DHS has taken strides toward putting in place an acquisition organization that contains many promising elements. However, the steps taken so far are not enough to ensure that the department is effectively managing the acquisition of the multitude of goods and services it needs to meet its mission. More needs to be done to fully integrate the department’s acquisition function, to pave the way for the CPO’s responsibilities to be effectively carried out in a modern-day acquisition organization, and to put in place the strong internal controls needed to manage interagency contracting activity and large, complex investments. DHS’ top leaders must continue to address these challenges to ensure that the department is not at risk of continuing to exist with a fragmented acquisition organization that provides stopgap, ad hoc solutions. DHS and its components, while operating in a challenging environment, must have in place sound acquisition plans and processes to make and communicate good business decisions, as well as a capable acquisition workforce to assure that the government receives good value for the money spent. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information regarding this testimony, please contact Michael Sullivan at (202) 512-4841 or [email protected]. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) has some of the most extensive acquisition needs within the U.S. government. In fiscal year 2005, the department reported that it obligated almost $17.5 billion to acquire a wide range of goods and services. DHS's acquisition portfolio is broad and complex, including procurements for sophisticated screening equipment for air passenger security; technologies to secure the nation's borders; trailers to meet the housing needs of Hurricane Katrina victims; and the upgrading of the Coast Guard's offshore fleet of surface and air assets. This testimony summarizes GAO reports and testimonies on various aspects of DHS acquisitions. It addresses (1) areas where DHS has been successful in promoting collaboration among its various organizations; (2) challenges it still faces in integrating the acquisition function across the department; and (3) DHS's implementation of an effective review process for its major, complex investments. The information in this testimony is based on work that was completed in accordance with generally accepted government auditing standards. Since its establishment in March 2003, DHS has been faced with assembling 23 separate federal agencies and organizations with multiple missions and cultures into one department. This mammoth task involved a variety of transformational efforts, one of which is to design and implement the necessary management structure and processes for the acquisition of goods and services. We reported in March 2005 that DHS had opened communication among its acquisition organizations through its strategic sourcing and small business programs. With strategic sourcing, DHS's organizations quickly collaborated to leverage spending for various goods and services--such as office supplies, boats, energy, and weapons--without losing focus on small businesses, thus leveraging its buying power and increasing savings. 
Its small business program, whose reach is felt across DHS, is also off to a good start. Representatives have been designated in each DHS procurement office to ensure small businesses can compete effectively for the agency's contract dollars. We also reported that DHS' progress in creating a unified acquisition organization has been hampered by policy decisions that create ambiguity about who is accountable for acquisition decisions. To a great extent, we found that the various acquisition organizations within DHS were still operating in a disparate manner, with oversight of acquisition activities left primarily up to each individual organization. DHS continues to face challenges in integrating its acquisition organization. Specifically, dual accountability for acquisitions exists between the Chief Procurement Officer (CPO) and the heads of each DHS component; a policy decision has exempted the Coast Guard and Secret Service from the unified acquisition organization; the CPO has insufficient capacity for department-wide acquisition oversight; and staffing shortages have led the Office of Procurement Operations, which handles a large percentage of DHS's contracting activity, to rely extensively on outside agencies for contracting support--often for a fee. We found that this office lacked the internal controls to provide oversight of this interagency contracting activity. This last challenge has begun to be addressed with the hiring of additional contracting staff. Some of DHS's organizations have major, complex acquisition programs that are subject to a multi-tiered investment review process intended to help reduce risk and increase chances for successful outcomes in terms of cost, schedule, and performance. While the process includes many best practices, it does not include two critical management reviews, namely a review to help ensure that resources match customer needs and a review to determine whether a program's design performs as expected. 
Our prior reports on large DHS acquisition programs, such as the Transportation Security Administration's Secure Flight program and the Coast Guard's Deepwater program, highlight the need for improved oversight of contractors and adherence to a rigorous management review process.
Newborn screening for heritable and other conditions begins with a provider collecting a blood specimen from a newborn within a few days of birth. The newborn’s heel is pricked to obtain a few drops of blood, which are placed and dried on a specimen collection card, and then sent to a state lab for testing. (See fig. 1 for an example of a collection card.) State departments of health may use their own lab to test newborn screening specimens or may contract with a private lab, a lab at a university medical school, or another state’s lab. After testing, lab staff notify providers of either normal results or presumptive positive results, which indicate that a newborn may have a heritable condition, subject to follow-up testing to determine if the condition is truly present. Lab staff may report presumptive positive results to providers by, for example, fax or phone call before sending all normal and presumptive positive results. The newborn screening process involves collaboration between providers and other hospital staff, lab staff, and state newborn screening officials. Providers and other hospital staff are responsible for ensuring that newborn screening specimens are collected and sent to the state lab for testing. Lab staff and follow-up staff, such as nurses and social workers, are responsible for entering demographic data associated with the specimen into the state’s laboratory information management system (LIMS), testing the newborn screening specimen, and reporting results to providers. Newborn screening officials at state departments of health support providers and labs with education, data, and resources. 
HHS’s Advisory Committee on Heritable Disorders in Newborns and Children, which was chartered to recommend newborn screening improvements in states and provide technical information and advice about newborn screening to the Secretary of Health and Human Services, established a Recommended Uniform Screening Panel (RUSP), which is a list of conditions for which newborns should be screened. A 2005 report prepared for the advisory committee to make recommendations for the RUSP also identified time-frame goals for individual stages of the newborn screening process, such as from specimen collection to arrival at the lab, for the conditions on the RUSP. Subsequently, in response to a public comment during a committee meeting in September 2013, the advisory committee took additional steps to address newborn screening timeliness concerns:

In 2014, the advisory committee designated 16 of 32 conditions on the RUSP as “time-critical” conditions. These are conditions for which acute symptoms or potentially irreversible damage could develop in the first week of life, and for which early recognition and treatment can reduce the risk of illness and death.

Also in 2014, the advisory committee, in conjunction with APHL, conducted a survey and issued its 2014 Newborn Screening Timeliness Survey Report, which included information on barriers to and strategies for newborn screening timeliness identified by newborn screening officials in 51 states.

In April 2015, the advisory committee sent a letter to the Secretary of Health and Human Services with new time-frame goals. For example, the 2015 letter included time-frame goals for the full newborn screening process (from birth to results reporting) rather than from specimen collection to results reporting. 
Additionally, the 2015 letter added different time-frame goals for time-critical and non-time-critical conditions, and shortened the time-frame goal for a specimen arriving at the lab from 3 days after collection to 24 hours after collection. The advisory committee’s 2015 time-frame goals included recommended time frames for completing the full newborn screening process—that is, from birth to results reporting:

All newborn screening results should be reported for all conditions to a provider as soon as possible, but no later than 7 days after birth.

Presumptive positive results for time-critical conditions should be reported immediately to a provider, but no later than 5 days after birth.

Presumptive positive results for all non-time-critical conditions should be reported to a provider as soon as possible, but no later than 7 days after birth.

The advisory committee’s 2015 time-frame goals also include time frames for the first two newborn screening stages (from birth to specimen collection and from specimen collection to lab arrival) to help states achieve the goals for the full process; the committee did not identify a time-frame goal for the third newborn screening stage (lab arrival to results reporting):

1. Newborn screening specimens should be collected in the appropriate time frame for the newborn’s condition, but no more than 48 hours after birth.

2. Newborn screening specimens should arrive at the lab as soon as possible; ideally within 24 hours of collection.

Finally, the advisory committee encouraged states to benchmark progress by meeting each of these time-frame goals for at least 95 percent of specimens by 2017. (See fig. 2 for more information about the 2015 time-frame goals.) HRSA’s Maternal and Child Health Bureau has responsibility for enhancing, improving, and expanding the ability of states to provide newborn screening. 
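Read as data-quality rules, the advisory committee’s staged 2015 goals amount to simple elapsed-time checks on a specimen record. The sketch below is purely illustrative; the field names, timestamps, and the check_goals helper are hypothetical and are not part of any state’s LIMS:

```python
from datetime import datetime, timedelta

# Advisory committee 2015 time-frame goals (thresholds from the goals above;
# everything else in this sketch is invented for illustration).
COLLECTION_GOAL = timedelta(hours=48)          # birth -> specimen collection
LAB_ARRIVAL_GOAL = timedelta(hours=24)         # collection -> lab arrival
REPORT_GOAL_TIME_CRITICAL = timedelta(days=5)  # birth -> presumptive positive report
REPORT_GOAL_ALL = timedelta(days=7)            # birth -> all results reported

def check_goals(birth, collected, arrived, reported, time_critical):
    """Return, for one specimen record, whether each staged goal was met."""
    report_goal = REPORT_GOAL_TIME_CRITICAL if time_critical else REPORT_GOAL_ALL
    return {
        "collected_within_48h": collected - birth <= COLLECTION_GOAL,
        "arrived_within_24h_of_collection": arrived - collected <= LAB_ARRIVAL_GOAL,
        "reported_within_goal": reported - birth <= report_goal,
    }

# Hypothetical specimen: collected at 36 hours, arriving 30 hours after
# collection, with time-critical results reported on day 4.
birth = datetime(2016, 6, 1, 8, 0)
result = check_goals(
    birth,
    collected=birth + timedelta(hours=36),
    arrived=birth + timedelta(hours=66),  # 30 hours after collection
    reported=birth + timedelta(days=4),
    time_critical=True,
)
print(result)  # collected within 48h: True; arrived within 24h: False;
               # reported within goal: True
```

Under the 2015 goals, this hypothetical specimen would be flagged only on stage 2 (lab arrival), which, as the reported data later show, was the goal states found hardest to meet.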
HRSA oversees a number of programs that provide resources to improve newborn screening quality and increase newborn screening education. Following the enactment of the Newborn Screening Saves Lives Reauthorization Act of 2014, some of these programs focused on newborn screening timeliness. One of these programs is NewSTEPs, which is administered by APHL under a cooperative agreement. NewSTEPs began in 2012 to offer a forum for collaboration among state newborn screening officials and other stakeholders; to facilitate continuous quality improvement and data-driven outcome assessments through a data repository; and to create a national newborn screening technical assistance center that provides training, addresses challenges, and supports program improvement through partnerships with newborn screening stakeholders. In 2013, NewSTEPs launched its data repository to collect annual newborn screening data from participating states. To participate in the data repository, states must sign an MOU with APHL; 35 states had signed an MOU as of November 20, 2016, according to HHS. In response to requirements in the Newborn Screening Saves Lives Reauthorization Act of 2014 for HRSA to support timely newborn screening, NewSTEPs updated the data repository to collect, from participating states, timeliness data consistent with the advisory committee’s 2015 time-frame goals. For example, a state’s data in the repository include the percentage of specimens for which all results for all conditions were reported within the advisory committee’s goal of 7 days after birth. NewSTEPs can use each state’s reported percentage for a given time-frame goal to monitor the state’s progress toward meeting the advisory committee’s 95 percent benchmark in a given year—that is, whether screening was completed within a time-frame goal (e.g., 7 days) for 95 percent of a state’s specimens. Most of the states with signed MOUs began entering timeliness data into the data repository in mid-2016. 
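The benchmark monitoring described above reduces to a straightforward percentage calculation per goal. The following is a rough sketch only; the specimen dates and the percent_within_goal helper are invented, not drawn from NewSTEPs or any state’s data:

```python
from datetime import date

BENCHMARK = 95.0  # advisory committee benchmark (percent of specimens)

def percent_within_goal(specimens, goal_days=7):
    """Percent of specimens whose results were reported within goal_days
    of birth. `specimens` is a list of (birth_date, report_date) pairs."""
    met = sum((reported - birth).days <= goal_days
              for birth, reported in specimens)
    return 100.0 * met / len(specimens)

# Invented example: 3 of 4 specimens reported within 7 days of birth.
specimens = [
    (date(2016, 6, 1), date(2016, 6, 6)),
    (date(2016, 6, 1), date(2016, 6, 9)),  # 8 days: misses the goal
    (date(2016, 6, 2), date(2016, 6, 8)),
    (date(2016, 6, 3), date(2016, 6, 9)),
]
pct = percent_within_goal(specimens)
print(pct, pct >= BENCHMARK)  # 75.0 False
```

A state’s repository entry for a given goal would be this percentage computed over all of its specimens for the year, compared against the 95 percent benchmark.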
In addition to incorporating timeliness data in NewSTEPs’ data repository, HRSA oversees NewSTEPs 360, a program that provides technical assistance and collects monthly data on newborn screening timeliness through grants to states. Administered by the University of Colorado’s School of Public Health, in collaboration with APHL under a cooperative agreement with HRSA, NewSTEPs 360 aims to improve timeliness in newborn screening by providing quality improvement training. For example, according to officials involved in administering the program, NewSTEPs 360 holds monthly quality improvement coaching calls intended to help each participating state develop innovative strategies that focus on timeliness barriers. In addition, participating states enter monthly timeliness data into the data repository. Twenty-eight states were participating in this program as of October 26, 2016. Most states that reported 2015 timeliness data (the most recent data available) to NewSTEPs, which collects annual newborn screening data from states, had not met the advisory committee’s 95 percent benchmark for completing the full newborn screening process (stages 1 through 3) for all conditions within 7 days. However, timeliness for completing the full process improved over time for the majority of states. Missing data for several states and variations in data collection limit a full understanding of newborn screening timeliness trends, but HRSA has been taking steps to address these challenges. 
According to the advisory committee’s benchmark, by 2017, states should report newborn screening results for all conditions within 7 days of birth for at least 95 percent of specimens. In 2015, 5 of the 27 states reporting timeliness data for this measure met this 95 percent benchmark. (See fig. 3.) States’ timeliness for completing the full newborn screening process for all conditions improved over time. The number of states meeting the benchmark was higher in 2015 than in the previous 3 years. Likewise, the median percentage of specimens screened within 7 days was higher in 2015 than in the previous 3 years. (See table 1.) According to NewSTEPs, 21 states demonstrated improvement from 2012 to 2015. In 2015, states also had not met the advisory committee’s benchmark for timely reporting of presumptive positive results for time-critical conditions. According to this benchmark, by 2017, states should report these results for 95 percent of specimens within 5 days of birth. In 2015, none of the 16 states that reported on this measure met the 95 percent benchmark. (See fig. 4.) The states’ data for reporting presumptive positive results for time-critical conditions did not indicate consistent improvement over time. The median percentage of specimens screened for time-critical conditions within 5 days increased from about 23 percent in 2012 to 28 percent in 2014, but decreased to about 24 percent in 2015. NewSTEPs noted that reporting results for time-critical conditions within 5 days of birth may be the most important time-frame goal, and while the data indicate that states had difficulty meeting this goal in 2015, the data from 2014 indicate that achieving timely reporting for a high percentage of specimens is possible. For example, in 2014, two states reported meeting the 95 percent benchmark for time-critical conditions. (See table 2.) 
NewSTEPs also noted that time-frame goals specifically for time-critical conditions were not in place before April 2015 (when the advisory committee recommended the current time-frame goals). For non-time-critical conditions in 2015, 2 of 16 states reporting on this measure had met the benchmark of reporting presumptive positive results within 7 days of birth for at least 95 percent of specimens. (See fig. 5.) The states’ data for reporting presumptive positive results for non-time-critical conditions did not indicate consistent improvement over time. The median percentage of specimens screened for non-time-critical conditions within 7 days decreased from about 52 percent in 2012 to about 49 percent in 2013, but then increased to about 52 percent in 2014 and about 55 percent in 2015. (See table 3.) States that reported timeliness data to APHL generally had not met the advisory committee’s 95 percent benchmark for stage 1. In 2015, 10 of 35 states that reported timeliness data for this stage had 95 percent of specimens collected within 48 hours of birth—a stage 1 goal. An additional 5 states came close to, but did not meet, the benchmark. (See fig. 6.) The median percentage of specimens collected within 48 hours of birth was about 93 percent, meaning that half of the states reported having about 93 percent or more of the specimens collected within 48 hours. (See table 4.) In contrast to stage 1, none of the 34 states that reported stage 2 data for 2015 approached the 95 percent benchmark. (See fig. 7.) The median percentage of specimens arriving at the lab within 24 hours of collection was about 7 percent, meaning that half of the states reported having 7 percent or fewer of the specimens arriving at the lab within 24 hours of collection. (See table 5.) The NewSTEPs August 2016 report noted that the advisory committee’s 24-hour goal for specimen arrival is ambitious. 
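A note on the “median percentage” statistic used throughout these comparisons: it is the midpoint of the reporting states’ percentages, so half the states fall at or below it. A small sketch with invented state-level figures (not actual NewSTEPs data):

```python
import statistics

# Invented per-state percentages of specimens arriving at the lab
# within 24 hours of collection (illustrative only, not NewSTEPs data).
state_pcts = [2.0, 4.5, 7.0, 12.0, 31.0]

median_pct = statistics.median(state_pcts)
# Half of the reporting states fall at or below this value.
print(median_pct)  # 7.0
```

Because the median is insensitive to outliers, a handful of high-performing states (like the invented 31.0 here) does not pull it up the way a mean would.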
NewSTEPs also measured the percentage of specimens each state reported arriving at the lab within 48 hours of collection and found that the median percentage of specimens arriving within that more generous time-frame goal was higher (about 53 percent). Missing data for a number of states limit a full understanding of newborn screening timeliness trends. The NewSTEPs August 2016 report included annual timeliness data for 38 states, but did not include any data for 15 states. According to APHL officials, none of the states expressly declined to provide data to NewSTEPs for the August 2016 report; however, some states did not respond to the data request, and some states’ officials indicated that they were willing to provide data but could not do so in time, citing resource constraints. In addition, the 38 states that provided data did not do so for all time-frame goals or all years (2012 through 2015). APHL officials told us that the lack of data for certain time-frame goals or years was due to factors such as competing priorities or limitations in states’ information systems, specifically in LIMS. For example, APHL reported that two states do not electronically capture the date and time that test results are reported to providers, and newborn screening officials in those states could not search paper records in time to provide the data. Additionally, a few states had recently changed their LIMS, which resulted in limited access to data for some years. Variations in data collection also limit a full understanding of newborn screening timeliness trends. 
According to APHL officials, the data in the NewSTEPs August 2016 report generally represent with accuracy the time taken to screen specimens in reporting states, but there are a number of limitations, including the following examples:

Although the advisory committee’s time-frame goals apply to first specimens only, some states’ data did not distinguish a lab’s receipt of a first specimen from receipt of a subsequent specimen, which can result in the appearance of longer screening times (that is, longer times from birth to specimen collection and birth to reporting results) for such states.

Variation exists in how state labs define specimen arrival at the lab, which can be the time a specimen is delivered by a courier, the time lab staff record receipt of the specimen in LIMS, or the time lab staff initiate testing of the specimen. This variation can affect the data reported for stage 2 (specimen collection to lab arrival).

Many states’ LIMS do not allow lab staff to record separate dates for when results for time-critical conditions and results for non-time-critical conditions from the same specimen card were reported to providers, even though time-critical results may be reported earlier. These systems typically include data entry fields that capture only the date that all results (presumptive positive and normal) for all conditions (time-critical and non-time-critical) were reported to providers, which can result in the appearance of longer newborn screening times for states with such systems.

With HRSA’s support, NewSTEPs has been taking steps to improve the completeness and consistency of the annual newborn screening timeliness data that states submit to the data repository. APHL officials told us that they expect to have all 53 states sign the MOU and enter data into the data repository. 
As participation in the data repository increases and data definitions are used more consistently across states, NewSTEPs can more accurately assess timeliness in states across the country. Steps taken by NewSTEPs include the following:

Increasing participation in the data repository. According to HHS, as of November 20, 2016, 35 out of 53 states had signed an MOU with APHL to provide annual data to the data repository for future analysis and, therefore, receive data-related technical assistance from NewSTEPs. APHL officials told us they have been working with the remaining 18 states to address issues, such as confidentiality concerns, in an effort to have the MOUs signed. APHL officials say they have been reaching out to achieve buy-in from the remaining states through a variety of methods, including sending emails; making phone calls; conducting in-person meetings; incorporating reminders in webinars on newborn screening; and engaging with organizations, such as the American Academy of Pediatrics. APHL officials told us that while the data in the NewSTEPs August 2016 report provide a meaningful understanding of timeliness in a large number of states, they expect this understanding to improve as more states sign MOUs and submit data to the data repository.

Clarifying data definitions. NewSTEPs has also reviewed the data definitions used for the data repository to address variability in data collection and reporting among states. APHL officials said that, as a result of this review, NewSTEPs revised guidance documents for its data dictionary to, for example, more clearly separate screening timeliness data for first specimens from data for subsequent specimens. NewSTEPs is working with states participating in the program to help ensure they use these revised definitions consistently when submitting timeliness data to the data repository. 
APHL officials told us they plan to publish state-specific reports on the NewSTEPs website by early 2017 to promote continuous quality improvement by allowing states and others an opportunity to review states’ progress toward meeting the advisory committee’s benchmarks. According to these officials, each state will be able to track its progress on a specific time-frame goal over time, as well as examine how its timeliness compares to that of other states. State newborn screening officials identified numerous barriers to timeliness in each of the three stages of the newborn screening process and developed a variety of strategies to address these barriers. HRSA, through its cooperative agreement for NewSTEPs 360, has been funding activities to provide technical assistance to states to address barriers and improve the timeliness of newborn screening. Newborn screening officials from 51 states who responded to the advisory committee’s 2014 survey identified numerous barriers to timeliness in each of the three stages of the newborn screening process. Examples of barriers include a lack of understanding of the importance of timely screening among providers performing out-of-hospital births (stage 1), limited courier availability (stage 2), and insufficient lab operating hours (stage 3). Newborn screening officials in selected states told us they developed a variety of strategies to address these barriers. Newborn screening officials who responded to the 2014 survey identified barriers to timely collection of newborn screening specimens. Barriers included nursing protocols that are not always consistent with advisory committee time-frame goals, lack of feedback to hospitals on timeliness performance, and lack of understanding of the importance of timely screening for out-of-hospital births. (See table 6.) Newborn screening officials in the four selected states we interviewed reported developing strategies to address the barriers. 
Newborn screening officials from selected states reported that following nursing protocols that are inconsistent with the advisory committee’s time-frame goals can cause delays. For example, according to newborn screening officials in one state, nursing protocols often dictate that specimen collection be performed as late as possible prior to the baby’s discharge from the hospital. According to these officials, this protocol can result in late collection of some specimens; for example, among newborns born via Caesarean-section, who often have longer hospital stays. To improve the timeliness of specimen collection, this state recommended that hospitals make nursing protocols consistent with the advisory committee goal to collect specimens within 24 to 48 hours of birth, and has developed educational strategies to advise providers to aim for collection to take place within 24 hours of birth. A lack of feedback from state newborn screening officials to hospitals was also identified as a barrier to timely specimen collection, according to newborn screening officials who responded to the 2014 survey, because providers may be unaware that they are not meeting timeliness goals. Officials we interviewed from three states reported developing or improving methods of providing feedback to hospitals through online quality reports or report cards. For example, newborn screening officials in one state we interviewed said they provide feedback to hospitals through report cards that evaluate hospital performance based on the advisory committee’s goal for timely specimen collection. The report cards are disseminated monthly and include an outlier report that alerts facilities when specific specimens do not meet the timeliness goal. According to the officials, these outliers showed problems with timeliness at neonatal intensive care units (NICU). 
As a result, the state began reporting NICU timeliness separately on the report cards; subsequently, newborn screening officials reported that NICU timeliness has improved.

Arizona’s Efforts to Address Barriers to Timely Screening for Out-of-Hospital Births

Newborn screening officials in Arizona have focused on improving the timeliness of newborn screening for out-of-hospital births. Generally, according to newborn screening officials, babies born outside of a hospital—such as in birthing centers (freestanding facilities separate from hospitals) and home births—have higher rates of delayed specimen collection. Newborn screening officials in Arizona attributed this to a variety of causes, including a lack of understanding among providers about the importance of timely screening and a lack of standardization of protocols for birthing centers and home births. For example, for healthy, low-risk deliveries outside of hospitals, midwives often leave 4 hours after the baby is born and may not follow up during the period when specimen collection should occur. Since 2011, Arizona has provided education and basic background training on newborn screening to midwives individually and through the state’s midwifery association to help address this barrier. Newborn screening officials explained that following the increased outreach and training, newborn screening timeliness for out-of-hospital births has improved, but noted that midwives continue to have problems seeking reimbursement for newborn screening services. Newborn screening officials who responded to the 2014 survey identified a variety of barriers that may delay the arrival of newborn screening specimens at the state lab, including hospitals and other providers waiting to send specimens to the lab in batches, insufficient lab operating hours, and a lack of courier services for transporting specimens. (See table 7.) 
One barrier to timely completion of stage 2 (collection to lab arrival) identified by officials responding to the 2014 survey was providers waiting to send specimens to the lab in batches. To address this practice, known as batching, officials from selected states reported employing strategies that involved providing feedback and training to providers at hospitals. According to newborn screening officials in one state, when specimens take more than 3 days from birth to arrive at a lab—which corresponds with the combined time-frame goals for stage 1 and stage 2—hospitals are asked to review those cases and avoid batching in the future. Another barrier identified by officials responding to the 2014 survey is that lab staff are not always available to receive newborn screening specimens, because the lab’s operating hours do not align with courier service, mail, or other delivery service times. In three of the selected states, state officials told us that they addressed this barrier by having lab staff available on Saturday to receive and test specimens or to ensure they can be tested first thing Monday morning. Officials in one of these states also reported developing a process for cases in which a geneticist believes a baby’s specimen is likely positive for a time-critical condition. Under this process, the baby’s physician calls the state newborn screening program, and a courier or a state health official will pick up the specimen within 2 hours for transport to the lab for immediate testing.

Colorado’s Efforts to Expand Courier Service to Mitigate Geographic Challenges

Newborn screening officials in Colorado told us that they expanded courier service to all hospitals in 2015 to address barriers to timely arrival of specimens at the state lab after specimen collection. 
According to these officials, prior to 2015, rural hospitals facing geographic challenges, such as long distances to the state lab, relied on mail services to transport newborn screening specimens to the lab. Beginning in April 2015, the newborn screening program’s courier service was expanded to all hospitals in the state, including these rural hospitals. Newborn screening officials explained that courier service is particularly beneficial for hospitals located long distances from the state lab, because it can include direct transport from the hospital to a nearby airport, a flight, and direct transport from the airport to the lab. In addition, courier service was expanded to have pickup 6 days per week for all hospitals. This increase in courier service provides additional opportunities for timely specimen pickup from the hospital for transport to the lab. According to Colorado newborn screening officials, these efforts have reduced specimen transport time for some facilities by up to 3 days. Newborn screening officials who responded to the 2014 survey identified barriers to timely results reporting, such as insufficient lab operating hours and labs’ reliance on the mail to communicate results. (See table 8.) In addition to affecting timely specimen arrival at the lab, a lab’s operating hours may also affect how quickly staff are available to test specimens and report results. Officials from one state told us that their strategy to address this barrier included expanding lab operating hours to 6 days a week: specimens are processed Monday through Saturday, allowing the lab to report results for time-critical conditions to providers on Sunday instead of waiting until Monday. Additionally, according to some state newborn screening officials we interviewed, another barrier to timely reporting is that some labs report results to providers via mail; as a result, providers could wait up to a week to receive results after they are sent. 
Newborn screening officials in one state told us that they updated provider records to include fax numbers and began faxing newborn screening results to providers. Newborn screening officials in two other states told us that they are beginning or planning to report presumptive positive results electronically prior to sending them by mail; for example, seven hospitals in one of these states are piloting a program that allows providers to electronically access results as soon as screening tests are completed at the lab.

Wisconsin’s Efforts to Improve Its Laboratory Information Management System

Newborn screening officials in Wisconsin reported updating their laboratory information management system (LIMS) to align with newborn screening quality indicators, to provide better feedback on newborn screening timeliness to hospitals, and to allow electronic messaging between hospitals and labs in the future, increasing record accuracy and reducing the need for manual entry. State newborn screening officials told us that to align LIMS with newborn screening quality indicators, they added new fields to LIMS to accurately measure time taken to complete the stages of the newborn screening process. For example, they explained that by adding a field in LIMS to record the time a specimen was received at a lab, they can more accurately measure the amount of time between specimen collection and receipt at the lab. Newborn screening officials said that with this feedback, hospitals should be able to better identify changes needed to improve timeliness. Wisconsin also reported updating its LIMS to meet Health Level 7 standards, known as HL7, which provide a framework for health information retrieval and exchange from one information system to another (in this case from hospitals’ information systems to LIMS). 
Wisconsin newborn screening officials are working with hospitals to standardize information in their electronic health information systems so that newborn screening tests can be ordered electronically and LIMS can automatically retrieve and exchange demographic and other information from hospital systems, reducing the need for manual entry of information and increasing accuracy. Finally, Wisconsin officials told us that they are creating a web portal that allows providers to access their patients’ newborn screening results in LIMS online in real time, reducing the amount of time taken to report newborn screening test results.

HRSA, through its cooperative agreement for NewSTEPs 360, has recently focused on improving newborn screening timeliness by funding activities to provide technical assistance to and information sharing among states. According to officials involved in administering NewSTEPs 360, since January 2016, they have held telephone calls to provide coaching to 20 of the 28 states participating in the program; these officials said they expect to begin coaching calls with the remaining 8 states by January 2017. The goal of these coaching calls is to help states achieve the advisory committee’s benchmark of timely reporting of newborn screening results for 95 percent of newborn screening specimens by 2017. According to newborn screening officials from two states that participate in NewSTEPs 360, the coaching calls help states prioritize their efforts to improve newborn screening timeliness and help states hold themselves accountable for meeting milestones, because they report on progress made during each monthly call. In addition, these officials said that participating in NewSTEPs 360 allows them to learn about strategies developed by other participating states.
For example, officials in one state told us that they formed a small group of officials from states working to update their LIMS to meet Health Level 7 standards (known as HL7) for electronic health information exchange. (These standards provide a framework for health information retrieval and exchange from one information system to another—in this case from hospital information systems to LIMS.) According to these officials, participation in this small group helped them identify milestones to break their project into manageable pieces. The officials participating in the small group also said that they compared and shared strategies for meeting both these short-term milestones and their overall goal for updating LIMS.

Officials involved in administering NewSTEPs 360 told us the program has started taking steps to analyze and share information about barriers and strategies gathered from states that enter monthly timeliness data and receive technical assistance through the program, in order to help identify and promote the use of successful strategies. According to program officials, these steps include the following examples:

- Sharing NewSTEPs’ August 2016 report on newborn screening timeliness, which contained information on activities that some NewSTEPs 360 states had undertaken to improve timeliness. The report included, for example, strategies for improving newborn screening education for providers.
- Sharing information on lessons learned (such as factors that may predict newborn screening timeliness in hospitals) through an online video and presenting strategies at the 2016 APHL Newborn Screening and Genetic Testing Symposium.
- Sharing information in messages sent to states through a listserv. For example, in September 2016, NewSTEPs 360 sent a message summarizing a new cystic fibrosis newborn screening timeliness initiative.
For this initiative, NewSTEPs 360, in collaboration with the Cystic Fibrosis Foundation, convened stakeholders to identify strategies for reporting test results for cystic fibrosis in a more timely way.
- Analyzing monthly timeliness data from states participating in NewSTEPs 360. According to APHL officials, as of November 16, 2016, 20 of 28 states participating in NewSTEPs 360 had entered monthly timeliness data into the data repository, and the program had begun analyzing these data to track any progress in these states. These officials told us they expect that the remaining states will begin submitting monthly timeliness data by mid-December 2016.
- Coding transcripts from the monthly coaching calls for states participating in NewSTEPs 360 to categorize and track the barriers experienced by states and the strategies developed to address them. As calls are completed over time, officials involved in administering NewSTEPs 360 believe this will allow them to compare the resulting data with the monthly timeliness data to measure the impact of developing a given strategy. These officials told us that they expect to present results from this analysis to all states (regardless of participation in NewSTEPs 360) in 2018.

In addition, HRSA funded targeted technical assistance through NewSTEPs 360 to help nurses improve newborn screening timeliness. Under a sub-award from NewSTEPs 360, the Genetic Alliance started providing training to nurses on the importance of timely screening. This includes, for example, free education on newborn screening specimen collection through an online training portal. According to an official at Genetic Alliance, the organization is also involved in identifying barriers that contribute to newborn screening delays and strategies to address such barriers.
Based on information gathered in focus groups with nursery and NICU nurses held in June 2016, Genetic Alliance drafted a number of recommendations for hospitals and nurses to help address barriers to timely newborn screening. These recommendations include working with nurses to better integrate updated newborn screening guidance—such as the advisory committee’s 2015 time-frame goal for collecting newborn screening specimens—into nurses’ protocols for newborn screening.

It is too soon to determine which strategies, if any, developed through HRSA-supported technical assistance have a measurable impact on improving timeliness in states participating in NewSTEPs 360, and whether these strategies could be effective in additional states. The program began collecting monthly timeliness data from participating states in January 2016, and not all states have started entering data; eight states that were selected to participate in NewSTEPs 360 in October 2016 have not yet started participating in monthly coaching calls. In addition, Genetic Alliance has not yet issued its recommendations to hospitals and nurses. According to HRSA officials, the agency will be conducting annual monitoring of NewSTEPs 360, and a final report that includes performance measures for NewSTEPs 360 is required to be completed by late 2018. HRSA officials told us that the report will capture the extent to which states’ timeliness improved as a result of technical assistance received through the program.

We provided a draft of this report to HHS for review and comment. In its written comments, reproduced in appendix III, HHS generally agreed with our data-supported findings, but noted two concerns about the conclusions we have drawn from the findings. First, the department noted concern with our use of the advisory committee’s benchmark, which it encouraged states to achieve by 2017, to assess whether states screened newborns in a timely manner.
We report that data provided by 38 states for 2012-2015 showed that states generally had not met the advisory committee’s recommended 2017 benchmark of meeting each time-frame goal for at least 95 percent of specimens. Our analysis is of the most recent data available, and our report states clearly that the advisory committee recommended states achieve these goals by 2017. Time-frame goals for completing newborn screening were initially identified in 2005, and concerns about the timeliness of screening date back to at least late 2013. Our analysis indicates that substantial work remains for the majority of states to achieve the recommended benchmark by 2017, based on the latest available information. HHS also commented that our findings were limited by not including point-of-care screening within the definition of newborn screening. In the report, we include information on the 2 conditions—critical congenital heart disease and hearing loss—that use point-of-care screening, and note that these are 2 of the 32 conditions on the RUSP. However, since these two conditions are not subject to the advisory committee’s time-frame goals (which apply to newborn screening using a blood specimen), and NewSTEPs’ August 2016 report did not include data on timeliness for these two conditions, we did not include them in the timeliness data in our report or in the description of barriers and strategies. In addition, HHS provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Secretary of Health and Human Services, and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report.
Other major contributors to this report are listed in appendix IV.

To examine what is known about the timeliness of newborn screening for heritable conditions, we reviewed timeliness data from states included in an August 2016 report from the Newborn Screening Technical assistance and Evaluation Program (NewSTEPs). NewSTEPs is administered by the Association of Public Health Laboratories (APHL), in collaboration with the University of Colorado’s School of Public Health, through a cooperative agreement with the Health Resources and Services Administration (HRSA), an agency within the Department of Health and Human Services (HHS). The August 2016 report included data collected from states through a data repository maintained by APHL under NewSTEPs. The data repository includes (but is not limited to) annual timeliness data collected from states participating in NewSTEPs. These annual timeliness data are based on time-frame goals recommended by HHS’s Advisory Committee on Heritable Disorders in Newborns and Children in April 2015 and data definitions developed by a workgroup composed of newborn screening experts and stakeholders convened by APHL. For individual newborn screening stages (e.g., specimen collection to lab arrival) or the full newborn screening process, the data measure the percentages of a state’s specimens screened within the advisory committee’s time-frame goals. For example, the data measure the percentage of specimens for which all results for all conditions were reported within the advisory committee’s goal of 7 days after birth. For the August 2016 report, NewSTEPs requested annual timeliness data from all 53 states, and 38 states submitted data that were included in this report. (See table 9.)
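The quality indicator described above (the percentage of a state's specimens screened within a time-frame goal) can be illustrated with a short sketch. This is a hypothetical example of how such a percentage could be computed from birth and reporting dates, not NewSTEPs' actual code; the data layout and function name are assumptions:

```python
from datetime import date

def percent_within_goal(birth_and_report_dates, goal_days=7):
    """Percentage of specimens whose results were reported within
    `goal_days` of birth. The 7-day default mirrors the advisory
    committee's goal of reporting all results within 7 days of birth.
    Input is a list of (birth_date, report_date) pairs."""
    within = sum(
        1 for birth, reported in birth_and_report_dates
        if (reported - birth).days <= goal_days
    )
    return 100.0 * within / len(birth_and_report_dates)

specimens = [
    (date(2015, 3, 1), date(2015, 3, 6)),   # reported on day 5: within goal
    (date(2015, 3, 1), date(2015, 3, 10)),  # reported on day 9: late
]
print(percent_within_goal(specimens))  # 50.0
```

A state's annual submission would compare such a figure with the committee's suggested 95 percent benchmark for each stage and for the full screening process.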
Of the 38 states reporting timeliness data included in the report, 20 states entered data directly into the data repository; the other 18 states, which did not have a signed memorandum of understanding (MOU), submitted equivalent data using a spreadsheet provided by NewSTEPs. For all 38 states included in the report, NewSTEPs determined the percentages of specimens screened within each of the committee’s 2015 time-frame goals (for the full newborn screening process or individual stages) in 2012, 2013, 2014, and 2015. The NewSTEPs report also included the median, quartiles, minimum, and maximum percentage meeting the 2015 time-frame goal in each year.

We assessed the reliability of the annual data in the NewSTEPs August 2016 report for the purposes of examining what is known about the timeliness of newborn screening by taking several steps. For example, we reviewed spreadsheets sent to states without an MOU and confirmed that the spreadsheets were based on the same definitions as the data repository (used by states with signed MOUs). We also confirmed that the spreadsheets had a built-in mechanism to mitigate data entry errors, such as automatic calculation of percentages. We interviewed officials from APHL and the University of Colorado’s School of Public Health to confirm that the data repository also had mechanisms to reduce risks of error. For example, these officials said the data repository can automatically calculate percentages and identify obvious data errors, such as values over 100 percent. These officials also noted that the data combined from the data repository and spreadsheets were based on common data definitions and that the data were carefully reviewed and searched for outliers before being reported. Based on these steps, we determined that these data were sufficiently reliable for the purposes of our report.
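A check like the one officials described (automatically flagging obvious data errors, such as percentage values over 100) might look like the following sketch; the function name and error messages are illustrative assumptions, not the repository's actual implementation:

```python
def validate_percentage(value):
    """Flag obviously invalid timeliness percentages, mirroring the
    kind of automatic error check the data repository reportedly
    performs. Returns a list of error descriptions (empty if valid)."""
    errors = []
    if value < 0:
        errors.append("percentage below 0")
    if value > 100:
        errors.append("percentage over 100")
    return errors

print(validate_percentage(97.5))   # []
print(validate_percentage(104.2))  # ['percentage over 100']
```

Running such checks at entry time, together with automatic percentage calculation, reduces the chance that transcription errors reach the reported data.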
The timeliness data for the states providing data for the NewSTEPs August 2016 report are not generalizable to other states, but provided valuable insight on what is known about newborn screening timeliness in the reporting states. In addition to reviewing the annual timeliness data in the August 2016 report, we reviewed time-frame goals from the advisory committee included in an April 2015 letter to the Secretary of Health and Human Services, as well as results from the committee’s 2014 survey, which analyzed timeliness for specimens screened from January through May 2014. We also reviewed documents obtained from APHL officials that describe the data repository, quality indicators, and methods for entering data. We interviewed officials from APHL and the University of Colorado’s School of Public Health to learn about their activities related to timeliness, including how they manage the data repository, and to discuss the August 2016 report. We also interviewed these officials about any plans to track newborn screening timeliness in the future. We interviewed newborn screening stakeholders identified by APHL and on HRSA’s website to learn about efforts to track newborn screening timeliness. These stakeholders included a member of the advisory committee who co-chaired a timeliness workgroup, a member of the advisory committee who conducted research related to newborn screening barriers, and officials from associations (Association of Maternal & Child Health Programs, Association of State and Territorial Health Officials, Genetic Alliance, and March of Dimes). Stakeholders stated that NewSTEPs’ work in tracking timeliness through the data repository represented the most comprehensive source of information available for describing what is known about newborn screening timeliness.
To examine the barriers identified as contributing to delays in newborn screening for heritable conditions, and strategies being used to address them, we reviewed the advisory committee’s 2014 Newborn Screening Timeliness Survey Report, which included findings from a survey of states intended to assist with assessing policies and practices related to the timeliness of newborn screening. The committee, in conjunction with APHL, fielded the survey in the summer of 2014 to identify barriers to and strategies for timely newborn screening, among other things. The survey asked respondents to identify (1) the extent to which certain barriers (identified by newborn screening experts prior to the survey) affected newborn screening timeliness in their state, and (2) the strategies that were ongoing in their state to help the newborn screening system meet the recommendations for timely newborn screening. State officials indicated whether a number of barriers for each stage of the newborn screening process identified in the survey had a “major impact,” “moderate impact,” “minor impact,” or “no impact” on timeliness in their state. State officials could also include in written responses additional barriers affecting timeliness not previously identified. The advisory committee obtained survey responses from newborn screening officials in 51 states, although not all states responded to all questions. We selected certain barriers for which we collected more detailed information from states. To select these barriers, we reviewed state responses in the advisory committee’s 2014 survey report, published presentations from APHL’s 2016 Newborn Screening and Genetic Testing Symposium, and a 2015 report by the Association of State and Territorial Health Officials.
We selected the barriers most frequently indicated by survey respondents as having a major or moderate impact on timeliness, as well as additional barriers that were frequently reported in the survey’s written responses, which were grouped into categories in the survey report. We also selected barriers from the survey report that fewer respondents indicated as having a major or moderate impact on timeliness, but which were highlighted by a number of state newborn screening officials in published presentations and reports. We combined barriers that were similar into broader topics. For example, for stage 1, we grouped barriers related to staff training and turnover into one topic related to staffing issues.

We interviewed officials from four selected states to collect more detailed information on the barriers we selected from the survey report. These selected states were Arizona, Colorado, Minnesota, and Wisconsin. We selected these states because, according to our review of documents from APHL and the Association of State and Territorial Health Officials, they were focusing on activities related to newborn screening timeliness, which would allow them to provide in-depth information on barriers and strategies. In addition, these four states’ activities related to a range of barriers and strategies. For example, one state focused on improving timeliness for out-of-hospital births, while another state focused on improving its laboratory information management system to provide better feedback to hospitals. Through a combination of written responses and interviews, officials from these states provided more detailed information on how the identified barriers may have contributed to delays in their states, and described strategies they had developed or planned to develop to address these barriers. The results of our review of states are not generalizable to other states, but provided insights on these issues.
In addition to collecting information from states on barriers and strategies for timely newborn screening, we reviewed documents from APHL and the University of Colorado’s School of Public Health involved in NewSTEPs and NewSTEPs 360, including NewSTEPs’ August 2016 report (which contained information on activities that some states had undertaken to improve timeliness), and interviewed these officials to clarify or elaborate on information about barriers to timely newborn screening and strategies to address such barriers. Similarly, we reviewed information from and interviewed newborn screening stakeholders identified by APHL officials and on HRSA’s website to elaborate on barriers and strategies. These stakeholders included a member of the advisory committee who co-chaired a timeliness workgroup, a member of the advisory committee who conducted research related to newborn screening barriers, and officials from associations involved in efforts to improve newborn screening (such as the Association of State and Territorial Health Officials and Genetic Alliance).

Inability to process the amino acid leucine, leading to low blood sugar and accumulations of several organic acids, especially after illness or missed meals. Untreated, can lead to brain damage, mental retardation, coma, and death. Treatment includes a diet low in protein and fat, and high in carbohydrates.

Buildup of argininosuccinic acid and ultimately ammonia, leading to brain swelling, coma, and sometimes death. Treatment consists of a low-protein diet, frequent meals, medications to prevent ammonia buildup, nutritional supplements, and sometimes a liver transplant.

Periodic episodes of acid buildup, often triggered by illness, which can lead to coma, brain damage, and death. Intravenous treatment to regulate blood sugar and blood acid levels can permit normal development.

Buildup of citrulline and ultimately ammonia, which untreated can lead to seizures, coma, brain damage, and death.
Treatment with low-protein diet, medications to prevent ammonia buildup, and nutritional supplements to allow normal development.

Lack of the liver enzyme needed to convert galactose, a major sugar in milk, into glucose (blood sugar). Galactose then accumulates in and damages vital organs, leading to blindness, severe mental retardation, infection, and death. Milk and other dairy products must be eliminated from the baby’s diet for life. This greatly improves the outlook for affected infants, but risk of mild developmental delays remains.

A group of inherited disorders resulting from deficiencies of hormones produced by the adrenal gland. Severe forms of CAH, if undetected and untreated, cause life-threatening salt loss via urine. Treatment includes hormone replacement.

Inadequate levels of an enzyme that helps break down the amino acids lysine, hydroxylysine, and tryptophan, which are building blocks of protein. Often unrecognized for up to 18 months until childhood illness triggers onset of symptoms. Without early diagnosis and prompt treatment when needed, can lead to brain damage, low muscle tone, cerebral palsy-like symptoms, and death.

A condition in which the body is unable to break down proteins and carbohydrates. People with this condition have trouble using biotin, a vitamin that helps turn certain carbohydrates and proteins into energy for the body. It can lead to a harmful buildup of organic acids and toxins in the body. Early detection and treatment with biotin supplements can prevent the severe outcomes of MCD.

Inability to process the amino acid leucine. Can cause coma, brain damage, or death in infancy, or emerge later in childhood after infectious illness. Early diagnosis and treatment with low-protein diet and nutritional supplements allow most children to develop normally.

Inability to convert certain fats to energy.
Symptoms such as feeding difficulties, low blood sugar, and lack of energy can begin soon after birth, and people with this condition may experience heart problems, difficulty breathing, liver failure, and sudden death. Treatment includes a high-carbohydrate, low-fat diet, nutritional supplements, and frequent meals.

Genetic metabolic disorder (incidence <1 in 100,000) with mild to severe symptoms, which can lead to mental retardation or death. Treatment consists of a special diet, continued indefinitely.

Seemingly well infants suddenly develop seizures due to low blood sugar. People with this condition are at risk of seizures, breathing difficulties, liver problems, brain damage, coma, and sudden death. Treatment includes nutritional supplements and frequent meals.

Defect in processing four amino acids, resulting in illness in first week of life. Severity varies, but death during first month and lifelong brain damage are common. Treatment includes low-protein diet, vitamin B12 injections, and nutritional supplements.

Defect in the processing of four amino acids leading to illness in newborns including brain damage, coma, and death. Even with treatment, which includes a low-protein diet and nutritional supplements, some children have development delays, seizures, increased muscle tone, frequent infections, and heart problems.

Seemingly healthy infants can die of what appears to be sudden infant death syndrome. Other infants may develop low muscle tone, seizures, heart failure, and coma, often following illness. Treatment based on frequent meals, a low-fat diet, and nutritional supplements.

Inability to convert certain fats to energy. Unless treated, infants often develop heart and liver failure, dying before age one. Treatment includes a high-carbohydrate, low-fat diet, nutritional supplements, frequent meals, and limiting exercise.
Defect in processing the amino acid leucine, leading to brain damage, seizures, liver failure, and infant death, or sometimes no symptoms until adulthood. Symptoms may develop after childhood illness. Treatment includes a low-protein diet.

An inherited disorder resulting in lack of the enzyme that recycles the vitamin biotin. May cause frequent infections, uncoordinated movement, hearing loss, seizures, and mental retardation. Undiagnosed and untreated, can lead to coma and death. If condition is detected soon after birth, problems can be prevented with oral high-dose biotin.

Cells cannot readily absorb carnitine (incidence <1 in 100,000), needed to transfer fatty acids into mitochondria (which supply cells with energy). Results include low blood sugar and sudden death in infancy. Older children may present with progressive heart failure. High-dose carnitine permits normal development.

Inability to process the essential amino acid phenylalanine, which accumulates and damages the brain. Can lead to severe mental retardation unless detected soon after birth. Treatment includes a special formula and a low-protein diet, continued indefinitely.

A group of seven heart defects. Babies born with CCHD are at significant risk of disability or death if not diagnosed soon after birth.

A common inherited disorder, resulting in lung and digestive problems, and death by age 35, on average. Early diagnosis and treatment may improve the growth of babies and children with CF.

Babies with Pompe disease have trouble breaking down a large sugar called glycogen. Too much glycogen can keep certain organs and tissues, like the heart and muscles, from working properly. Treatment includes enzyme replacement therapy, physical therapy, and respiratory therapy.

Sickle cell disease is an inherited disease of red blood cells. Individuals with sickle cell disease have abnormal hemoglobin, the protein inside red blood cells that carries oxygen to every part of the body.
Hb S/BTh is a form of sickle cell anemia, in which the child inherits one sickle cell gene and one gene for beta thalassemia, another inherited anemia. Symptoms are milder than for Hb SS, though severity varies. Routine treatment with penicillin may not be recommended for all affected children.

Sickle cell disease is an inherited disease of red blood cells. Individuals with sickle cell disease have abnormal hemoglobin, the protein inside red blood cells that carries oxygen to every part of the body. Hb S/C is another form of sickle cell disease, in which the child inherits one sickle cell gene and one gene for another abnormal type of hemoglobin. Hb S/C tends to be milder than Hb SS; therefore, treatment with penicillin may not be recommended.

Without early testing, most babies with hearing loss are not diagnosed until age two or three. By then, they often have delayed speech and language development. Early diagnosis allows use of hearing aids by six months, helping prevent serious speech and language problems.

Lack of an enzyme that converts the amino acid homocysteine into cystathionine, needed for normal brain development. Untreated, leads to mental retardation, eye problems, skeletal abnormalities, and stroke. Treatment consists of a special diet, one or more vitamins (B6 or B12), and other supplements.

Inherited vitamin metabolism defect. Can lead to buildup of acids in blood, brain damage, seizures, paralysis, coma, and death. Treatment includes B12 injections and a low-protein diet.

Thyroid hormone deficiency that severely retards growth and brain development. Treatment includes thyroid hormone replacement therapy with dietary restrictions.

A group of rare inherited disorders characterized by defects in two critical immune system cells that are normally mobilized by the body to combat infections.
SCID has also been referred to in the popular media as the “bubble boy disease.” Without treatment, infants with SCID are more susceptible to and can develop recurrent infections, leading to failure to thrive and often death.

Sickle cell disease is an inherited disease of red blood cells. Individuals with sickle cell disease have abnormal hemoglobin, the protein inside red blood cells that carries oxygen to every part of the body.

Lack of an enzyme that causes the byproducts of the amino acid tyrosine, particularly a very toxic compound (succinylacetone), to build up in the liver. Fatal liver and kidney failure may result. Treatment includes dietary restrictions and medication to help protect the brain, liver, and kidneys.

In 2014, the advisory committee identified 16 of 32 conditions as “time-critical” conditions. These are conditions in which acute symptoms or potentially irreversible damage could develop in the first week of life, and for which early recognition and treatment can reduce the risk of illness and death.

In addition to the contact named above, Kim Yamane (Assistant Director), Hernan Bozzolo (Analyst-in-Charge), Emily Binek, Jazzmin Cooper, Drew Long, Vikki L. Porter, and Jennifer Whitworth made key contributions to this report.
Each year, over 12,000 newborns are born with heritable or other conditions that require early detection and treatment. Newborn screening is a state public health activity, and includes the collection of a blood specimen from the newborn, specimen arrival at a state's lab, and results reporting. Barriers at any stage of this process can lead to delays in treatment and potential harm to the newborn. The Newborn Screening Saves Lives Reauthorization Act of 2014 included improving timeliness as an explicit goal for HRSA-supported newborn screening programs, which include technical assistance for and data collection from participating states. The act included a provision for GAO to review newborn screening timeliness.

This report examines (1) what is known about the timeliness of newborn screening for heritable conditions; and (2) barriers identified as contributing to screening delays, and strategies used to address them. GAO reviewed time-frame goals from the advisory committee, an August 2016 report from NewSTEPs with an analysis of annual timeliness data from states for 2012 through 2015 (the most recently available data), and a 2014 report on a survey conducted for the advisory committee. GAO also reviewed relevant documents and interviewed officials from NewSTEPs, two advisory committee members who worked on timeliness issues, and newborn screening officials in four states selected because they were focusing on activities related to newborn screening timeliness.

Most states that reported timeliness data had not screened newborns within recommended goals to detect conditions that may require treatment. The Department of Health and Human Services' (HHS) Advisory Committee on Heritable Disorders in Newborns and Children recommended time-frame goals in 2015 for newborn screening, such as reporting all results within 7 days of birth.
Data provided by 38 states for 2012-2015 showed that states generally had not met the committee's suggested benchmark of meeting each time-frame goal for at least 95 percent of specimens, which the committee encouraged states to achieve by 2017. Missing data and variations in data collection limit a full understanding of timeliness trends, but HHS's Health Resources and Services Administration (HRSA) has supported activities to address these challenges. HRSA supports the Newborn Screening Technical assistance and Evaluation Program (NewSTEPs), which collects newborn screening data. NewSTEPs has been taking steps to improve data for future analysis, such as by clarifying data definitions and working with states to help ensure they use these definitions when submitting timeliness data.

State newborn screening officials identified numerous barriers to timely newborn screening, and a variety of strategies to address them. Newborn screening officials who responded to the advisory committee's 2014 survey identified barriers, such as a lack of understanding of the importance of timely screening for out-of-hospital births, limited courier availability to transport specimens to a lab, and insufficient lab hours. Selected state newborn screening officials interviewed by GAO reported developing various strategies to address these barriers. For example, one state increased courier service so rural hospitals located far from the state's lab could shorten specimen transport time. HRSA has been providing states with technical assistance, but it is too soon to determine which strategies developed through this technical assistance, if any, will have a measurable impact on timeliness.

In commenting on a draft of this report, HHS generally agreed with the report's findings, but questioned the use of 2017 benchmark goals to measure performance and the exclusion of two conditions. GAO believes its use of the 2017 benchmark and scope were appropriate, as discussed in the report.
Anthrax is an acute infectious disease caused by the spore-forming bacterium Bacillus anthracis. The anthrax bacterium is commonly found in the soil and forms spores (like seeds) that can remain dormant in the environment for many years. Human anthrax infections are rare in the United States and are usually the result of occupational exposure to infected animals or contaminated animal products, such as wool, hides, or hair. Although infection in humans is rare, a person can die if airborne anthrax spores are inhaled into the lungs. Once the spores are airborne, there is a greater possibility that they will be inhaled. Medical experts believe that symptoms of inhalation anthrax (sore throat, muscle aches, and mild fever) typically appear within 4 to 6 days of exposure, depending on how the disease is contracted. While anthrax is potentially fatal, individuals who are exposed to anthrax spores will not necessarily develop the disease. Inhalation anthrax can be treated with antibacterial drugs, but medical treatment does not necessarily ensure recovery. Anthrax is not contagious. Anthrax is a potential terrorist weapon because, if refined and introduced into letters and packages, anthrax spores can be released into the air as letters are processed or opened. The use of the mail as a vehicle for transmitting anthrax threatens the nation’s mail stream and places the American public and federal employees at risk. This is what occurred in 2001, when letters containing anthrax contaminated at least 23 Postal Service facilities and killed five of 22 individuals diagnosed with anthrax, including two Postal Service employees. Anthrax spores can be killed, however, through a process known as irradiation, which renders anthrax in the mail harmless for humans. 
Detecting anthrax involves many types of activities, including developing a sampling strategy for deciding how many samples to collect, where to collect them, and what collection methods to use; collecting samples using, for example, dry or premoistened swabs; transporting samples to laboratories for extraction and analysis; extracting the sample material using specific procedures and fluids (such as sterile saline or water); and analyzing the samples using a variety of methods. To provide a coordinated clinical diagnostic testing approach for detecting anthrax and other bioterrorism threats, CDC, the Association of Public Health Laboratories, the FBI, and others collaboratively developed the Laboratory Response Network (LRN) in 1999. LRN laboratories (1) perform standard testing methods specified by CDC to either rule out or confirm the presence of anthrax and (2) provide public health organizations and others with rapid test results for use in making public health decisions. Generating a final test result involves both a presumptive and a confirmatory test. Presumptive test results can be obtained within 2 hours and are considered “actionable” from a public health perspective. According to CDC, antibiotic medical treatment is recommended as soon as possible after the LRN has obtained a presumptive positive test result. Confirmatory tests take longer—generally 24 to 48 hours. The National Response Plan (NRP), which was developed by the federal government under the leadership of DHS, provides one part of the coordinated framework for how the United States will prepare for, respond to, and recover from domestic incidents. The Secretary of Defense, as well as the heads of 31 other federal departments and agencies, signed the Letter of Agreement contained in the NRP, indicating their agreement to abide by the NRP’s incident management protocols. 
The December 2004 plan includes a Biological Incident Annex, which specifies actions that agencies should take when they become aware of a possible threat involving a biological agent. The annex also identifies the roles and responsibilities of various agencies that would respond to such an event. For example, as specified in the annex, HHS is the primary federal agency for coordinating a public health response involving an actual or potential biological terrorism attack. Table 1 identifies selected agency actions specified in the NRP’s Biological Incident Annex. The other part of the federal framework is the National Incident Management System (NIMS), which was released in March 2004. NIMS is intended to provide a consistent and coordinated nationwide approach for federal, state, and local governments to work effectively and efficiently together to prepare for, respond to, and recover from domestic incidents, including those involving biological agents, regardless of their cause, size, or complexity. NIMS applies to all levels of government, and for the federal government, including DOD, it is prescriptive. A key component of NIMS is the incident command system, which is designed to integrate the communications, personnel, and procedures of different agencies and levels of government within a common organizational structure during an emergency. Another key component of NIMS is the establishment of a joint information center—with representatives from all affected parties and jurisdictions—to provide a unified communication message to the public during emergencies. GSA and DOD have requirements for agencies to follow in protecting employees in mail facilities and ensuring effective mail operations. 
For example, GSA’s federal mail management regulation requires (1) every federal agency and agency location with one or more full-time personnel processing mail to have a written mail security plan that includes, among other things, procedures for safe and secure mail room operations, plans for security training for mail employees, and plans for annual reviews of the agency’s mail security plan and facility-level mail security plans; and (2) large agencies, such as DOD, that spend over $1 million annually on postage to annually verify that facility-level mail security plans have been reviewed and report to GSA that all such plans have been reviewed by a competent authority within the past year. GSA also issues guidance and recommendations for effectively managing mail programs, including recommendations on the content of mail security plans. For example, GSA recommends that agencies develop a communication plan for responding to threats that includes names and phone numbers to call during emergencies; establish and maintain partnerships with personnel who respond to emergencies (first responders); and create a program for training employees on how to respond to biological threats, including refresher training on a regular basis. DOD’s mail manual, effective December 2001, implements DOD’s mail-related requirements. DOD requires its components to comply with GSA’s federal mail management regulation, including the requirement that each mail center develop a written mail security plan and have it reviewed annually by a competent authority. Beyond mail-related requirements, GSA also requires the highest-ranking federal official of the largest agency in GSA-controlled (leased) office space to develop an occupant emergency plan. GSA guidance related to this requirement recommends that the occupant emergency plan describe, among other matters, critical information about the office space and actions to be taken during emergencies. 
The Comptroller General’s Standards for Internal Control in the Federal Government provides the overall framework for agency management to establish and maintain effective internal control. Establishing effective internal controls is a major part of managing an organization. Such controls include the plans, methods, and procedures to be used to meet an organization’s mission, goals, and objectives by, among other things, monitoring performance, training employees, and ensuring that federal requirements, such as GSA and DOD mail security requirements, are followed. The Pentagon receives its mail from the Postal Service as well as from commercial courier services. The Postal Service irradiates almost all first-class mail delivered to the Pentagon and other federal agencies in the Washington, D.C., area, from its facilities on V Street, N.E. in Washington, D.C. (the V Street Operation). In March 2005, Pentagon mail was delivered from the V Street Operation to a mail-screening facility located within the Pentagon remote delivery facility—a 250,000-square-foot shipping and receiving facility adjoining the Pentagon. Technicians dressed in protective gear then screened the mail over a custom-designed table equipped with four filters intended to capture any particles that might fall from the mail. The table used a negative airflow system that was intended to keep microscopic particles from dispersing back into the mail-screening facility. At the time of the March 2005 incident at the Pentagon, employees of Vistronix Incorporated (Vistronix)—the Pentagon’s mail-screening contractor—collected and sent daily samples from each of the four filters to Commonwealth Biotechnologies Incorporated (CBI)—a private laboratory in Richmond, Virginia. Vistronix subcontracted the daily testing of the Pentagon’s mail to CBI. The opened mail was then shrink-wrapped and quarantined in a secure room until CBI notified Vistronix of negative test results by either fax or e-mail. 
Upon receipt of negative test results, a Vistronix employee released the mail from quarantine. Once the mail was released from quarantine, mail employees placed it into mailboxes at the Defense Post Office, where it awaited pickup by Pentagon employees. The TRICARE Management Activity (TMA) mail room at the Skyline Complex received and processed mail differently than the Pentagon did. It received a small amount of its mail from the Pentagon, but most of its mail came from a Postal Service facility in Merrifield, Virginia, according to a TMA mail room official. The TMA mail room had a biosafety cabinet, an X-ray machine, and two full-time employees. The biosafety cabinet had a negative airflow system with filters for capturing and holding any particles that fell from envelopes or packages being opened. While the cabinet was used for mail screening, it was not capable of detecting anthrax. The two incidents involving the suspicion of anthrax occurred over several days, but the most significant actions occurred on the same day—Monday, March 14, 2005. The Pentagon incident occurred first and was the result of positive test results for anthrax in the mail. The Skyline Complex incident occurred later that day when an alarm sounded on the biosafety cabinet that employees took as a sign that contaminated mail had been passed from the Pentagon to the Skyline Complex. Combined, the incidents set in motion a large-scale response that also affected Postal Service employees and operations. The response ended a few days later, when further testing confirmed that anthrax was not present at either DOD facility or in the mail. Figure 1 shows a chronology of the key actions and organizations involved in the two incidents. The discussion that follows explains each incident in turn. Events leading up to the Pentagon incident began on Thursday afternoon, March 10, 2005. 
After screening the mail at the mail-screening facility within the Pentagon remote delivery facility, Vistronix employees routinely collected swab samples from the four filters and sent them to CBI for analysis. According to Vistronix’s account of events associated with the incident, at about 4:00 p.m. on Friday, March 11, a representative from CBI informed the Vistronix Director that one of four swab samples collected and tested from Thursday’s mail was positive for anthrax. The Director asked the laboratory to conduct additional testing over the weekend but did not notify Defense Post Office officials of the initial positive test results. On Monday morning, March 14, at about 6:00 a.m., the Vistronix Director informed a member of his staff (the site supervisor) that while additional laboratory results for Thursday’s mail had not yet been received, test results for Wednesday’s mail were negative, and, therefore, Wednesday’s mail was cleared for release. The site supervisor misunderstood the conversation, incorrectly concluding that mail from both days could be released from quarantine, and, consequently, he called his staff to release the mail. At about 6:30 a.m., Thursday’s mail was released, and, shortly thereafter, employees of the Defense Post Office began processing the mail for distribution. According to Vistronix, at about 9:10 a.m., the laboratory notified Vistronix that additional testing of Thursday’s swab sample was also positive. By the time Vistronix notified a Defense Post Office official of the second test result at about 9:25 a.m., an unspecified amount of the mail suspected of containing anthrax had already been picked up and distributed throughout the Pentagon. These developments initiated a wide-ranging response. At about 10:15 a.m., a Defense Post Office official notified the Pentagon Force Protection Agency (PFPA)—the law enforcement agency responsible for protecting people, facilities, and infrastructure on the Pentagon Reservation. 
In the 2 hours that followed, PFPA shut down the Pentagon remote delivery facility, coordinated with mail officials to identify possible recipients of Thursday’s mail, secured the perimeter around the remote delivery facility, and evacuated the majority of the employees from the remote delivery facility to the Pentagon’s former child development center. PFPA continued to lead the response in the hours that followed. The Arlington County Emergency Communications Center sent emergency personnel to the scene after it was notified through official channels at about 10:37 a.m. Emergency personnel typically take charge of incidents when the affected individuals have immediate medical needs. However, when they arrived, they said none of the employees appeared to have symptoms of illness. As a result, PFPA and Arlington County agreed that PFPA would continue to lead the response. According to a DOD timeline of the incident, DOD also attempted to notify the following federal and local offices: 12:10 p.m.: First broadcast message sent to local public safety and emergency management response agencies. 12:15 p.m.: FBI’s Washington Field Office and the Weapons of Mass Destruction Operations Unit at FBI Headquarters. 12:30 p.m.: Office of the Postmaster General—the executive head of the Postal Service. 12:40 p.m.: Department of Homeland Security’s Operations Center. When FBI staff arrived on the scene at about 1:00 p.m., they began to assess the incident’s credibility. According to FBI officials, the totality of the initial evidence suggested a false alarm. First, only one of the four swab samples collected and tested from the filters on Thursday was positive for anthrax. If an actual incident had occurred, FBI officials said, it would have been reasonable to expect that all four samples would have been contaminated because, based on experience gained during the fall of 2001 anthrax attacks, once airborne, anthrax spores disperse over a wide area. 
In addition, tests conducted on Friday’s mail were negative. FBI officials said that if anthrax had contaminated Thursday’s mail, it would likely have contaminated the entire mail-screening facility, leaving residual spores that also would have been detected in the samples taken from Friday’s mail. Although it suspected a false alarm, the FBI declared the Pentagon remote delivery facility a crime scene based on the evolving response of other agencies and the need to further assess the evidence. During the afternoon hours, two DOD Health Affairs officials responsible for responding to medical issues on the Pentagon Reservation—the Commander of the DiLorenzo TRICARE Health Clinic and DOD’s Assistant Secretary for Health Affairs—began providing medical treatment to (1) employees working at the remote delivery facility where the mail-screening facility was located, (2) Pentagon mail recipients, and (3) the mail-screening technicians. DOD health officials estimate that, in total, they dispensed an initial 3-day course of antibiotics to about 889 potentially affected employees. According to the officials involved, their decision to immediately dispense antibiotics as a precautionary measure was based on the laboratory’s positive test results and their experiences gained in the fall of 2001. DOD’s Assistant Secretary for Health Affairs told us that at about 1:00 p.m., he conferred with the CDC Director about DOD’s medical decision, and that she agreed with the decision. According to the CDC Director, the call was made to inform her about the decision that DOD had already reached. The Director of CDC said that even if the purpose of the call had been to seek her advice on medical treatment options, she could not have offered a medical opinion because of insufficient information, especially with respect to the reliability of the laboratory’s test results. She stressed the need for clear, accurate, and understandable information for making decisions about medical treatment. 
Such information, she said, is typically developed collaboratively with all appropriate parties involved. After the conversation, she said she contacted the CDC operations center that handles such incidents to ensure that appropriate CDC personnel were aware of the incident. While HHS is the primary agency responsible for a public health response, according to an HHS official, the CDC operations center—not DOD—subsequently contacted the HHS operations center. As officials from additional federal agencies became aware of the incident, several interagency conference calls were held. The first of these calls was convened by HHS officials at about 5:00 p.m. Officials from HHS said the purpose of the conference call was to obtain a basic understanding of what had occurred at the Pentagon (and at the Skyline Complex, where the second incident had already begun), so that decisions could be made on how to respond appropriately. According to HHS and DHS officials, decision makers needed answers to such questions as what analysis had been done, what procedures had been used by the contract laboratory, and how the Pentagon samples had been collected. Obtaining such information was critical to determining whether people had been exposed to anthrax, whether the two incidents were linked, and what the appropriate response should be. However, according to DHS and HHS officials, DOD could not adequately answer these and other questions. On Monday afternoon, DOD took the samples from CBI for analysis at Fort Detrick, located in Frederick, Maryland—the site of two key federal laboratories. The samples arrived at about 5:30 p.m. Over the next few days, the laboratories at Fort Detrick conducted numerous tests of the Pentagon’s samples as well as environmental samples taken from the Pentagon. Late Wednesday evening, results of additional testing indicated that anthrax was not present in samples collected from the Pentagon’s mail-screening facility. 
Agency officials involved in the response believe that the initial positive test result could have been caused by cross contamination at CBI. The Pentagon remote delivery facility reopened on Friday, March 18. The incident at the Skyline Complex began several hours after the Pentagon incident began. At about 10:00 a.m., a TMA employee picked up mail from the Pentagon and, by 11:30 a.m., had distributed some of the mail within the Skyline Complex—a large office complex of privately owned buildings in Fairfax County, Virginia. According to officials at the Skyline Complex, an employee received an urgent telephone call around noon indicating an unspecified problem with the Pentagon’s mail and directing that any mail from the Pentagon be retrieved. The caller did not provide any further explanation, according to the officials. TMA mail room employees retrieved the mail they had already delivered, emptied mailboxes, and placed some of the mail in trash bags. About 1:00 p.m., a TMA mail room employee was screening other mail from the Pentagon using the biosafety cabinet when the cabinet’s alarm sounded. According to mail room employees, they made several unsuccessful attempts to telephone the manufacturer and the maintenance contractors for help. In addition, DOD’s manager of the complex told us that she called PFPA for guidance on how the cabinet operated, but the PFPA official was not aware of the type of equipment in use at the complex, and consequently, he was not able to tell her what to do. Finally, at 2:09 p.m., a Skyline employee called the Fairfax County 911 emergency line. Fairfax County emergency responders (fire, police, public health, and hazardous material units) arrived on the scene shortly thereafter. 
They led the response to the incident over the next few hours and took several actions, including closing the Skyline Complex and securing its exits, shutting off its elevators and air-handling systems, developing and providing health information to occupants, collecting contact information from the occupants, decontaminating some employees who were sheltering in place, and obtaining and testing environmental samples from the complex and attempting to remove filters from the biosafety cabinet in order to perform additional tests. According to Fairfax County responders, they attempted to hold all occupants within the Skyline Complex because they anticipated receiving results of environmental testing Monday afternoon. They explained that having the complex occupants together would help them provide information to the occupants and coordinate any further responses that might be necessitated by the results of the environmental testing. Test results were delayed, however, and the majority of the Skyline Complex employees began to be released. Just prior to this, at about 7:30 p.m., Fairfax County responders began decontaminating 45 of the complex’s employees who were believed to be at high risk for exposure to anthrax. The initial environmental test results—available on Tuesday—were inconclusive and, as a result, Fairfax County and FBI responders collected additional environmental samples for analysis at Fort Detrick. On Tuesday afternoon, DOD dispensed antibiotics to the 45 high-risk employees. This incident began to de-escalate on Tuesday evening as officials learned that the alarm that sounded on the biosafety cabinet used for mail screening indicated only an airflow obstruction, not the presence of anthrax. By Wednesday evening, laboratory results from environmental samples indicated that anthrax was not present at TMA’s mail room in the Skyline Complex. The majority of the Skyline Complex reopened on Thursday, while TMA’s mail room reopened on Friday morning, March 18. 
A DOD official called the Postmaster General to inform him of the Pentagon incident at about 12:30 p.m. on Monday, March 14, 2005, but neither the Postmaster General nor any other Postal Service executive was available to receive the call. The DOD official left a voice-mail message, but according to the Postal Service’s Senior Vice President for Government Relations, the message did not convey any urgency about the potential for anthrax in the mail. Furthermore, by the time Postal Service officials listened to the message, they had already heard about the incident through the local media. At about 5:00 p.m., when Postal Service officials learned during the first interagency conference call that DOD had provided antibiotics to Pentagon employees, they acted quickly to protect their employees who, days earlier, might have processed the mail. Thus, by Monday evening, the Postal Service had suspended operations at its V Street Operation and had immediately begun dispensing antibiotics to its employees. In total, over 160 Postal Service employees were treated for their possible exposure to anthrax. On Tuesday, March 15, the CDC’s National Institute for Occupational Safety and Health provided technical assistance to the Postal Service in designing an environmental testing strategy for the V Street Operation. By Wednesday morning, March 16, results from environmental testing of the V Street Operation were negative for anthrax. The Postal Service reopened the V Street Operation in the afternoon. DOD encountered numerous problems during the two March 2005 incidents. At the Pentagon, these problems primarily involved not following required mail-screening contract provisions and procedures. The failure to follow these requirements resulted in, among other things, the premature release of the potentially contaminated mail that caused the incident at the Pentagon. 
In addition, the Pentagon’s contract for mail screening lacked a clear provision specifying required testing methods, which resulted in the use of a laboratory whose testing methods were unknown and whose results were not actionable—this, in turn, exacerbated the incident at the Pentagon. At the Skyline Complex mail facility, problems were even more basic, in that required procedures and plans for responding to biohazards and other emergencies were inadequate or absent altogether. Further, at the Pentagon, the federal framework developed to, among other things, help ensure more effective decision making through the coordinated response of all affected parties and decision makers was not fully followed. If the framework had been fully followed, decisions regarding medical treatment of DOD and Postal Service employees might have been improved. Vistronix did not follow contract provisions and mail inspection procedures related to the detection of and response to potential biohazard emergencies involving the Pentagon’s mail. The contractor developed procedures for implementing the contract’s mail-screening requirements, which described the process by which mail entering the Pentagon would be inspected, tested, quarantined, and released. DOD approved the procedures, but the contractor failed to follow two key requirements. Mail-screening contractor did not provide timely notification of potential contamination. Both the contract and the approved mail inspection procedures provided specific notification requirements for informing DOD of potential biohazardous situations involving the Pentagon’s mail. The contract required Vistronix to notify PFPA “immediately” if there were any evidence of risk or possible contamination of the mail. Similarly, the mail inspection procedures required PFPA to be contacted (1) within 1 minute of an actual or potential event involving contamination and (2) when a positive test result occurred “at any point” in the testing process. 
On Friday afternoon, March 11, the laboratory informed the Vistronix Director that a sample from Thursday’s mail had tested positive for anthrax. Instead of immediately notifying PFPA as required, however, the Director asked the laboratory to conduct additional tests over the weekend. The contractor did not inform DOD of the suspected mail contamination until after it received the second positive test result on Monday, March 14—about 2½ days after the notification should have occurred. According to the Vistronix Director, he believed the procedures required them to notify DOD only after a second positive test result. The contractor’s untimely notification created a sense of urgency within DOD to quickly provide antibiotics to its employees—before consulting, as specified in the NRP, with other agencies about the proper medical response. Mail-screening contractor did not quarantine mail until it received negative test results from the laboratory. The contract required Vistronix to quarantine the mail until receipt of negative test results. Similarly, the mail inspection procedures required Vistronix to hold (i.e., “not release for delivery”) the Pentagon’s mail until the laboratory had reported negative test results to Vistronix. The procedures also noted that a positive result “at any point” necessitates sequestering all potentially contaminated mail. Vistronix failed to follow these requirements. Specifically, while the Vistronix Director was aware of an initial positive test result on Friday, he did not ensure that the mail remained quarantined until receipt of negative test results from the laboratory. Instead, miscommunication among Vistronix staff led to the mail’s release several hours before the laboratory informed Vistronix that its weekend test results were also positive for anthrax. The premature release of the potentially contaminated mail resulted in a broad response at the Pentagon, the Skyline Complex, and the Postal Service’s V Street Operation. 
The testing provision in the mail-screening contract required Vistronix to test samples from the mail-screening equipment in accordance with unspecified “CDC guidelines.” However, Defense Post Office officials—including the contracting officer’s representative who had responsibility for overseeing the contract—told us that they did not identify the specific guidelines to be used and were unaware that the CDC publishes both general testing guidelines, which are available in the public domain, and guidance and protocols for anthrax testing by the LRN, which are available only to LRN laboratories. The officials explained that even if they had known which guidelines DOD expected to be followed, they did not have the technical expertise to determine whether the contract’s testing provision was being followed. Defense Post Office officials further explained that the contract was awarded quickly in 2001 after the nationwide anthrax attacks. Their office was tasked with overseeing the contract, they said, because at that time the office was the “executive agent for mail in the Pentagon”—not because it had any expertise or training on these matters. According to Defense Post Office officials, the lack of technical expertise regarding anthrax at that time contributed to the lack of clarity in the contract’s testing provision. Their lack of expertise also caused them to conclude that CBI met all CDC and federal guidelines, in part, because Vistronix had informed DOD that CBI was a certified CDC laboratory that adhered to CDC guidelines. An independent review of CBI, the subcontract laboratory, sponsored by DOD and conducted in April 2005, found that CBI analyzed the Pentagon’s samples using testing methods that differed from CDC’s guidance and protocols. The review also found that Vistronix’s contract with CBI did not require the laboratory to verify its testing methods. By March 2005, DOD and Vistronix had had 3½ years to specify their testing requirements for the contract. 
An unclear contracting provision, combined with the lack of oversight by both DOD and Vistronix, resulted in the use of a laboratory whose testing methods were unknown and whose results were not actionable. The effect of these events was evident when DOD officials could not adequately explain to other agency officials what (1) tests CBI had conducted, (2) methods CBI had used, and (3) the results meant. DOD’s inability to provide adequate answers to these and other crucial questions exacerbated the incident at the Pentagon and slowed the response since officials from other agencies were skeptical of the laboratory’s results. At the Skyline Complex, basic procedures for responding to a biohazardous incident at the TMA mail facility were inadequate or absent. The following three key elements were either inadequate or absent. First, TMA did not ensure that mail room procedures addressed what to do, or whom to notify, when the equipment alarm sounded or that employees were properly trained on the equipment. TMA is responsible for ensuring that adequate procedures are in place and effective training occurs, so that employees can perform their duties competently. Although some procedures were in place at the Skyline Complex, they did not address the capabilities of the biosafety cabinet or what to do if the alarm on the equipment sounded. At the time of the incident, the mail room’s procedures provided, among other things, (1) basic instructions for using the biosafety cabinet, including how to turn the machine on and off and how to open the mail, and (2) information about whom to notify when a suspicious package was discovered. The procedures did not address what the biosafety cabinet did, how it worked, or how to respond to its built-in alarm. The TMA mail manager noted that training on the biosafety cabinet had occurred when the machine was purchased in 2001, but no subsequent training had been conducted. 
Since then, he said, staff turnover and the absence of additional training had led to a lack of understanding about the equipment’s capabilities. In addition, while the procedures specified whom to call if suspicious mail was discovered, they did not address whom to contact when the equipment’s alarm sounded. If the procedures had been adequate and periodic training had occurred, employees would likely have known that, although the equipment had a negative airflow system with filters for capturing and holding any particles that fell from envelopes or packages being opened within it, the equipment did not detect biohazards and its alarm sounded only to indicate an airflow obstruction. Instead, in conjunction with the phone call indicating an unspecified problem with the Pentagon’s mail, mail room employees assumed the alarm was signaling the presence of biohazards in the mail. Because TMA employees lacked adequate information and training on the equipment, they unnecessarily contacted first responders. Second, neither TMA nor DOD ensured that the required mail security plan was in place. Both TMA and DOD have responsibilities for ensuring that an adequate mail security plan exists for the mail room in the Skyline Complex. GSA’s federal mail management regulation and DOD’s mail manual both require mail security plans for agency mail rooms. According to GSA’s regulation, security plans must include (1) procedures for safe and secure mail room operations, (2) plans for training mail room personnel, and (3) plans for annually reviewing agency and facility-level mail security plans. In addition, DOD’s mail manual requires DOD’s mail room officials to ensure that their mail security plans are coordinated with local security officials. TMA did not develop the required security plan.
If TMA had developed a plan and coordinated it with local officials, Fairfax County emergency personnel—the local first responders—might have learned about the biosafety cabinet’s limitations, including the meaning of the equipment’s audible alarm. Furthermore, DOD did not ensure that TMA had developed a plan, or attempt to review it for adequacy, as required. GSA’s federal mail management regulation requires that facility-level mail security plans be reviewed annually. Moreover, as specified in the regulation, DOD must annually report to GSA that its mail security plans have been reviewed by a competent authority within the past year. GSA officials noted that DOD’s Official Mail Manager submits a certification form to GSA annually; however, the form does not indicate (1) that DOD’s plans exist or (2) that the plans have been reviewed by a competent authority in the past year. Instead, the form submitted to GSA simply certifies that DOD has the requisite requirements in place. According to DOD’s Official Mail Manager, he cannot certify that all DOD mail rooms have mail security plans or that the plans have been reviewed by a competent authority because DOD does not have a process in place to ensure that the required reviews take place. He further explained that he lacks the time and resources to review the plans. If TMA and DOD had followed the applicable requirements, the problem that occurred at the Skyline Complex might have been avoided. Third, the Defense Information Systems Agency had not developed an occupant emergency plan. GSA requires agencies in GSA-controlled buildings to have an occupant emergency plan for protecting life and property during an emergency. Critical elements of the plan include (1) evacuation and sheltering-in-place information; (2) contact information and emergency phone numbers; and (3) specific information about the building’s construction, including its floor plans.
The highest-ranking official of the largest agency in each GSA-controlled building is responsible for developing and maintaining the occupant emergency plan. In March 2005, the Defense Information Systems Agency (Defense Agency) was the largest agency in the Skyline Complex. According to officials from the Defense Agency, they were aware of the agency’s responsibility for developing the occupant emergency plan as early as June 2002. Defense Agency officials had drafted a plan by the time of the incident but had neither distributed it to other federal occupants of the complex nor coordinated it with first responders. Moreover, employees had not been trained on the plan, and the affected federal agencies had not agreed to or signed it. Officials of the Defense Agency commented that developing an occupant emergency plan takes a great deal of coordination among participating agencies, which prolongs the plan’s completion. The lack of a required occupant emergency plan contributed to the difficulties that employees and first responders experienced during the incident. For example, first responders had difficulty getting critical information to employees because contact information was not readily available for federal employees in the complex. In addition, because information about the complex was not readily available, some employees were able to exit the complex while Fairfax County police, who were unaware of all the existing exits, attempted to secure it. DOD did not fully follow the federal framework for coordinating a response to the potential anthrax incident at the Pentagon; instead, it chose to make decisions on its own. The federal framework is set forth in the NRP and NIMS, which specify a structured and coordinated approach for involving federal, state, and local agencies in decision making.
The unifying element of this framework is the ability to harness the resources of various agencies whose expertise and knowledge help ensure informed decisions about how to proceed in any given situation. While DOD initially followed NIMS when it established its incident command at the Pentagon, as the incident evolved, key aspects of the federal framework were not followed, as the following three examples show. First, DOD did not fully follow the NRP’s notification structure. The NRP’s Biological Incident Annex requires every federal agency to first notify the FBI if it becomes aware of an overt threat involving biological agents. DOD officials did notify the FBI, but not until almost 3 hours after they first became aware of the Pentagon’s positive test results. Earlier notification would likely have helped with the evaluation of the test results and allowed federal agencies to collectively coordinate a proper course of action, particularly because, as discussed earlier, FBI officials began questioning the incident’s credibility after arriving on scene. The Biological Incident Annex also designates HHS as the federal agency responsible for coordinating a public health response involving bioterrorism threats. DOD officials never notified HHS but, instead, called the Director of CDC to disclose their intention to administer antibiotics to DOD employees. The Director of CDC, not DOD, alerted the CDC operations center, which, in turn, notified HHS’s operations center at about 4:00 p.m. on Monday. As specified in the Biological Incident Annex, once HHS officials were notified of a credible threat, they convened an interagency conference call approximately 1 hour later to coordinate a possible medical emergency response. However, by then, DOD had already begun to administer antibiotics to its employees.
As a result, any advice or guidance on (1) medical treatment options or (2) the validity of the laboratory’s test results that other agency officials might have offered was essentially moot. Second, DOD failed to follow NIMS protocols regarding joint decision making. Under NIMS, the incident commander is responsible for the entire response to an incident. To assist with various aspects of a multijurisdictional response, the incident commander is expected to assemble federal, state, and local agencies to serve in a unified command. The unified command includes representatives from all agencies and organizations that have responsibility for, or can provide support to, an incident. Collectively, the unified command is expected to consider and help make decisions on all objectives and strategies related to an incident. At the Pentagon in March 2005, PFPA included federal and local agencies in the response; however, the response structure never matured into a unified command, particularly because some decisions—especially those related to medical treatment—were made outside the command structure. DOD essentially had two separate incident responses: PFPA acted as the incident commander for the evacuation and containment of Pentagon employees, while DOD’s Health Affairs made unilateral decisions regarding the employees’ medical treatment. According to local public health officials, DOD did not consult them on the proper course of action regarding whether, or how, to intervene medically. Had information and decisions flowed through a unified command structure, local public health officials could have raised the concerns they had about providing antibiotics without a confirmed LRN test result.
Additionally, if medical treatment decisions had been made collaboratively, DOD and local public health officials could have (1) agreed on a strategy for treating potentially affected individuals, including access to additional medication and follow-up treatment, and (2) discussed the potential ramifications of initially providing ciprofloxacin to DOD employees. According to local public health officials, DOD’s initial provision of ciprofloxacin to DOD employees set a precedent that essentially eliminated other antibiotic treatment options, given the health officials’ desire to ensure that potentially affected individuals would be treated consistently. Had medical decisions been made within the context of a unified command, a different decision might have been reached, and hundreds of DOD employees—with no, or limited, exposure to potential contamination—might not have received unnecessary medication. Third, DOD did not coordinate the initial public response to the incidents. An important outcome envisioned in the federal framework is effective management of information available to the public. The NIMS structure calls for a joint information center to provide a location where organizations participating in the management of the incident can work together to ensure that timely, accurate, easy-to-understand, and consistent information is disseminated to the public. The joint information center is supposed to have representatives from each organization involved in the management of an incident. DOD did not establish a joint information center at the start of the incidents, and it did not have clear written procedures for doing so. As a result, the public received unclear and inconsistent messages about, among other matters, the source of the anthrax. For example, media accounts reported that mail sent through the Postal Service caused the incidents when, in fact, the source of the possible contamination was unknown.
According to the Postal Service, this resulted in unnecessary anxiety among Postal Service workers, their families, and recipients of Postal Service mail. According to DOD health officials responsible for making medical decisions at the Pentagon, they based their medical treatment decision on the experiences they gained from the fall 2001 anthrax incidents. The officials explained that they were very sensitive to what they perceived to be untimely medical decisions reached in the fall of 2001. Consequently, they said they decided to err on the side of caution and quickly distribute antibiotics to employees at the Pentagon and Skyline Complex. Additionally, since the incident occurred on the Pentagon Reservation, DOD officials did not believe that the NRP applied because, in their view, they had the medical authority, expertise, and resources to handle the incident internally. However, other federal officials—including those in DHS and HHS—told us that the NRP was applicable and that DOD should have followed the framework. In addition, CDC guidance emphasizes the need to make risk-based decisions, including those involving dispensing of antibiotics during suspected anthrax incidents. According to the CDC, a risk-based, participatory approach is necessary, in part to limit the number of people who may receive antibiotics before confirmation by the LRN. Since the mail had been quarantined over the weekend, the Pentagon employees most at risk would have been the technicians who had screened the mail the previous week. These persons received antibiotics, but so did hundreds of others who, in our view, would not likely have been exposed until Monday morning, when the Pentagon’s mail was released from quarantine. DOD health officials’ concern about protecting DOD employees from the risk of exposure is clearly understandable. However, DOD’s actions were not consistent with the NRP. 
Once HHS was contacted by CDC, it began using the notification and response protocols specified in the NRP. In particular, HHS convened the first interagency conference call, in which federal participants were able to discuss the laboratory’s test results and raise concerns about their quality. Additionally, CDC was able to address the Postal Service’s concerns about the possible health effects on its employees who may have processed contaminated mail to the Pentagon the previous week. CDC recommended antibiotics for employees of the V Street Operation because of (1) the confluence of the two incidents, which, at the time, were viewed as involving the presence of anthrax; (2) the fact that DOD had already started its employees on antibiotics; and (3) the possibility that the employees had been exposed to anthrax several days earlier because they processed mail to the Pentagon. DOD took numerous actions that address problems related to the Pentagon and Skyline Complex incidents. At the Pentagon, some actions to improve DOD’s mail processing and incident response, such as modernizing the mail-screening facility and changing the laboratory used to test daily samples, were already under way. Other actions, including selecting a new mail-screening contractor and improving procedures for releasing quarantined mail, were a direct response to what occurred. At the Skyline Complex, DOD’s actions included prohibiting the use of equipment for screening mail unless the equipment is operated within the context of a comprehensive mail-screening program. DOD also commissioned the RAND Corporation to conduct an independent review of its response to the incidents. The resulting report, issued in January 2006, contains numerous recommendations on which, according to DOD, it has taken action. Some of the actions DOD took at the Pentagon were under way before the March 2005 incident.
Although the actions were not carried out until later, they reflected decisions that had previously been set in motion to improve mail screening and responses to biological incidents. These actions included the following: DOD transferred oversight of the mail-screening function to PFPA. PFPA assumed oversight of mail screening from the Department of the Army in August 2005 because, according to DOD officials, PFPA’s strategic mission of providing security and law enforcement at the Pentagon is better aligned with the mail-screening function. According to a PFPA official, planning for the transfer of mail-screening oversight began around January 2005. A gradual transition had been planned, he said, but the Pentagon incident significantly accelerated the transfer of mail-screening oversight responsibilities. DOD modernized the mail-screening facility, refurbished the mail quarantine room, and installed new mail-screening equipment. According to a DOD official, initial planning for these improvements also began around January 2005. PFPA officials stated that the new mail-screening facility and the refurbished quarantine room have improved capabilities that are designed to protect employees and prevent the spread of anthrax. Finally, a DOD official said that the decision to replace the previous mail-screening table with new equipment was based on a 2003 National Academy of Sciences report, which, among other things, raised questions about the table’s ability to detect anthrax in small amounts. PFPA is awaiting the results of a study, which it expects to conclude in May 2006, to evaluate the effectiveness of the changes. DOD changed its testing laboratory. Daily testing of samples from the Pentagon’s mail-screening equipment is now performed by a non-LRN chemical-biological laboratory located on the premises, instead of by a contract laboratory.
The laboratory is part of PFPA and, according to a PFPA official, was established in January 2005 to help protect the Pentagon from biological threats. The official stated that the original plan was to transfer testing from CBI to the Pentagon’s chemical-biological laboratory in October 2005, after the Vistronix contract expired. However, the transfer was accelerated, occurring instead in March 2005, a few days after the incident at the Pentagon. DOD entered into a memorandum of understanding (MOU) on biological monitoring with other federal agencies. In April 2005, DOD signed an MOU for Coordinated Monitoring of Biological Threat Agents, which was developed prior to the Pentagon incident. DHS, HHS, the Department of Justice (which includes the FBI), and the Postal Service are also parties to the MOU. DHS’s Science and Technology Directorate is responsible for coordinating the implementation of the MOU. The following provisions in the MOU help address the notification, laboratory testing, and medical response problems that arose at the Pentagon: The MOU establishes prompt notification requirements. Specifically, the MOU requires participants to notify the FBI, HHS, and DHS within 1 to 2 hours of positive test results that indicate, with a high degree of confidence, the presence of anthrax or other biological agents. However, according to a DHS Science and Technology Directorate official, such test results only trigger notification and, until confirmed by the LRN, are not considered actionable by HHS, DHS, and others. The MOU requires participating agencies to develop and employ mutually accepted and validated testing methods to confirm biological threats. According to a Science and Technology official, test results produced from these methods will be considered actionable for public health and other response measures, including the administration of medical treatment. He stated, however, that this MOU provision will take time to implement. 
According to the official, an independent organization is currently performing the extensive testing and analysis needed to evaluate and establish equivalency between the wide array of testing methods employed across agencies. DOD officials stated that the Pentagon’s chemical-biological laboratory— which is not part of the LRN—plans to adopt the testing methods that emerge from the MOU. As a result, if the MOU’s equivalency testing provision is fully implemented, they said, confirmatory positive results from the Pentagon laboratory will be considered equivalent to LRN results and deemed actionable by DHS, HHS, and others for decisions related to the administration of medical treatment. In addition to carrying out actions already in process, DOD also initiated numerous actions in direct response to the problems that occurred at the Pentagon. Several of these actions address the mail-screening contractor’s failure to follow established requirements. Other actions were carried out in response to the RAND review and are intended to better align DOD’s procedures with those in the federal framework for coordinating responses to potential biological threats. The actions are as follows: DOD changed mail-screening contractors, strengthened the new contract, and drafted improved procedures. PFPA selected a new contractor for screening mail at the Pentagon in September 2005. PFPA also developed new contract provisions and drafted new mail inspection procedures to address the previous contractor’s failure to follow established contractual and procedural requirements. Table 2 highlights key changes in the Pentagon’s mail-screening contract provisions and draft procedures. PFPA strengthened the mail-screening contract by requiring the contractor to, among other things, periodically train employees on emergency response procedures and develop an effective quality control program to ensure adherence to contract provisions. 
In addition, PFPA’s contracting officer representative is required to evaluate the contractor’s performance to ensure that it meets contract requirements. PFPA has also drafted new mail-screening procedures to help ensure the contractor performs in accordance with requirements. The draft procedures require PFPA to, among other things, perform unannounced inspections to ensure that the contractor is properly executing required procedures. As of April 30, 2006, it was unclear when the draft procedures would be finalized; however, according to a PFPA official, the new monitoring measures are already being performed. Effective monitoring of contractor activities and performance is key to maintaining effective agency internal controls. DOD strengthened controls over the release of quarantined mail. The Pentagon’s draft mail inspection procedures require verification of negative test results by representatives from three separate organizations—PFPA, the Defense Post Office, and the contractor—before the mail is released. Table 3 identifies the key steps for releasing quarantined mail, as specified in the draft procedures. Although the mail inspection procedures are still in draft form, these steps are currently being used for releasing the Pentagon’s quarantined mail. The segregation of key duties and responsibilities at this critical juncture in the mail release process reduces the risk of error and, as such, is designed to strengthen the internal controls that were lacking in March 2005. During the incident, inadequate internal controls allowed a single point of failure—in this case, a misunderstanding between two contract employees—to result in the premature release and distribution of quarantined mail that may have been contaminated. This triggered a broad response at the Pentagon and elsewhere. The implementation of rigorous internal controls for releasing the Pentagon’s mail appears likely to prevent similar incidents in the future. 
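The three-party verification requirement just described is, in essence, a segregation-of-duties control: no single organization can trigger the release of quarantined mail on its own. As a rough illustration only—the organization names come from the report, but the function and data structure are hypothetical assumptions, not DOD's actual system—the control could be sketched as:

```python
# Hypothetical sketch of the three-party release control in the draft
# procedures: quarantined mail may be released only after representatives of
# PFPA, the Defense Post Office, and the contractor each verify negative test
# results. A single party's sign-off is deliberately insufficient.

REQUIRED_VERIFIERS = {"PFPA", "Defense Post Office", "Contractor"}

def may_release_mail(verifications):
    """verifications: dict mapping organization name -> bool
    (True means that organization verified a negative test result)."""
    verified = {org for org, ok in verifications.items() if ok}
    # Release requires an affirmative verification from every required party.
    return REQUIRED_VERIFIERS.issubset(verified)

# A single point of failure (one party acting alone) no longer releases mail:
print(may_release_mail({"Contractor": True}))                      # False
print(may_release_mail({"PFPA": True,
                        "Defense Post Office": True,
                        "Contractor": True}))                      # True
```

The design point the draft procedures reflect is that the March 2005 failure—a misunderstanding between two contract employees—could bypass the old process precisely because one party's action sufficed; requiring independent sign-offs removes that single point of failure.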
DOD commissioned the RAND Corporation to conduct an independent review examining its response to the March 2005 incidents. The review primarily focused on evaluating DOD’s policies and procedures for responding to such incidents and making recommendations for improvement. In November 2005, DOD formed a working group to review and implement recommendations from a draft of the report. The final report was issued in January 2006. DOD drafted new notification procedures for positive test results at the Pentagon. To help address the notification problems that arose during the Pentagon incident, DOD drafted new procedures for notifying appropriate parties of positive test results from the Pentagon’s on-site chemical-biological laboratory. These procedures help implement a recommendation in the RAND report that calls for ensuring timely notification of designated agencies in accordance with the NRP and NIMS. The recommendation was based on findings similar to ours. DOD officials stated that the new procedures, while still in draft, are currently being used to respond to potential incidents involving biological contamination at the Pentagon. Figure 2 illustrates DOD’s draft notification procedures for positive test results from the Pentagon’s on-site chemical-biological laboratory. The procedures require Pentagon laboratory officials to immediately notify PFPA of positive test results. Thereafter, PFPA and DOD’s Assistant Secretary of Homeland Defense are responsible for making the required notifications to internal and external parties. According to a DOD official, these notifications should occur immediately in order to meet the 1- to 2-hour time frame specified in the MOU. As prescribed in the NRP, once notified of positive test results, (1) the FBI is responsible for coordinating appropriate confirmatory testing by the LRN and (2) DHS’s operations center is responsible for notifying affected local jurisdictions.
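The MOU's notification requirement amounts to a simple timing check: each designated party must be notified within the 1- to 2-hour window after a high-confidence positive result. The following sketch is purely illustrative—the party names come from the report, but the function, its interface, and the timestamps are assumptions, not DOD's actual procedures or systems:

```python
# Hypothetical helper that flags which MOU parties were notified late (or not
# at all) relative to the outer 2-hour bound of the MOU's notification window.
from datetime import datetime, timedelta

MOU_PARTIES = ("FBI", "HHS", "DHS")
NOTIFICATION_WINDOW = timedelta(hours=2)  # outer bound of the 1- to 2-hour window

def late_notifications(positive_result_time, notification_times):
    """notification_times: dict mapping party -> datetime of notification.
    Returns the MOU parties that were notified late or not at all."""
    late = []
    for party in MOU_PARTIES:
        sent = notification_times.get(party)
        if sent is None or sent - positive_result_time > NOTIFICATION_WINDOW:
            late.append(party)
    return late

# Illustrative timestamps: a notification almost 3 hours after the positive
# result (as happened with the FBI in March 2005) falls outside the window,
# and unnotified parties are flagged as well.
result = datetime(2005, 3, 14, 10, 0)
print(late_notifications(result, {"FBI": datetime(2005, 3, 14, 12, 50)}))
# ['FBI', 'HHS', 'DHS']
```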
DOD’s draft procedures include notification to all agencies specified in the NRP’s Biological Incident Annex, as well as those specified in the MOU. Although not specifically required in either the NRP or MOU, the procedures also include notification to the Postal Service. An official stated that DOD actively worked with DHS, the FBI, and HHS to develop the notification procedures and is continuing to improve them based on agency input, actual events, and the outcome of training exercises. DOD is developing a new policy that defines the roles and responsibilities of senior DOD leadership during incidents at the Pentagon. According to DOD’s Director of Administration and Management, the policy—called an instruction—is being developed and will be based, in part, on NRP’s Biological Incident Annex. He stated that the instruction will detail the health-care responsibilities of DOD leadership involved in making medical treatment decisions and will be consistent with NRP and NIMS protocols. The draft instruction was expected to be tested during a Pentagon training exercise in May 2006 and is to be finalized in the fall of 2006. The development of the instruction directly addresses a recommendation from the RAND review, which arrived at findings similar to ours regarding DOD’s medical decision making. DOD drafted new procedures to help ensure that a joint information center is established. DOD also drafted procedures for ensuring that, consistent with the NIMS framework, a joint information center is established during potential emergency incidents at the Pentagon. During the March 2005 incident, DOD did not establish a joint information center to disseminate timely, accurate, and consistent messages to the public. The RAND report contained a similar finding and recommended remedial actions. 
In response, DOD drafted procedures that require PFPA, Public Affairs, and Washington Headquarters Services to coordinate the establishment and operation of a joint information center to disseminate information to the media during incidents at the Pentagon. According to a Washington Headquarters Services official, the draft procedures will be tested during future training exercises at the Pentagon. DOD also took a number of other actions that address the specific problems we described related to the incident at the Skyline Complex. Many of these problems were also raised in the RAND report. DOD’s actions, several of which also affect other DOD-leased facilities, included the following: DOD developed operating conditions for equipment used to screen mail in the national capital region. In January 2006, DOD’s Director of Administration and Management issued a directive prohibiting DOD mail facilities in leased space within the national capital region—including the Skyline Complex—from operating equipment used to screen mail, including biosafety cabinets, unless the facilities meet five specific operating conditions. These conditions include having trained mail screeners to sample equipment for biological agents and an approved laboratory to analyze the samples. The directive partially addresses a recommendation in the RAND report calling for DOD to develop, evaluate, and ensure that appropriate site-specific screening practices are in place departmentwide. According to the Director, the directive is intended to relay key lessons learned in March 2005—specifically, that equipment for screening mail is ineffective and potentially risky to personnel and facilities when used outside of a comprehensive mail-screening program. The TMA facility at the Skyline Complex did not meet these conditions.
Although the agency purchased a new biosafety cabinet for the Skyline Complex, which is similar to the device in place in March 2005, a TMA official stated that the agency is no longer operating the device and is taking steps to dispose of it in response to the directive. DOD initiated two efforts to gather information on screening operations in its mail facilities. First, DOD’s Joint Program Executive Office for Chemical-Biological Defense, as part of a plan required by the National Defense Authorization Act for Fiscal Year 2006, gathered some information on equipment used for mail screening in DOD mail facilities nationwide. However, according to a joint program office official, the data are not comprehensive because information was not sought from all applicable facilities. Second, in response to the RAND review, Washington Headquarters Services attempted to identify DOD-leased facilities in the national capital region that screen mail for threats. However, as discussed later, this data collection effort had numerous limitations. DOD developed an occupant emergency plan for the Skyline Complex. In July 2005, the Defense Agency, in conjunction with TMA, issued an occupant emergency plan for the Skyline Complex. The plan was reviewed and deemed adequate by a building management specialist in DOD’s Washington Headquarters Services. The plan includes emergency contact information and information about the complex, such as floor plans, that were not readily available during the March 2005 incidents. In addition, according to a Defense Agency official, the plan has been fully coordinated with Fairfax County first responders, who (1) met with Defense Agency officials to discuss the roles and responsibilities of applicable parties, (2) reviewed the plan, and (3) participated in emergency training exercises at the Skyline Complex.
He also stated that if a similar incident were to occur, the plan would facilitate communications between first responders and Skyline Complex employees. The development of an occupant emergency plan addresses findings in this report as well as recommendations from the RAND review. DOD issued supplemental requirements for developing mail security plans. DOD’s December 2001 mail manual required agency mail rooms to develop security plans but, at the time of the incidents, did not clearly specify what the plans should include or require that they be reviewed. A supplement to the manual, issued in September 2005, requires mail room officials to ensure that their plan (1) details the reporting procedures and responsibilities for handling suspicious mail, (2) has been coordinated with local emergency responders, (3) is disseminated to all mail center staff, and (4) is reviewed for potential revisions at least quarterly. The supplemental requirements refer mail room officials to GSA guidance on handling suspicious mail to assist in the development of adequate security plans. DOD’s actions resolve many, but not all, of the problems that arose in the March 2005 incidents. One remaining and overarching concern is whether, despite its actions, DOD will adhere to the interagency coordination protocols in the NRP and NIMS or will revert to the isolated decision-making approach it used at the Pentagon. Other remaining issues include ensuring that DOD (1) facilities have adequate mail security plans in place and (2) mail facilities in the national capital region are appropriately using biosafety cabinets for screening mail. DOD has taken actions to align its procedures with the NRP and NIMS, including the development of an instruction defining the roles and responsibilities of senior DOD leadership during incidents at the Pentagon.
The policy instruction is not expected to be finalized until the fall of 2006, and, until then, it is unknown whether it will adequately specify medical treatment responsibilities in accordance with the coordination protocols in the NRP and NIMS. In October 2005, senior DOD health officials told us that they would handle the medical response at the Pentagon in a similar manner if an incident occurred in the future, in part because they have the authority to do so. In April 2006—more than 1 year after the incident—another senior health official reiterated that DOD has the authority to make final decisions on medical treatment at the Pentagon without collaboration or consultation with other agencies, including HHS. Such views conflict with protocols in both the NRP, which requires an HHS-led coordinated public health response, and NIMS, which prescribes local-level input into decisions affecting local jurisdictions. Until DOD ensures that its senior health officials make medical treatment decisions in accordance with the NRP and NIMS during potential biological incidents at the Pentagon, the problems that occurred in March 2005 remain unresolved. TMA did not have a mail security plan for the Skyline Complex at the time of the incidents, and although federal mail management regulation and DOD’s mail manual require such a plan, it has not subsequently developed one. Until TMA develops a plan and, among other things, coordinates it with local first responders, any future response at the facility may also be hampered. More important, it is not known whether other DOD mail facilities also lack plans, or have inadequate plans, for guiding future responses involving potential biological threats in the mail. As discussed earlier, DOD does not have a process in place to (1) ensure that its mail facilities have mail security plans and (2) verify that each plan has been annually reviewed by a competent authority.
Gaps remain in the actions DOD has taken to ensure the appropriate use of biosafety cabinets for mail screening in DOD-leased mail facilities in the national capital region. First, DOD has not ensured that its mail facilities in the region are not operating biosafety cabinets outside of a comprehensive mail-screening program. As pointed out in the Director of Administration and Management’s January 2006 directive, using mail-screening equipment in isolation from such a program is ineffective and potentially risky. Second, at the conclusion of our review, DOD still had not identified the number of biosafety cabinets in use in the region. Although DOD’s Washington Headquarters Services collected information about facilities in the national capital region that screen mail for threats, its winter 2005 data collection effort was not comprehensive. For example, the office did not attempt to (1) identify whether other biosafety cabinets were being used, (2) determine the conditions under which the equipment was being operated, and (3) collect information on the type and capabilities of other mail-screening equipment being used. Moreover, it appears that numerous DOD mail facilities in the national capital region did not respond to the data request. According to an official from Washington Headquarters Services in April 2006, a follow-up effort was being conducted to gather additional data on mail-screening operations in the region; however, we were unable to obtain specific information regarding the purpose, scope, and status of the effort. Eliminating equipment that is not being used in conjunction with a comprehensive mail-screening program is likely to reduce future false alarms and unnecessary response activities involving the Skyline Complex and other DOD mail facilities in leased space within the national capital region. 
Mail continues to be a potential venue for terrorism, particularly as an opportunity to strike at the Pentagon—a building of national military significance. DOD has taken aggressive measures to ensure the safety of its employees during a potential biological attack, but the challenge ahead is to ensure that DOD’s components and leadership are sufficiently prepared in the event of another potential incident involving anthrax or other biohazards. Preparation involves having the procedures, plans, and training in place to effectively coordinate the best available knowledge and expertise across the many agencies that will likely be involved. While lessons learned from these two false alarms have largely been implemented, there still is a need to tighten controls in the areas discussed above. To help prepare DOD to effectively respond to future incidents involving the suspicion of biological substances in the mail, we recommend that the Secretary of Defense take the following four actions:

Ensure that any future medical decisions reached during potential or actual acts of bioterrorism at the Pentagon Reservation result from the participatory decision-making framework specified in the NRP and NIMS.

Ensure that appropriate officials at all of DOD’s mail facilities develop effective mail security plans in accordance with GSA’s mail management regulation and guidance and DOD’s mail manual.

Ensure that a competent DOD authority conducts a DOD-wide review of all of its mail security plans.

Determine (1) whether biosafety cabinets are being used at mail facilities within DOD-leased space in the national capital region and, if so, (2) whether the equipment is being operated within the context of a comprehensive mail-screening program. If the use of biosafety cabinets does not comply with the criteria specified in the Director of Administration and Management’s January 2006 directive, ensure that the equipment will not be operated. 
We requested comments on a draft of this report from DOD, GSA, the Department of Justice, HHS, DHS, and the Postal Service. Two of these agencies—DOD and GSA—provided written comments. The agencies’ comments are reprinted in appendixes II and III, respectively. DOD agreed with three of our four recommendations, indicating that it either was implementing, or intended to immediately implement, actions to address these recommendations. Furthermore, while DOD is developing a new policy to define the roles and responsibilities of senior DOD leadership―including those involved in making medical treatment decisions―during incidents at the Pentagon, it only partially agreed with our remaining recommendation, related to the need for DOD to make future medical decisions within the participatory decision-making framework specified in the NRP and NIMS. While commenting that “coordination in such events is highly desirable,” DOD reiterated that it has the “medical authority to act in a timely manner to provide the best possible medical protection for its personnel at potential risk in an incident of this nature.” DOD further commented that the NRP does not alter or impede its ability to carry out its medical authorities and responsibilities. We agree that the NRP does not repeal DOD’s medical powers, authorities, or responsibilities. However, in signing the NRP Letter of Agreement, DOD agreed, among other things, to (1) support NRP concepts, processes, and structures; (2) modify its existing plans to comply with the NRP; and (3) ensure that its operations support the NRP. Thus, in our view, DOD’s medical authorities must be exercised in conjunction with DOD’s responsibilities under the NRP. Had DOD followed such an approach in March 2005, concerns such as the validity of the test results could have been discussed among informed agency officials and the provision of unnecessary medicine to DOD employees at lower risk for exposure may have been avoided. 
DOD also commented that the NRP was not in effect during these incidents because none of the criteria for an incident of “national significance” had been met. We agree that the December 2004 NRP was somewhat ambiguous about when an incident is subject to the NRP’s concepts, processes, and structures. However, revisions made in May 2006 clarified that the NRP is “always in effect” and that the plan applies to incidents of lesser severity that may, nevertheless, require some federal involvement. In our view, this revision makes it even clearer that, going forward, coordination is necessary and appropriate with regard to potential bioterrorism incidents and decisions about medical treatment. In addition, despite the plan’s prior ambiguity, it is important to note that other federal officials—including those in DHS and HHS—told us that the NRP was applicable because of the nearly simultaneous occurrence of two incidents involving the Pentagon, a building of national military significance. Thus, according to these and other involved parties, DOD should have responded to the incidents within the context of the federal framework. GSA’s written comments clarified federal requirements related to the annual review of mail security plans. DOD, the FBI (on behalf of the Department of Justice), CDC (on behalf of HHS), and the Postal Service provided technical comments, which we incorporated, as appropriate. DHS did not provide comments. We are sending copies of this report to appropriate congressional committees and subcommittees, CDC, DHS, DOD, the FBI, GSA, HHS, the Postal Service, the Arlington and Fairfax County Offices of Emergency Management, the District of Columbia Health Department, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at [email protected] or (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix IV. To determine what occurred at the Pentagon and Skyline Complex mail facilities in Virginia, we reviewed all available timelines and after-action reports, including those prepared by various Department of Defense (DOD) components, the Postal Service, the RAND Corporation, and other federal, state, and local entities. The after-action reports and timelines document what occurred at the two sites in March 2005 as well as the sequence and timing of what occurred. We also obtained and analyzed other pertinent documentation. We developed a timeline of what occurred based on the information we obtained, and corroborated this information with agency officials, where possible. With respect to this and our other reporting objectives, we interviewed a wide range of officials from the following organizations: Office of the Secretary of Defense, Administration and Management; Office of the Assistant Secretary of Defense for Health Affairs; Office of the Assistant Secretary of Defense for Homeland Defense; DOD’s DiLorenzo TRICARE Health Clinic; DOD’s TRICARE Management Activity (TMA); DOD’s Pentagon Force Protection Agency, including personnel in the Chemical, Biological, Radiological and Nuclear laboratory; DOD’s Washington Headquarters Services; DOD’s Defense Post Office; Department of Health and Human Services; Centers for Disease Control and Prevention (CDC); Department of Homeland Security (DHS); Federal Bureau of Investigation (FBI) Headquarters and its Washington Field Office; U.S. Postal Service; District of Columbia’s Department of Health; and Arlington and Fairfax County Offices of Emergency Management. 
To determine what problems occurred and why they occurred, we obtained, reviewed, and analyzed, among other documents, (1) all available timelines and after-action reports prepared by federal, state, and local agencies that were involved in the response; (2) the Pentagon’s mail-screening contract and procedures; (3) TMA’s mail procedures; (4) federal mail management and other applicable regulations related to occupant emergency plans; (5) DOD requirements, including its mail manual; (6) applicable guidance on the coordination of incidents with appropriate organizations, including the National Response Plan (NRP) and its Biological Incident Annex and the National Incident Management System (NIMS); and (7) CDC guidance related to the provision of medical services to potentially affected employees, including its guidance on the timing of antibiotics to affected individuals. We also reviewed and analyzed GAO’s internal control standards for applicable criteria and interviewed officials from the previously cited organizations as well as those from DOD’s Defense Information Systems Agency, DOD’s Military Postal Service Agency, and the General Services Administration. We compared DOD’s actions with applicable criteria, such as the Pentagon’s contract provisions and procedures, regulations and guidance, and the national coordination protocols in place at the time of the incidents, to identify any variations between the actions taken at the two facilities and the actions specified in the applicable criteria. Where variations existed, we interviewed officials from the previously mentioned organizations to determine why the applicable criteria were not followed. 
To determine the actions DOD has taken that address the problems that arose during the March 2005 incidents at the two mail facilities, we interviewed officials from the previously cited DOD offices as well as the Office of the Assistant Secretary of Defense for Public Affairs, Military Postal Service Agency, Joint Program Executive Office for Chemical and Biological Defense, and General Services Administration. We also interviewed DHS officials from the Science and Technology Directorate and DHS’s Mail Management Program. We obtained and analyzed pertinent information on all identified actions. For example, with respect to actions taken at the Pentagon, we reviewed the new mail-screening contract, recent interagency agreements, and the Pentagon’s draft (1) mail-screening operating procedures, (2) laboratory procedures, (3) notification procedures, and (4) procedures for communicating information to the public. For actions taken in response to the incident at the Skyline Complex, we reviewed TMA’s mail-screening procedures, DOD’s directive prohibiting the use of biosafety cabinets in certain environments, and the Skyline Complex occupant emergency plan, all of which were issued after the March 2005 incidents. To determine the extent to which the actions taken address the problems that arose at the two mail facilities during the March 2005 incidents, we reviewed and analyzed, among other things, the Pentagon’s new mail-screening contract and its draft (1) mail-screening operating procedures, (2) laboratory procedures, (3) notification procedures, and (4) procedures for communicating information to the public. To assess whether the actions appeared to resolve the problems that arose during the incidents, we compared policy and procedural changes to applicable criteria, including criteria contained in DOD’s mail manual, GSA’s regulations and guidance, CDC guidance, GAO’s internal control standards, the NRP’s Biological Incident Annex, and NIMS. 
We determined the status of key recommendations in the after-action reports and, through our analysis, identified further actions necessary to remedy the issues that arose. In addition, to provide broader perspective on issues related to detecting and responding to suspected anthrax incidents, we reviewed previous studies, congressional testimony, and other pertinent documents including those prepared by GAO. We performed our work from June 2005 to August 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Kathleen Turner (Assistant Director), David Hooper, Daniel Klabunde, Steve Martinez, Josh Ormond, Stanley Stenersen, and Johanna Wong made key contributions to this report.
In March 2005, two well-publicized and nearly simultaneous incidents involving the suspicion of anthrax took place in the Washington, D.C., area. The incidents occurred at Department of Defense (DOD) mail facilities at the Pentagon and at a commercial office complex (Skyline Complex). While these incidents were false alarms, DOD and other federal and local agencies responded. The Postal Service suspended operations at two of its facilities and over a thousand DOD and Postal Service employees were given antibiotics as a precaution against their possible exposure to anthrax. This report describes (1) what occurred at the Pentagon and Skyline Complex mail facilities, (2) the problems we identified in detecting and responding to the incidents, (3) the actions taken by DOD that address the problems that occurred, and (4) the extent to which DOD's actions address the problems. Events leading up to the Pentagon incident began when a laboratory that tested samples from the Pentagon's mail-screening equipment informed DOD's mail-screening contractor that test results indicated the presence of anthrax in the mail. By the time the contractor notified DOD 3 days later, suspect mail had already been released and distributed throughout the Pentagon. DOD evacuated its mail-screening and remote delivery facilities, notified federal and local agencies, and dispensed antibiotics to hundreds of employees. The Skyline Complex incident began the same day when Fairfax County, Virginia, emergency personnel responded to a 911 call placed by a Skyline employee reporting that an alarm had sounded on a biosafety cabinet used to screen mail. Local responders closed the complex and decontaminated potentially exposed employees, and DOD dispensed antibiotics to the employees. Similarly, the Postal Service suspended operations at two facilities and dispensed antibiotics to its employees. Laboratory testing later indicated that the incidents were false alarms. 
Analysis of these incidents reveals numerous problems related to the detection and response to anthrax in the mail. At the Pentagon, DOD's mail-screening contractor did not follow key requirements, such as immediately notifying DOD after receiving evidence of contamination. At the Skyline Complex, DOD did not ensure that the complex had a mail security plan or that it had been reviewed, as required. The lack of a plan hampered the response. DOD also did not fully follow the federal framework--including the National Response Plan, which was developed to ensure effective, participatory decision making. Instead of coordinating with other agencies that have the lead in bioterrorism incidents, DOD unilaterally dispensed antibiotics to its employees. DOD has taken numerous actions that address problems related to the two incidents. At the Pentagon, DOD's actions included selecting a new mail-screening contractor and defining the roles and responsibilities of senior leadership, including those involved in making medical decisions. Related to Skyline, DOD prohibited its mail facilities in leased space within the Washington, D.C., area from using biosafety cabinets to screen mail unless the equipment is being operated within the context of a comprehensive mail-screening program. While DOD has made significant progress in addressing the problems that occurred, its actions do not fully resolve the issues. One remaining concern is whether DOD will adhere to the interagency coordination protocols specified in the national plan for future bioterrorism incidents involving the Pentagon. This concern arises because, more than 1 year after the incident, DOD reiterated that it has the authority to make medical decisions without collaborating or consulting with other agencies. DOD also has not ensured, among other things, that its mail facilities (1) have the required mail security plans and (2) are appropriately using biosafety cabinets for screening mail.
The insurance industry offers many types of coverages intended to protect businesses as well as individuals. While the extent of regulation varies by state and by line of insurance, state insurance regulators oversee the provision of insurance; for example, states may approve the rates (prices) insurers may charge and require insurers to cover certain events. In order to ensure the availability of terrorism coverage after September 11, Congress enacted TRIA to temporarily provide property/casualty insurers with partial reimbursement for insured losses, including workers’ compensation losses, resulting from specific acts of terrorism. Insurance can be grouped into three main types: property/casualty, life, and health. Property/casualty insurance includes several types of insurance. Commercial property/casualty insurers cover physical losses to property, business interruptions or loss of use of buildings due to property damage, and also legal liability related to the maintenance of the property and business operations. Workers’ compensation insurance is considered a separate category of commercial property/casualty insurance, and these insurers provide employers with protection against work-related disability and death. In addition, with certain exceptions, almost all employers are required to provide some form of workers’ compensation insurance to cover employer liability for workers who are killed, injured, or disabled on the job from any cause. Personal lines of property/casualty insurance include policies for homeowners and automobile coverage. Homeowners insurance provides coverage for physical losses to the home, its contents, and additional living expenses for the owner while the home is uninhabitable. Life insurers sell either individual or group policies that provide benefits to designated survivors after the death of an insured. Health insurers cover medical expenses resulting from sickness and injury. 
States have primary responsibility for regulating the insurance industry in the United States, and state insurance regulators coordinate their activities in part through NAIC. The degree of oversight of insurance varies by state and insurance type. In some lines of insurance, such as workers’ compensation, insurers may file insurance policy forms with state regulators, who help determine the extent of coverage provided by a policy by approving the wording of policies, including the explicit exclusions of some perils. According to an NAIC representative, while practices vary by state, state regulators generally regulate prices for personal lines of insurance and workers’ compensation policies but not for group life or commercial property/casualty policies. In most cases, state insurance regulators perform neither rate nor form review for commercial property/casualty insurance contracts because it is presumed that businesses have a better understanding of insurance contracts than the average personal lines consumer. However, reinsurers—companies that provide insurance to insurers—generally are not required to get state regulatory approval for the terms of coverage or the prices they charge. Terrorist attacks, particularly those using NBCR weapons, could result in catastrophic losses. Each type of weapon—nuclear, biological, chemical, and radiological—represents different methods of attack. Further, many different agents can be used to carry out a biological, chemical, or radiological attack. See table 1 for general descriptions of each type of weapon and examples of available agents. The agents used to undertake NBCR attacks have differing characteristics and properties and would affect people and property in myriad ways. Of the four types of NBCR attacks, a nuclear bomb would be the most likely to result in fires of any consequence. 
The intense heat produced by the nuclear explosion and subsequent reactions could produce extensive fires located throughout the area of detonation. While the detonation of a dirty bomb (conventional explosives used to disperse radioactive material) could result in blast damage, the resulting fire damage likely would be confined to the immediate area. However, the detonation of both nuclear and dirty bombs would release radioactive materials, resulting in the need to decontaminate buildings and provide immediate healthcare. The distance these radioactive agents disperse from the original detonation site would depend on many factors, including the strength of the explosive and meteorological conditions. While the release of chemical and biological agents is significantly less likely to result in fires of any consequence, the agents also have the potential to contaminate buildings and make them unusable for long periods. These agents could be released within buildings or outdoors, with chemical agents more likely than biological agents to result in immediate harm to humans. All NBCR attacks, depending on size of the explosion or the quantity of the agent, have the potential to result in fatalities, injuries, or illness. After the events of September 11, when certain coverage for terrorism events disappeared, Congress passed TRIA. This law created a temporary program that effectively functions as reinsurance—for the commercial property/casualty and workers’ compensation insurance industries only. Under TRIA, the federal government would reimburse insurers in these lines for 90 percent of their losses, up to a specified level, after the insurers paid a deductible. The program also would cover losses caused by NBCR attacks, if insurers had included this coverage in an insurance policy. However, coverage of NBCR attacks, as with other terrorist attacks, would have to meet the program’s criteria to trigger reimbursements. 
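The reimbursement mechanics described above (the federal government pays 90 percent of an insurer's losses above its deductible, up to a specified level) reduce to a simple calculation. The sketch below is illustrative only; the deductible amounts and program cap are hypothetical placeholders, not figures from the statute:

```python
def tria_federal_share(insured_loss, deductible, cap=100_000_000_000, share=0.90):
    """Illustrative sketch of a TRIA-style reimbursement:
    the federal government pays `share` of insured losses above the
    insurer's deductible, with total covered losses capped at `cap`.
    The cap and share values here are hypothetical defaults."""
    if insured_loss <= deductible:
        return 0.0
    eligible = min(insured_loss, cap) - deductible
    return share * eligible

# A hypothetical insurer with a $50 million deductible facing
# $200 million in certified terrorism losses:
print(tria_federal_share(200e6, 50e6))  # 0.90 * 150e6 = 135000000.0

# Losses below the deductible produce no federal reimbursement:
print(tria_federal_share(40e6, 50e6))   # 0.0
```

The insurer retains its deductible plus 10 percent of losses above it, which is the cost-sharing structure the report describes.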
TRIA requires certain insurers to “make available” coverage for terrorist events that the Secretary of the Treasury has certified as committed by individuals acting on behalf of a foreign person or interest. Commercial property/casualty insurers must offer terrorism coverage to their policyholders, although they could impose an additional charge. Policyholders have the option of not purchasing the coverage and adding a terrorism exclusion to their policies. If policyholders chose to purchase the terrorism coverage for their property/casualty policies, insurers then could not add any additional exclusions or conditional language to the policies. In December 2005, Congress extended many of the same provisions of the original statute in the Terrorism Risk Insurance Extension Act of 2005, but increased the required amount insurers would have to pay in the aftermath of a terrorist attack. While deliberating the extension of TRIA, Congress considered whether to cover additional lines of coverage, such as group life insurance, and set a lower deductible that applied only to NBCR terrorist events. However, Congress did not enact these changes. The TRIA extension, which continues to apply only to the commercial property/casualty industry, is set to expire on December 31, 2007. Several commonly accepted principles underlie insurers’ ability to determine whether to offer coverage for a particular risk and at what price. Ultimately, the decision becomes a question of whether sufficient information exists for insurers to accurately estimate potential losses. According to standard insurance theory, four major principles contribute to the ability of insurers to estimate and cover future losses: the law of large numbers, measurability, fortuity, and the size of the potential losses. 
However, measuring and predicting losses associated with NBCR risks can be particularly challenging for a number of reasons, including lack of experience with similar attacks, difficulty in predicting terrorists’ intentions, and the potentially catastrophic losses that could result. Nevertheless, models have been developed that attempt to assist insurers in evaluating terrorist and NBCR risks. However, many insurers and other insurance experts continue to believe that there is an insufficient basis for estimating the probable frequency of terrorist attacks, including NBCR attacks. Four principles generally underlie an insurance company’s willingness to provide insurance for a particular risk or type of risk. Each contributes to the insurer’s ability to measure and predict, with a reasonable degree of accuracy, the likely frequency of occurrence and the probable severity of losses that will result from each occurrence. In the ideal situation, when all these factors are satisfied, the insurer can add other expenses and profits to the expected losses and determine a price that is appropriate to the risk. Insurers may still decide to offer insurance for risks that deviate from the “ideal.” However, as one or more of the factors vary from the ideal, the ability of the insurer to estimate future losses decreases, the risk increases, and the insurer’s capital is more exposed to inadequate prices for the coverage that the insurer offers. These principles are:

The law of large numbers must apply. There must be a sufficiently large number of homogeneous units exposed to random losses, both historically and prospectively, to make the future losses reasonably predictable. This principle works best when there are large numbers of losses with similar characteristics spread across a large group. For example, an automobile insurer could analyze annual data on the frequency and severity (cost) of accidents and the characteristics of drivers (gender or age) involved in the accidents to predict expected losses for certain types of drivers, and thus set premiums adequate to cover these losses. The greater the experience with losses, the better the insurer would be able to estimate both the frequency and the severity of future losses, based on what happened in the past.

The loss must be definite and measurable. The insurer must be capable of determining whether a loss has taken place and of setting a dollar value on the amount of the loss.

The loss must be fortuitous or accidental. That is, the loss must result from chance and not be something that is certain to happen. To the extent that a future loss approaches certainty, an insurer would have to charge the full value of the loss plus an additional amount for the expenses incurred.

The loss must not be catastrophic. That is, the losses should not affect a very large percentage of an insurance company’s policyholders at the same time, for example, in a limited geographic area. Alternatively, a catastrophic loss is one that is extraordinarily large relative to the amount of exposure in an insurance pool.

When applied to NBCR terrorist risks, these principles can help explain why NBCR risks are so challenging for insurers. Most importantly, because so few NBCR attacks have occurred, the pool of experience in the United States is very limited, and the law of large numbers does not help insurers to measure and predict the frequency and severity of future losses. One of the reasons many insurers we interviewed had concerns about insuring NBCR risks was the small number of historical events that could be used as a basis for predicting the future frequency and severity of these risks. 
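The automobile pricing example above reduces to a simple expected-loss ("pure premium") calculation: observed frequency times observed severity, plus a load for expenses and profit. The figures and the 30 percent load in this sketch are invented purely for illustration:

```python
# Hypothetical loss experience for a class of 100,000 similar drivers:
policies = 100_000
claims_per_year = 5_000    # observed claim frequency (count)
avg_claim_cost = 4_000     # observed claim severity, in dollars

frequency = claims_per_year / policies       # expected claims per policy
expected_loss = frequency * avg_claim_cost   # "pure premium" per policy

# The insurer adds expenses and profit; a 30% load is assumed here:
premium = expected_loss / (1 - 0.30)

print(expected_loss)        # 200.0
print(round(premium, 2))    # 285.71
```

With a large, homogeneous pool, the observed frequency and severity are stable enough that a premium set this way is likely to cover future losses; with almost no loss history, as in the NBCR case, neither input can be estimated with confidence.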
However, several of the academic experts we interviewed also noted that a limited number of historical events did not automatically make a risk uninsurable. For example, insurers have covered risks related to commercial satellites for which there has been a very limited record of losses; however, several of the experts noted that for commercial satellites the prospective loss is measurable and documentation is available to assess their safeguards. In addition to the inapplicability of the law of large numbers, insurers also told us that they are not able to fully measure the costs of NBCR terrorist risks (that is, the potential severity). Because certain types of NBCR attacks could have long-term or uncertain consequences, insurers may not be able to measure, or even clearly identify, their prospective losses. Almost all of the insurers we interviewed indicated uncertainties about the scope of potential losses as a factor contributing to difficulties in insuring NBCR risks. For example, according to America’s Health Insurance Plans (AHIP), a national trade association for health insurers, some NBCR attacks could produce latent illnesses such as cancer. The costs to health insurers would not be immediate but would occur in the years to come. In addition, representatives of insurers and an insurance broker with knowledge of the property/casualty market told us that they believed the costs of some NBCR events would be uncertain because of difficulties in estimating how much time would pass before buildings were successfully decontaminated (for example, after an anthrax attack) or before anyone would even be willing to enter contaminated locations (for example, after a nuclear attack) to assess the damage, making the areas unusable for a long time. Moreover, losses must be fortuitous or accidental. By definition, terrorist events, including NBCR attacks, are deliberate and do not occur by chance. 
As we previously reported, unlike storms and accidents, terrorism involves an adversary with deliberate intent to destroy, and the probabilities and consequences of a terrorist act are poorly understood and difficult to predict. In other words, even if an extensive history of NBCR terrorism experiences existed, without the element of randomness, it would not necessarily be indicative of the future frequency and severity of terrorist events. Likewise, according to the American Academy of Actuaries and the Insurance Information Institute, predicting terrorist risks is particularly difficult because the attacks are not random; they are intentional, and the attack characteristics are not likely to be constant, as terrorists adjust their strategies. Finally, insurance experts told us that NBCR risks could represent the potential for catastrophic (severe) losses because of the concentration of risks that could face either a particular insurer or the industry. Most of the academic experts we interviewed stressed that the potential for catastrophic losses, rather than the lack of reliable data on the frequency and severity of NBCR risks, made insurers reluctant to insure them. Several of these experts observed that in Florida, where the risk of hurricanes is both greater and more predictable than NBCR terrorism risks, insurers were leaving the state because of their exposure to catastrophic losses, perhaps more so than the unpredictability of the risk. An NBCR event, like a natural catastrophe, could result in catastrophic losses if it created significant losses to a high proportion of insureds in a particular geographic area. Should these events take place in urban areas that serve as major places of employment, they could result in tremendous exposure for lines of insurance providing coverage for workers, such as health or workers’ compensation. 
In addition, as explained by representatives of the New York Department of Insurance, because NBCR risks have the ability to cut across many lines of insurance in a concentrated geographic area, large insurance companies that typically cover several lines of insurance could find it very difficult to diversify their risk portfolios. According to the American Academy of Actuaries, the prospect of catastrophic losses from an NBCR event could be far larger than insurers could sustain without impairing their ability to continue providing all other insurance coverages. The prospect of catastrophic risk poses additional problems because insurers, like most businesses, generally have two major objectives. First, they expect to make a profit for their owners. Second, they plan to survive so as to operate in the future. Several of the academic experts we interviewed questioned the incentive insurers would have to insure risks, such as catastrophic NBCR attacks, that might jeopardize their financial soundness and profitability. If an insurer were faced with the potential for a catastrophic loss—that is, one that threatened its solvency or its survival—the insurer would be less likely to be willing to provide insurance, or at a minimum, the insurer would limit its exposure to the extent that it could. The larger and more uncertain the estimates of projected losses, the less likely an insurer would be willing to voluntarily insure the risk. Moreover, insurers could have another disincentive to insuring catastrophic risks for which they might not be adequately capitalized—the prospect of receiving a low rating from a rating agency. We interviewed representatives from three rating agencies, two of whom said they generally viewed NBCR risks as not insurable because of their potential for catastrophic losses. 
For example, a representative from one credit rating agency said that if his company considered existing potential exposure to NBCR risks when analyzing commercial property/casualty insurers, it might have to downgrade ratings because of the magnitude of potential losses. Because the frequency and severity of NBCR risks are difficult to measure, insurers have turned to techniques and processes that they have applied to other catastrophic risks. As we previously reported, insurers have come to rely on computer models and modeling firms to assist them in estimating the frequency and severity of catastrophic events and the probable losses that they might face. Since Hurricane Andrew in 1992, insurers have recognized the challenges associated with insuring low-frequency, high-cost risks such as natural disasters and increasingly have turned to the use of computer models to better estimate the expected frequency and severity of the risks. After September 11, the firms that own these models extended them to estimate the effects of man-made, or terrorist, events as well. However, as noted by the Insurance Information Institute and other insurance experts, estimating the incidence of terrorism is fundamentally different and vastly more difficult than forecasting natural catastrophes, where insurers can learn much about the frequency and severity of events through historical claim data, meteorological and geological records, and increases in scientific knowledge. In view of the limited history of NBCR attacks in the United States, representatives of the modeling firms reported to us that they generally have relied on panels of terrorism experts to assess threats posed by terrorists. While these experts do not have access to current classified data, they use their judgment and expertise to assess the probability (that is, the expected frequency) of future terrorist attacks.
For example, the experts assess the likelihood of terrorists targeting urban areas, based on population density, perceived importance, and the presence of well-known buildings within those areas. The experts also consider the level of difficulty of using weapons of different sizes and capacities as a way of estimating the potential severity of terrorist attacks. The modeling firm representatives reported that they use a number of statistical techniques to convert the subjective opinions of their experts and the characteristics of NBCR weapons into quantified estimates of the frequency and severity of potential losses. While we did not assess the capabilities of these models, we have noted in a previous report that some federal agencies, even with access to classified data, have difficulty including in their risk assessments the relative probability of various terrorist threat scenarios. Representatives of insurers and insurance brokers also said that they generally had little confidence in the ability of models to estimate the frequency of future terrorist attacks, and the American Academy of Actuaries noted that while there has been some development of terrorism models since September 11, quantification of terrorism exposure still was extremely difficult. The Academy also noted that the probabilities associated with the occurrence of a terrorist attack have remained somewhat judgmental and a key source of uncertainty. Representatives of insurers told us that the models can be useful in simulating scenarios for particular NBCR attacks in specified locations, allowing them to estimate the potential severity of possible losses for specific events. Using available engineering, scientific, and demographic research, the models can estimate potential insured losses for the portfolios of individual insurers.
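The general approach described above, converting expert-judged attack probabilities and weapon characteristics into loss estimates, can be sketched as a Monte Carlo simulation. Every parameter below is an illustrative assumption, not an output of any actual modeling firm's product:

```python
import random
import statistics

random.seed(0)

def simulate_annual_losses(p_attack=0.01, sev_mu=6.0, sev_sigma=1.5, years=50_000):
    """Each simulated year: an attack occurs with expert-judged probability
    p_attack; severity (in $M) is drawn from a lognormal distribution standing
    in for engineering, scientific, and demographic damage estimates."""
    losses = []
    for _ in range(years):
        if random.random() < p_attack:
            losses.append(random.lognormvariate(sev_mu, sev_sigma))
        else:
            losses.append(0.0)
    return losses

losses = sorted(simulate_annual_losses())
expected_annual_loss = statistics.mean(losses)
tail_1_in_1000 = losses[int(0.999 * len(losses))]  # 1-in-1,000-year loss
print(f"expected annual loss: ${expected_annual_loss:.1f}M")
print(f"1-in-1,000-year loss: ${tail_1_in_1000:.1f}M")
```

The wide gap between the expected loss and the tail loss is the frequency/severity problem in miniature: a premium based on the mean says little about the capital an insurer would need to survive a catastrophic year, and both figures are only as good as the judgmental probability fed into the model.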
Several insurers and brokers said that they found models, including those that they may have created themselves, useful in managing insurers’ exposure to terrorism risks. Finally, insurance experts and representatives of insurers and reinsurers we interviewed agreed that difficulties in predicting NBCR events, as well as the prospect for catastrophic losses, make these risks difficult to insure. However, as noted by several of the experts from academia, even though a risk may not satisfy all the principles of insurability, insureds may be able to find some amount of coverage. Several experts noted that insurability is not simply an issue of extremes—that is, either insurable or uninsurable. Rather, as specifically noted by one, the insurability of events should be viewed as a continuum, with some events such as NBCR being on the extreme end of the continuum. Insurers’ exposure to NBCR risks varies widely by line of insurance, and insurers offering coverage face challenges in pricing. In view of the underlying difficulties in insuring NBCR risks, property/casualty insurers generally have tried to exclude such events from their coverage. However, for reasons that vary somewhat by line of insurance, workers’ compensation, life, and health insurers generally offer coverage for NBCR events. Insurance industry representatives told us that most property/casualty insurers have used long-standing policy exclusions to limit coverage of NBCR events, although experience with these types of exclusions suggests that they could be challenged in court. Representatives of property/casualty insurers said that these risks continue to be unattractive to insure because of difficulties in predicting the frequency and severity of these risks, the potential for large and uncertain losses, and the limited amount of private reinsurance. 
Despite similar concerns and subsequent difficulties in setting prices due to lack of reliable historical data, coverage for workers’ compensation, life, and health insurance generally is available on the market. In large part, insurers provide this coverage, particularly workers’ compensation, because they are required to by states, or because the coverage (for example, life insurance) does not readily lend itself to excluding one type of risk. Nevertheless, insurance and state regulatory officials expressed particular concerns about whether the prices set for workers’ compensation insurance would cover potential losses, should a major NBCR event occur. Representatives of life and health insurers told us that generally their prices did not reflect their potential exposure to NBCR risks. Unlike workers’ compensation, life, and health insurers, insurers selling property/casualty insurance largely have excluded NBCR risks from their policies. Since Congress passed TRIA, the supply of commercial property/casualty insurance for conventional terrorism appears to have increased, yet insurance policies covering NBCR risks have remained in short supply. In its most recent survey of terrorism insurance in the commercial property/casualty industry, Treasury found that the percentage of insurers that reported that they wrote some coverage for terrorism using conventional weapons (that is, not NBCR) increased from 73 percent in 2002 to 91 percent in 2003 and 2004. In contrast, the percentage of insurers that reported covering NBCR risks in some of their policies remained about the same during that general period and was significantly smaller—about 35 percent. Moreover, as explained by Treasury officials, the 35 percent represented insurers that offered any kind of coverage for NBCR risks, meaning that an insurer would be counted as offering NBCR coverage even if it offered only one policy for one type of NBCR risk. 
Representatives of insurance and insurance brokerage companies also told us there was a very limited supply of NBCR coverage in the commercial property/casualty marketplace. Representatives of the three largest brokerage firms that find property/casualty insurance coverage for large commercial businesses told us that insurers offering terrorism coverage exclude NBCR risks. According to representatives of insurers, exclusions for NBCR risks are contained in policies offered by commercial property/casualty insurers underwriting in regulated insurance markets and also are contained in stand-alone terrorism insurance policies offered by specialty insurers in the nonregulated market. A representative of one of the specialty insurers with whom we spoke said the company offered very limited amounts of NBCR coverage, typically for one or two of the risks. For example, this company would offer $10 million of biological and chemical coverage for certain commercial properties, but the insurer would not provide coverage above that threshold. Representatives of insurance and insurance brokerage companies we interviewed said that even though TRIA would cover NBCR losses incurred by an insurer the same as it would any other covered terrorist losses, little coverage for NBCR risks was available because commercial property/casualty carriers largely viewed NBCR risks as uninsurable. According to representatives of two large commercial property/casualty insurers, both of whom underwrote insurance in states with localities considered at higher risk for a terrorist attack, the current structure of TRIA offered little incentive to cover NBCR losses, even though TRIA would provide coverage for some insured NBCR events. 
For example, they said that because their companies offered workers’ compensation insurance in areas at higher risk for terrorism, the companies were less likely to increase their level of exposure to NBCR events by also offering NBCR coverage in their commercial property and general liability policies. Under TRIA, the more business an insurer writes, the larger its deductible; and the more lines of insurance an insurer writes, the more it is exposed to losses from a catastrophic event. In addition, because of uncertainties surrounding the frequency and severity of NBCR events as well as the enormity of potential losses, representatives of insurers we interviewed said that they would have difficulty setting prices to cover such losses, even using information from the modeling firms. These representatives also expressed concerns about the potential insured losses of an NBCR event being largely undeterminable for many years after the event occurred. Such an event could have many long-term consequences— for example, the extent and duration of remediation for a contaminated building, the resulting business interruption to the policyholder, and any related litigation involved. Finally, as confirmed by representatives of the Reinsurance Association of America, private reinsurance—the risk- spreading mechanism that insurers typically use to reduce their potential losses—provided very limited amounts of coverage for NBCR risks in the property/casualty market. Property/casualty insurers long have sought to limit their exposure to certain perils, such as flood, that they consider uninsurable. Property/casualty insurers have written exclusions related to nuclear hazard risk into their standard policies for decades, generally to protect themselves from losses related to nuclear power plant accidents. 
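The deductible mechanics noted above (the more business an insurer writes, the larger its deductible) can be sketched numerically. The rates below approximate the program's 2006 parameters, a deductible of 17.5 percent of the insurer's prior-year direct earned premium and a 90 percent federal share above it; they are assumptions for illustration and should be checked against the act itself:

```python
def tria_insurer_retention(loss, prior_year_premium,
                           deductible_rate=0.175, fed_share=0.90):
    """Insurer's share of an insured terrorism loss under a TRIA-style
    backstop (all amounts in $M). The insurer pays its deductible in full,
    then a co-share of losses above the deductible."""
    deductible = deductible_rate * prior_year_premium
    if loss <= deductible:
        return loss
    return deductible + (1.0 - fed_share) * (loss - deductible)

# The same $200M insured loss hits two insurers of different size:
small = tria_insurer_retention(200.0, prior_year_premium=100.0)
large = tria_insurer_retention(200.0, prior_year_premium=1000.0)
print(f"smaller insurer retains ${small:.2f}M")
print(f"larger insurer retains ${large:.2f}M")
```

Under these assumptions the smaller insurer retains about $36 million of the $200 million loss while the larger insurer retains nearly all of it, which illustrates why insurers already writing substantial premium in higher-risk areas saw little incentive to add NBCR exposure in other lines.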
Representatives of insurance companies and brokerage firms agreed that existing nuclear hazard exclusions were broad enough to likely exclude any losses resulting from nuclear and radiological events, including a terrorist attack. According to these same insurance industry representatives, property/casualty insurance contracts issued prior to September 11 did not specifically include references to losses from the terrorist release of biological and chemical agents. Rather, Insurance Services Office (ISO) officials told us that the standard contracts they provided for industry use contained language that excluded coverage for losses caused by pollution and contamination. For instance, the pollution exclusion was developed to exclude coverage for the release of many different types of substances— from asbestos to pesticides—that could cause harm to people and the environment. Representatives of some of the insurers we interviewed believed that their pollution and contamination exclusions also might allow property/casualty insurers to exclude losses caused by biological and chemical agents released by a terrorist. However, representatives of one insurance broker we interviewed suggested that pollution and contamination exclusions could be challenged in the courts if a biological or chemical event were to occur. Courts determine whether a particular substance is or is not a pollutant based upon, among other things, the language in the policy, the facts and circumstances of the case, and the law of the jurisdiction. As a result, the language of the standard pollution exclusion might be susceptible to broad interpretation by the courts. In other words, some uncertainty exists, even in the insurance industry, about how effectively the pollution and contamination exclusions would protect insurers against losses from a NBCR terrorist attack. 
In addition to disputes over the exclusion of NBCR risks in policies, there are other situations where the extent of property/casualty insurers' coverage of such events could depend on judicial or other determinations.

Cause of loss. According to standard nuclear exclusions, commercial policies would not cover damage caused by a nuclear blast. However, regardless of any exclusions, according to information provided by NAIC, approximately 16 states (including New York) require property/casualty insurers to cover losses from a "fire following" an event, irrespective of the cause of fire. A national security expert told us that in the case of a nuclear bomb detonation, once the property was destroyed, insurers could dispute the extent to which fire (covered in fire following states) or the blast (excluded by the nuclear exclusion) caused the damage. In other contexts, disputes over the cause of loss often have been litigated. For example, many homeowners who suffered losses in Hurricane Katrina have filed lawsuits challenging property/casualty insurers' determinations about the cause of their losses.

Certified as a terrorist act. The Congressional Budget Office (CBO) has suggested that some NBCR events might not be readily identified as terrorist acts, as defined by TRIA, and therefore coverage—both for insurers and for their policyholders—would be unclear. For example, the person or persons who mailed the letters contaminated with anthrax in 2001, killing several people and sickening many more, have never been identified. However, should TRIA not be renewed in 2007, this particular determination would not apply.

In addition, homeowners' insurers, part of the personal property/casualty market, have long-standing exclusions in their policies, similar to the exclusions contained in commercial property/casualty policies. According to representatives of two large homeowners' insurers, the exclusions limited their exposure to NBCR risks.
While these representatives also told us they have not excluded conventional terrorist events from their policies, they said their companies generally manage their exposure to any terrorism risks by diversifying their portfolios. The 2005 Treasury study reported that less than 3 percent of policyholders from a range of industries reported purchasing NBCR coverage in their commercial property/casualty insurance policies. Further, although purchase rates for NBCR insurance do not necessarily reflect overall demand, a major reason for not purchasing NBCR insurance given by the survey respondents was that they did not believe they were at risk for an NBCR event. While the Treasury study did not break down purchase rates for NBCR insurance by industry sector, another study of the market for terrorism insurance conducted by insurance brokers found that companies in the real estate, financial, and health care sectors had the highest rates for purchasing terrorism insurance. Although the brokerage firm study does not specifically address the demand for NBCR insurance, we consider demand for terrorism insurance generally to be a reasonable approximation of where demand for NBCR insurance might exist. For instance, demand for terrorism insurance may be strong in the real estate sector because terrorism coverage typically is required as part of a commercial business loan transaction, according to the Mortgage Bankers Association. However, representatives of the Mortgage Bankers Association also said lenders generally do not require NBCR coverage, because little or no coverage is available. In addition, a few of the academic experts we interviewed suggested that some individuals and businesses might not purchase NBCR coverage under the assumption that the federal government would cover losses from an NBCR attack, as the government agreed to do for some personal and commercial property losses resulting from Hurricane Katrina. 
However, several risk managers we interviewed from the hospitality and transportation industries, as well as commercial property owners, all reported a willingness to buy NBCR coverage in the private market. These managers expressed frustration at not being able to purchase insurance for NBCR risks, which they said they could do little to prevent or mitigate, particularly because an NBCR attack would be an intentional act. One of the risk managers that we interviewed noted that his company could not find enough NBCR coverage for even one building, so the company used a captive to self-insure against NBCR risks. Should an NBCR event occur, workers’ compensation, life, and health insurers would be responsible for covering loss of life and medical treatment for injuries because they generally provide coverage for these events. Following September 11, NAIC issued guidance stating its member state regulators believed terrorism exclusions were “not necessary” or were “inappropriate” for workers’ compensation, life, and health insurance policies, with exceptions limited to cases where insurers could demonstrate they would become insolvent from offering the coverage. According to an NAIC representative, regulators did not perceive exclusions as necessary because they presumed these insurers were diversifying their risks in these lines by insuring individuals across the country. Workers’ compensation insurers must cover losses from NBCR events that occur at the workplace, including related illnesses and injuries. According to multiple sources, applicable state laws generally require workers’ compensation insurers to cover nearly all perils, including those from NBCR risks. 
In addition, according to representatives of the National Council on Compensation Insurance (NCCI), an organization that prepares insurance rate (price) recommendations for workers’ compensation, under state workers’ compensation laws, employers are responsible for covering unlimited medical costs and a portion of lost earnings for injuries or illnesses that occur during the course of employment, regardless of the cause. While workers’ compensation insurers generally are not permitted to exclude any perils from coverage, insurer representatives advised us that any surcharges they may be permitted to charge for NBCR exposure likely would not cover potential losses. According to NCCI representatives, recognizing that workers’ compensation insurers have exposure to terrorism losses, at least 36 states, including the District of Columbia, have allowed workers’ compensation insurers to file rates that include an additional surcharge (an average of 2 cents per $100 of employee payroll) for terrorism risk. NCCI developed this statewide surcharge based on the results of a model, as a way for insurers that underwrite in states that belong to NCCI to cover potential losses from terrorism, including those from NBCR weapons. While representatives of NCCI were reasonably satisfied that the surcharges were actuarially sound, in the District of Columbia—where insurers may file the NCCI-developed terrorism surcharge—regulators did not believe that the surcharge was actuarially sound because of assumptions made in the model about localities designated to be at higher risk for terrorist events. Moreover, the willingness of state regulators that do not participate in NCCI to approve terrorism surcharges in workers’ compensation may vary. For example, we obtained information from two large states that do not participate in NCCI and have geographic areas considered at higher risk for terrorism— New York and California. 
In New York, regulators have allowed workers’ compensation insurers to file an additional surcharge for terrorism. However, representatives of the New York Compensation Insurance Rating Board (Rating Board) told us that the terrorism surcharge the Rating Board developed does not distinguish between conventional and NBCR risks. They did not believe that the Rating Board could justify a higher surcharge to cover NBCR risks because of the limited historical data on NBCR attacks and further, if the Rating Board did, the cost would be so high that businesses would probably find it unaffordable. In contrast, California regulators have not permitted insurers to file rates with additional surcharges specifically for terrorism, including NBCR risks. California regulatory officials told us that they would reject any terrorist or NBCR risk surcharges because they thought such rate justifications were not based on recognized actuarial methods. Representatives of workers’ compensation insurers we interviewed said that factors unique to workers’ compensation also made it difficult for them to cover NBCR risks. First, as they explained, unlike some other lines of insurance, workers’ compensation insurance covers losses beyond the expiration date of the policy. The representatives told us their most expensive claims typically came from workers who were disabled from illness or injury because they were entitled to lost wages as well as medical expenses. For NBCR events, quantifying medical expenses could be especially challenging because some illnesses or disabilities might not manifest until much later or could be difficult to trace to a workplace occurrence. For example, representatives of workers’ compensation insurers told us that in the case of smallpox, it might be difficult to determine whether the worker contracted the illness at work or elsewhere. 
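The statewide surcharge described earlier, an average of about 2 cents per $100 of employee payroll, is small in per-employer terms, which helps explain regulators' doubts that it would fund losses from a major NBCR event. A back-of-the-envelope sketch (the rate is the average cited in this report; the payroll figures are hypothetical):

```python
def annual_terrorism_surcharge(annual_payroll, rate_per_100=0.02):
    """NCCI-style statewide workers' compensation terrorism surcharge:
    about 2 cents (rate_per_100 dollars) per $100 of annual payroll."""
    return (annual_payroll / 100.0) * rate_per_100

for payroll in (500_000, 5_000_000, 50_000_000):
    surcharge = annual_terrorism_surcharge(payroll)
    print(f"payroll ${payroll:>11,} -> terrorism surcharge ${surcharge:>8,.0f}/yr")
```

Because the rate must be applied uniformly statewide, an employer with concentrated staff in a dense urban center pays the same rate as a rural employer, the cross-subsidy the NCCI representatives described.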
In addition, these representatives told us that they may be further constrained in their ability to adjust their price for specific geographic risks. Representatives of NCCI told us that terrorism surcharges must be applied equally throughout a state, thus the terrorism surcharge did not reflect that employers in certain areas, such as urban areas where employees might be more concentrated, had a greater exposure to terrorist events. Despite the exposure of workers’ compensation insurers to NBCR risks, representatives of private market insurers and two public insurance funds told us that the availability of private reinsurance for these risks was limited. Therefore, they explained that they largely would rely on TRIA to cover NBCR risks. As representatives of the private insurers explained, many of their private-market reinsurance policies specifically excluded NBCR risks, or to the extent coverage was available, reinsurers offered it at prices they could not afford. Representatives of the New York State Insurance Fund told us that they had not purchased reinsurance because they viewed the high costs of reinsurance for their market as unaffordable. As a result, should a large NBCR attack occur, these representatives said that their fund might have to turn to the state to help pay claims. In contrast, representatives of the California State Compensation Insurance Fund said that they were willing to pay higher prices based on available capacity for reinsurance for NBCR risks. The American Council of Life Insurers officials, as well as representatives of life insurers we interviewed, told us they believed that most states do not allow for terrorism or NBCR exclusions in life insurance policies. In two of the states specifically included in our review—New York and California—state insurance law and implementing regulatory policy prohibited both individual and group life insurance policies from excluding NBCR or other terrorism events. 
On the other hand, regulatory officials from the third jurisdiction we included in our review, the District of Columbia, told us that they did not have any legal requirements that life insurers cover NBCR events and that several group life insurers recently filed policies with exclusions for NBCR risks. While group life insurers have exposure to NBCR risks, representatives of group life insurers that provide coverage nationwide told us that charging higher rates to insureds at potentially greater risk for an NBCR event would be difficult. This is because of the way the insurers typically price coverage and their inability to determine which employers would be at greater risk for an NBCR event. Life insurers price their products based on mortality tables derived from experience with prior insurance contracts and calibrated for the effects of certain individual characteristics such as a smoking habit, or group characteristics such as occupation type. However, representatives of life insurers said that these tables do not take into account a greater number of deaths that could occur as a result of a terrorist or NBCR act. Furthermore, these representatives told us that they have difficulty determining whether a particular employer or group would be more or less at risk for death from an NBCR event because they traditionally have not tracked the geographic locations of individuals covered by their policies. However, whether losses from a large NBCR attack would be catastrophic for life insurers was unclear and could depend on the extent to which their portfolios were diversified. Representatives of national life insurers told us that they have a broad portfolio of exposure nationwide, which helps them diversify their risks. In the event of a large NBCR attack in which up to one million insured people died, representatives of the American Council of Life Insurers told us that most large life insurers probably would be able to pay the death claims. 
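The mortality-table pricing described above can be sketched as expected-claims arithmetic. The rates below are invented for illustration (real tables are far more detailed), and the key point is what is missing: no term captures the possibility of an NBCR event killing a large share of one employer's covered group at once.

```python
# Illustrative annual mortality rates (deaths per life-year); not an actual table.
MORTALITY_TABLE = {
    ("35", "nonsmoker"): 0.0008,
    ("35", "smoker"):    0.0016,
    ("55", "nonsmoker"): 0.0040,
}

def group_life_premium(insureds, expense_load=1.10):
    """Annual group premium = load x sum of (mortality rate x face amount).
    `insureds` is a list of ((age, smoking_status), face_amount) pairs."""
    return expense_load * sum(MORTALITY_TABLE[key] * face for key, face in insureds)

group = [(("35", "nonsmoker"), 100_000),
         (("35", "smoker"),    100_000),
         (("55", "nonsmoker"), 100_000)]
print(f"annual premium: ${group_life_premium(group):,.2f}")
```

Because the rates are calibrated to normal experience, a catastrophe producing many simultaneous deaths in one covered group would generate claims far above the premium collected, which is the concentration risk the national insurers' representatives described.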
However, these representatives also said that small or medium-size group life insurers that received a significantly high number of death claims following an NBCR attack might be unable to pay claims and become insolvent, and that state guarantee funds would have to levy an assessment on the remaining insurers in the states to pay the claims. Notwithstanding their belief that they could survive an NBCR attack, representatives of the two national insurers we interviewed were concerned about the companies’ exposure to catastrophic NBCR losses. These representatives particularly were concerned because their companies generally insure all the employees of a given company. These employees could be concentrated in one geographic location, and the insurance companies could be liable for huge losses if an NBCR event led to widespread casualties in one area. In addition, life insurers do not have access to TRIA. Representatives of two group life insurers we interviewed said that their companies either had not found reinsurance for NBCR risks or the costs were very high relative to the amount of insurance that could be purchased. We also spoke with representatives of a large group life reinsurer who said their company provided some coverage for NBCR events, although the company limited this exposure to $100 million per event. Although many health insurers cover groups of individuals concentrated geographically, representatives of AHIP and two national group health insurers told us that determining overall exposure to NBCR risks was challenging. Further, they explained that state regulation of NBCR coverage was not the primary reason they covered terrorism risks, and AHIP could not provide us documentation of regulatory requirements for NBCR coverage. 
Nevertheless, insurance regulatory officials from two states with localities at higher exposure to terrorism risks—California and New York—told us they have not allowed health insurance policies to exclude medical expenses related to illness or injury sustained from an NBCR event. In contrast, regulatory officials in the District of Columbia told us that they did not have any requirements that health insurers cover NBCR events. Representatives of two national group health insurers we interviewed described the difficulties they would have in attempting to set actuarially sound prices for health risks from NBCR terrorist events. First, representatives of health insurers said that they typically price health coverage based on experience with their insured populations and without knowing the likely impact of NBCR risks, they could not develop actuarially sound prices for such a risk. Further, the representatives explained they tend to limit policy coverage by procedure or by individual, rather than by the source of the illness. For example, a representative of one health insurer told us that while the company did develop prices for other low-frequency, high-cost claims such as liver transplants, they could only do so because of prior experiences. Second, uncertainties over the long-term health effects of NBCR attacks, such as the need for psychological counseling or cancer treatment, make it difficult for insurers either to exclude NBCR attacks from their coverage or charge additional prices for their coverage. A report from the American Academy of Actuaries, as well as representatives from AHIP, noted that harm from NBCR events could be widespread and persist for years, and in the years subsequent to the attack, it would be difficult to identify the source of the illness. According to representatives of one insurer, this also would make direct attribution of an expense to an NBCR attack difficult. 
Further, these representatives said that the ultimate costs of medical treatment would be unknown, because some factors, such as whether hospitals would remain open and whether sufficient vaccines would be available, were controlled by local public health responders. Finally, similar to life insurance, representatives of one health insurer told us they often lack information about the specific geographic locations of their insured populations, further limiting their ability to conduct risk-based pricing for events such as NBCR attacks. Representatives of the health insurance industry told us private reinsurance for their coverage of catastrophic events generally was very limited. AHIP representatives told us that catastrophic reinsurance for health insurers was in short supply, expensive, and generally focused on covering large costs incurred by individuals, rather than large costs incurred by groups of individuals potentially exposed to the same risks. Representatives from health insurers also said that reinsurance was costly, but they had not specifically sought out coverage for NBCR risks. As is the case for life insurers, health insurers do not have access to TRIA.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Ranking Minority Member of the House Financial Services Committee, other interested members of Congress, and NAIC. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at 202-512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Our objectives were to discuss (1) commonly accepted principles of insurability and whether nuclear, biological, chemical, and radiological (NBCR) risks are measurable and predictable and (2) whether private insurers currently are exposed to NBCR risks and the challenges they face in pricing such risks. As part of our review, we conducted interviews in California, New Jersey, New York, Massachusetts, and Washington, D.C. We conducted our review from February 2006 through September 2006 in accordance with generally accepted government auditing standards. To identify commonly accepted principles of insurability and whether NBCR risks are measurable and predictable, we reviewed standard insurance references to identify principles that underlie insurers' evaluations of the insurability of risks. We primarily relied upon Fundamentals of Risk and Insurance, but consulted additional references for consistency of explanation. To determine the insurability of NBCR risks, we applied these principles based on information we collected about the market for, and nature of, NBCR terrorism risks. To enhance our understanding of the market for NBCR insurance, and factors that insurers might consider when deciding whether to offer this insurance, we consulted insurance experts, including the American Academy of Actuaries, the Insurance Information Institute, and experts from academia, as well as a cross-section of insurers representing different lines of insurance. Moreover, we obtained information about how NBCR terrorist risks are measured and predicted from three firms that specialize in modeling terrorism and other catastrophic events for insurers (modeling firms). We chose these three firms because they are among the best known in the insurance industry. 
Representatives of these firms described the types of information they incorporate into their computer models, the methods they use to estimate the potential frequency and severity of terrorist attacks with NBCR weapons, and the reasons they believe their products are of assistance to the insurance industry. We did not evaluate the ability of the models to predict the frequency and severity of NBCR or other catastrophic risks. For additional perspective, we also obtained descriptions of the types of data available to model insured losses from natural disasters, such as hurricanes, from a modeling firm and presentations made at a catastrophe modeling conference. Finally, to obtain a broad understanding of the characteristics of NBCR weapons and the types of damage they could cause, we consulted several sources of information. We interviewed representatives from RAND, a nonprofit research organization with a focus on national security issues, and reviewed RAND publications. In addition, we interviewed representatives of the U.S. Department of Homeland Security and reviewed its reports. We also used information from our own reports to identify the characteristics of biological, chemical, and radiological weapons. To assess whether insurers are exposed to NBCR events, we identified lines of insurance that could be affected in the event of an NBCR terrorist attack: life, health, workers' compensation, commercial property/casualty, and homeowners insurance. Our information about insurer exposure in each of these lines came from multiple sources. 
For an overview of the market nationwide, we interviewed representatives of three of the largest commercial insurance brokers and national insurance trade organizations—the American Council of Life Insurers, representing life insurers; America's Health Insurance Plans, representing health insurers; the Property Casualty Insurance Association of America, representing property/casualty insurers; the Reinsurance Association of America, representing reinsurance companies; and the Association of Bermuda Insurers and Reinsurers, representing off-shore specialty insurers and reinsurers. In addition, we interviewed representatives from Independent Insurance Agents and Brokers of America, an association of independent insurance agents and insurance brokers nationwide. Information from these trade associations helped provide a broader context for information we obtained from individual insurers and reinsurers and gave us some perspective on exposure for small and medium-sized insurers, which we did not interview. To obtain information on specific insurers' exposure to NBCR risks, we interviewed knowledgeable representatives of a total of 12 insurers that write one or more of the lines of insurance addressed by our study. Although the time frames of our report only permitted us to obtain information from selected insurers, we believe that these insurers were knowledgeable based on their broad exposure for their respective lines of insurance nationwide and their knowledge of markets at higher risk for terrorism. To select insurers to interview, we obtained 2004 market share data based on direct written premiums from the Insurance Information Institute and Moody's, the most recent available data at the time of our review. Seven of the insurers we interviewed that provide coverage in the property/casualty, workers' compensation, life, and health insurance lines held a significant portion of the insurance industry's market share nationwide. 
In addition, we interviewed the state workers' compensation insurance funds for New York and California, which serve as insurers of last resort for employers that cannot find workers’ compensation coverage in the private market. Collectively, the private insurers and state funds held the following shares of the markets, by line of insurance: 16 percent of the commercial property/casualty insurance market, 34 percent of the homeowners insurance market, 38 percent of the workers’ compensation insurance market, 18 percent of the life insurance market, and 13 percent of the health insurance market. In addition, market shares for the private market insurers were among the highest in six states with localities considered by the Insurance Services Office (ISO)—a national organization that prepares insurance rate (price) recommendations and related policies for property/casualty insurers—to be at higher risk for terrorist events (including NBCR events). These insurers usually numbered among the top five insurance providers for their respective lines of insurance in these six states. Depending on the competitiveness of the state market for each insurance line, this market share generally represented anywhere from 2 to 30 percent of the local market. For commercial property/casualty insurance, we also interviewed three specialty insurers, recommended to us by insurance brokers. Specialty insurers are not regulated by state insurance departments but provide stand-alone terrorism insurance coverage that may or may not include NBCR risks. Finally, to learn more about the availability of NBCR reinsurance coverage, we interviewed representatives of three reinsurers that provide insurance for insurers in the commercial property/casualty market and the group life market, including one reinsurer that focuses its coverage on specific risks such as NBCR events. Two of the reinsurers, as measured by revenue, are among the top three reinsurers in the United States. 
To identify state requirements regarding NBCR coverage, we met with and received documentation from the National Association of Insurance Commissioners (NAIC) for a national regulatory perspective as well as insurance regulators in California, New York, and the District of Columbia for individual states’ regulations. We selected these states and the District of Columbia because they were among the jurisdictions that have localities considered at high risk for terrorist attacks. Representatives of NAIC were able to provide us with all of the states’ legal requirements for property/casualty insurers’ coverage of fire following events; however, NAIC did not collect information that would allow us to determine a state’s requirements for coverage of NBCR events in workers’ compensation, life, and health insurance. State regulators in California, New York, and the District of Columbia provided us with information about their requirements for NBCR coverage for life and health policies issued in their respective states. We gathered information on state workers’ compensation requirements for providing NBCR coverage and for pricing this coverage from the National Council on Compensation Insurance—representing 34 states including the District of Columbia—and from workers’ compensation rating boards and researchers in New York and California. In the time frames of our study, we could not review all of the state requirements for each of the lines of insurance included in our study. Therefore, for circumstances in which NAIC could not provide us specific state requirements, we relied on national trade associations or information provided by national insurance carriers, particularly for requirements for life and health insurance. To learn about permissible policy exclusions, we met with ISO and reviewed their standard policies (forms) for commercial property, general liability, and homeowners insurance, including terrorism endorsements. 
While individual insurer’s use of these forms may vary, ISO’s forms contain standard policy language. We identified language in these policies that could address issues related to NBCR events, including the nuclear hazard exclusion and the pollution exclusion. We also obtained information about factors that could affect the interpretation of ISO forms from insurers and insurance brokers. In addition, we identified examples of court cases involving disputes over language pertaining to the pollution exclusion in insurance contracts. Interviews with insurance experts and representatives of three major rating agencies provided additional perspective on insurer willingness to offer NBCR coverage. We selected insurance experts from academia based on their knowledge of insuring for catastrophes, including terrorist acts. We met with representatives of three rating agencies that provide ratings on insurers’ financial strength and abilities to meet ongoing obligations to policyholders. To learn more about supply and demand for NBCR insurance in the commercial property/casualty industry, we reviewed the U.S. Department of the Treasury’s (Treasury) 2005 “Report to Congress, Assessment: The Terrorism Risk Insurance Act of 2002” and discussed the findings with Treasury staff responsible for its contents. In this report, Treasury reports on results from a series of surveys of commercial property/casualty insurers and policyholders. One survey asked insurers whether they wrote coverage for terrorism risks and whether they wrote any policies that included coverage for any one of the NCBR risks. Another survey asked policyholders from a range of industries whether they purchased NBCR terrorism risk coverage and if not, asked them to identify the reasons. We were limited in our ability to use policyholders’ reported purchase rates for NBCR coverage as a signal for approximating overall demand because of the low response rates to these questions. 
Because a number of surveyed policyholders did not provide this information, there is a risk that those who did not respond differed from those who did, which could lead to bias in the survey results. To supplement Treasury’s data on demand for NBCR coverage in the commercial property/casualty insurance market, we reviewed surveys of the terrorism insurance market conducted by Marsh—a large insurance broker—in 2005 and 2006 as well as by Moody’s, a rating agency, in 2005. We also interviewed three risk managers from large companies who purchase commercial property/casualty insurance policies in the real estate, hospitality, and transportation industries, and interviewed representatives of two national associations representing a range of consumers and commercial businesses. The information from both the surveys and the interviews about the availability of NBCR coverage is limited to the specific brokerage clients and individual companies, and cannot be generalized to all policyholders in the United States. Nonresponse rates and other sources of potential error also may limit the use of data from these two surveys. Lawrence D. Cluff was the Assistant Director. In addition, Joseph A. Applebaum, Sonja J. Bensen, Katherine C. Bittinger, Carl Ramirez, Linda Rego, Barbara M. Roesmann, and Elizabeth Walat made key contributions to this report. Insurance Sector Preparedness: Insurers Appear Prepared to Recover Critical Operations Following Potential Terrorist Attacks, but Some Issues Warrant Further Review. GAO-06-85. Washington, D.C.: November 18, 2005. Catastrophe Risk: U.S. and European Approaches to Insure Natural Catastrophe and Terrorism Risks. GAO-05-199. Washington, D.C.: February 28, 2005. Terrorism Insurance: Effects of the Terrorism Risk Insurance Act of 2002. GAO-04-806T. Washington, D.C.: May 18, 2004. Terrorism Insurance: Effects of the Terrorism Risk Insurance Act of 2002. GAO-04-720T. Washington, D.C.: April 28, 2004. 
Terrorism Insurance: Implementation of the Terrorism Risk Insurance Act of 2002. GAO-04-307. Washington, D.C.: April 23, 2004. Catastrophe Insurance Risks: Status of Efforts to Securitize Natural Catastrophe and Terrorism Risk. GAO-03-1033. Washington, D.C.: September 24, 2003. Catastrophe Insurance Risks: The Role of Risk-Linked Securities and Factors Affecting Their Use. GAO-02-941. Washington, D.C.: September 24, 2002. Terrorism Insurance: Rising Uninsured Exposure to Attacks Heightens Potential Economic Vulnerabilities. GAO-02-472T. Washington, D.C.: February 27, 2002. Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers. GAO-02-199T. Washington, D.C.: October 24, 2001. Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers. GAO-02-175T. Washington, D.C.: October 24, 2001.
|
Terrorists using unconventional weapons, also known as nuclear, biological, chemical, or radiological (NBCR) weapons, could cause devastating losses. The Terrorism Risk Insurance Act (TRIA) of 2002, as well as the extension passed in 2005, will cover losses from a certified act of terrorism, irrespective of the weapon used, if those types of losses are included in the coverage. Because of a lack of information about the willingness of insurers to cover NBCR risks and uncertainties about the extent to which these risks can be and are being insured by private insurers across various lines of insurance, GAO was asked to study these issues. This report discusses (1) commonly accepted principles of insurability and whether NBCR risks are measurable and predictable, and (2) whether private insurers currently are exposed to NBCR risks and the challenges they face in pricing such risks. GAO collected information from, and met with, some of the largest insurers in each line of insurance, associations representing a broader cross section of the industry, and state insurance regulators. GAO makes no recommendations in this report. Insuring NBCR risks is distinctly different from insuring other risks because of the potential for catastrophic losses, a lack of understanding or knowledge about the long-term consequences, and a lack of historical experience with NBCR attacks in the United States. Measuring and predicting NBCR risks present distinct challenges to insurers because the characteristics of the risks largely diverge from commonly accepted principles used in determining insurability. According to these common principles, when assessing insurability, the risk generally must (1) have past occurrences sufficient in number and homogeneous enough (invoking the "law of large numbers") to enable insurers to accurately predict future losses, (2) be definite and measurable in terms of dollar value, (3) occur by chance, and (4) not result in catastrophic losses for the insurer. 
While the condition of insurability or uninsurability is not an absolute, NBCR risks generally fail to meet most or all of these principles of an insurable risk. Indeed, insurance experts GAO interviewed said that the potential severity of NBCR risks alone could diminish the willingness of some insurers to insure NBCR risks. Although NBCR risks may not fully satisfy the principles of insurability, there are enough variations in exposure across lines of insurance that some insurers, or insurers in some lines, may be unwilling to offer NBCR coverage at all, while others may choose to offer coverage for some or all of the risks. For example, even with TRIA, property/casualty insurers generally have attempted to limit their exposure to NBCR risks by excluding nearly all NBCR events from coverage, for both commercial property/casualty and homeowners policies. According to industry representatives, property/casualty insurers believe they have excluded NBCR coverage by interpreting existing exclusions in their policies to apply to NBCR risks, but some of the exclusions could be challenged in courts. Unlike property/casualty insurers, however, workers' compensation, life, and health insurers are exposed to NBCR risks and generally have not excluded them from coverage for a variety of reasons. Specifically, workers' compensation insurers generally offer NBCR coverage because many states limit the exclusion of perils for workers' compensation. Conversely, while life and health insurers may not always be required to insure NBCR risks, they generally face other challenges in segregating and excluding NBCR risks. However, representatives of workers' compensation, life, and health insurers expressed concerns that the prices they currently charge may not cover their potential exposures to NBCR risks, sometimes because of regulatory limitations, and generally because of difficulties in measuring and pricing for NBCR losses. 
Given the challenges faced by insurers in providing coverage for, and pricing, NBCR risks, any purely market-driven expansion of coverage is highly unlikely in the foreseeable future.
|
As a result of operations related to OIF, the Army continues to face an enormous challenge to reset its equipment. This is due to the increased usage of equipment, the pace of operations, and the amount of equipment to be reset. At the onset of operations in March 2003, the Army deployed with equipment that in some cases was already more than 20 years old. As of January 2007, the Army has about 25 percent of total on-hand wheeled and tracked vehicles and about 19 percent of rotary wing aircraft deployed to the OIF/Operation Enduring Freedom (OEF) theater, as shown in table 1. As we stated in our March 2006 testimony, the Army is operating this equipment at a pace well in excess of peacetime operations. The harsh operating environments in Iraq and environmental factors such as heat, sand, and dust have taken a toll on sensitive components. Troop levels and the duration of operations are also factors that affect equipment reset requirements. The Army defines reset as the repair, recapitalization, and replacement of equipment. Repairs can be made at the field level or national (depot) level. Army field level maintenance is intended to bring equipment back to the 10/20 series Technical Manual standard, is done by soldiers augmented by contractors, as required, and is usually performed at installations where the equipment is stationed. National level maintenance is work performed on equipment that exceeds field level reset capabilities. National level maintenance may be done at Army depots, by contractors, by installation maintenance activities, or a combination of the three, and is coordinated by the Army Materiel Command. The Army Chief of Staff testified in June 2006 that, as of that point in time, the Army had reset over 1,920 aircraft, 14,160 tracked vehicles, and 110,800 wheeled vehicles, as well as thousands of other items. He further stated that the Army expected to have placed about 290,000 major items in reset by the end of fiscal year 2006. 
Recapitalization includes rebuilding equipment, which could include extending service life, reducing operating and support costs, enhancing capability, and improving system reliability. The Army recapitalizes equipment either at Army Materiel Command depots or arsenals, at the original equipment manufacturer, or through a partnership of the two. Replacement includes buying new equipment to replace confirmed battle losses, washouts, obsolete equipment, and critical equipment deployed and left in theater but needed by reserve components for homeland defense/homeland security missions. Army reset funding includes ground and aviation equipment, combat losses, and prepositioned equipment. The Army funds field level and some depot level maintenance from the operation and maintenance (O&M) appropriations, while procurement appropriations fund most recapitalization and all procurement of new equipment as part of reset. The Army's fiscal year 2007 reset execution plan includes about 46 percent O&M funding and 54 percent procurement funding. Table 2 provides a breakdown of Army equipment reset execution plans for fiscal year 2007. Under the Army's framework for training and equipping units for deployments, known as the Army Force Generation Model (ARFORGEN), reset begins when units return from their deployments and concludes prior to a unit's being made available for subsequent missions. Reset is intended to be a demand-based process, focused on the operational requirements of the combatant commander, to rapidly return Army materiel to units preparing for subsequent operations in order to meet current and future combatant commander demands. Next-to-deploy units are identified and intended to receive first priority for distribution of equipment emerging from reset programs per the Army's Resource Priority List. 
The Army's fiscal year 2007 reset policy states that the primary driver in equipment reset operations is the rapid return of Army materiel to units preparing for subsequent operations as specified by the Army Resource Priority List (ARPL), a process that should lead to improved equipment readiness over time. To develop its fiscal year 2007 reset execution plan, the Army examined the types and quantities of equipment held by deployed units overseas and estimated what equipment it expected to return from overseas theaters to unit home stations or Army depots for reset. Depending on the required work, and whether upgrades and modernizations are planned, item-by-item determinations were made on what level of maintenance the equipment would receive as part of its reset. Due to the complexity and quantity of the maintenance required, some equipment items are automatically sent to one of the Army's depots. For example, returning Abrams tanks and Bradley Fighting Vehicles are automatically inducted into depot level reset programs due to the quantity and complexity of their reset maintenance. For each equipment item expected to return from overseas theaters for reset in a given fiscal year, the Army estimates a per unit cost of the planned reset activity and multiplies that cost by the number of items expected to return and be available for reset. The total Army reset funding requirement for a given fiscal year is determined by aggregating all of these costs to include all equipment expected to return from overseas theaters. The Army cannot track or report equipment reset expenditures in a way that confirms that funds appropriated for reset are expended for that purpose. In order to provide effective oversight of the Army's implementation of its equipment reset strategies and to plan for future reset initiatives, the Congress needs to be assured that the funds appropriated for reset are used as intended. 
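The funding-requirement estimate described above is essentially a per-item cost aggregation: an estimated per-unit reset cost times the number of units expected back, summed across all returning equipment items. A minimal sketch of that arithmetic follows; the item names, unit costs, and quantities are illustrative assumptions, not actual Army estimates.

```python
# Sketch of the per-item reset cost aggregation described above: for each
# equipment item expected to return from theater in a given fiscal year,
# multiply an estimated per-unit reset cost by the number of units expected
# back, then sum across all items. All figures here are hypothetical.

def reset_funding_requirement(planned_reset):
    """planned_reset: dict mapping item name -> (per_unit_cost, expected_returns)."""
    return sum(cost * qty for cost, qty in planned_reset.values())

# Illustrative inputs (not actual Army data):
planned_reset = {
    "HMMWV (depot recapitalization)": (60_000, 7_500),
    "Abrams tank (depot reset)": (1_500_000, 500),
}
total = reset_funding_requirement(planned_reset)
print(f"${total:,}")  # → $1,200,000,000
```

The aggregate, of course, is only as good as the two inputs: if fewer items return than projected, or per-unit costs are misestimated, the fiscal-year requirement shifts accordingly.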
The Army, however, is unable to confirm that the $38 billion that Congress has appropriated to the Army since fiscal year 2002 for equipment reset has been obligated and expended for reset. Because equipment reset was not a separate program within the budget, it was grouped together with other equipment-related line items in the O&M and Procurement accounts. The Conference Report accompanying the Department of Defense Appropriations Act for 2007 directed the Secretary of Defense to provide periodic reports to congressional defense committees that include a detailed accounting of obligations and expenditures of appropriations provided in Title IX of the act by program and subactivity group. According to the Conference Report, the conferees provided $17.1 billion in additional reset funding for the Army in Title IX. The Army has established a subactivity group for reset, and, according to Army officials, beginning in fiscal year 2007, the Army has begun to track reset obligations and expenditures by subactivity group. However, based on our analysis, the Army's reset tracking system does not provide sufficient detail to give Congress the visibility it needs to provide effective oversight. For example, the Army's tracking system compares what the Army has executed each month to its obligation plan at a macro level. Unlike the annual baseline budget requests, which include details within each subactivity group, the Army's O&M monthly reset report does not provide details of the types of equipment repaired. Likewise, the Procurement report does not itemize the types of equipment replaced or recapitalized. As a result, the Army is not in a position to tell Congress how it has expended the funds it has received to repair, replace, and recapitalize substantial amounts of damaged equipment. 
Because funds for reset are generally recorded in the same appropriation accounts as other funds that are included in the baseline budget, it is difficult to determine what is spent on reset and what is spent on routine equipment maintenance. In addition, because the Army has not historically tracked the execution of its reset appropriations, it does not have historical execution data. As we have previously reported, historical execution data would provide a basis for estimating future funding needs. The Congressional Budget Office has also recently testified that better estimates of future reset costs could be provided to Congress if more information were available on expenditures incurred to date. Without historical execution data, the Army must rely on assumptions and models based on its own interpretations of the definition of reset, and may be unable to submit accurate budget requests to obtain reset funding in the future. The Army cannot be assured its reset strategies will sustain equipment availability for deployed as well as non-deployed units while meeting ongoing operational requirements. The Army's primary objective for equipment reset is to equip its deployed forces and units preparing for deployment. However, the Army's reset strategy does not specifically target low levels of equipment on hand among units preparing for deployment. Furthermore, the Army's reset strategies do not ensure that the repairing, replacing, and modernizing of equipment needed to support units that are preparing for deployment are given priority over other longer-term equipment needs, such as equipment modernization in support of the Army's modularity initiative. The Army's reset strategies do not specifically target low levels of equipment on hand among units preparing for deployment in order to mitigate operational risk. The Army continues to be faced with increasing levels of operational risk due to low levels of equipment on hand among units preparing for deployment. 
According to the Army's fiscal year 2007 framework for reset and the Army's ARFORGEN implementation strategy, the primary goal of reset is to prepare units for deployment and to improve next-to-deploy units' equipment on hand levels. Units preparing for deployment are intended to attain a prescribed level of equipment on hand within forty-five days prior to their mission readiness exercise, which is intended to validate the unit's preparedness for its next deployment. However, since the Army's reset planning process is based on resetting the equipment that will be returning to the United States in a given fiscal year, and not on an aggregate equipment requirement to improve the equipment on hand levels of deploying units, the Army cannot be assured that its reset programs will provide sufficient equipment to train and equip deploying units for ongoing and future GWOT requirements, which may lead to increasing levels of operational risk. As of fiscal year 2007, Army officials stated they have begun to track the equipment readiness of returning units and units approaching deployment dates in an effort to assess the effectiveness of their reset efforts. To do this, Army leaders plan to examine the equipment serviceability of units that recently returned from deployment and are resetting and the equipment on hand for units preparing to deploy. However, these readiness indicators, such as equipment on hand and equipment serviceability, are of limited value in assessing the effectiveness of reset. For example, equipment on hand measures required levels of equipment against the unit's primary mission, which may be much different than the unit's directed GWOT mission. In addition, a unit's equipment serviceability ratings may be reported as acceptable even if equipment on hand levels are very low. For example, the Army plans to induct 7,500 High Mobility Multipurpose Wheeled Vehicles (HMMWV) into depot level recapitalization programs in 2007 at a cost of $455 million. 
The Army intends to use these HMMWVs to fill gaps in the Army's force structure to allow units to train and perform homeland security missions. However, according to Army officials, the HMMWVs that emerge from this recapitalization program will not be suitable for use in the OIF theater because they will not be armored and, thus, will not provide protection from sniper fire and mine blasts. The unarmored M1097R1 HMMWVs will not offer the same level of force protection as the M1114 up-armored HMMWV and do not have the M1114's rooftop weapons station. According to Army officials, only fully armored HMMWVs are being deployed to the OIF theater. While the Army's HMMWV recapitalization activities may raise overall HMMWV equipment on hand levels of non-deployed units in the United States, they will not directly provide HMMWVs to equip units deploying for OIF missions or allow those units to train on vehicles similar to those they would use while deployed. According to November 2006 Army readiness data, deployed units and units preparing for deployment report low levels of equipment on hand, as well as specific equipment item shortfalls that affect their ability to carry out their missions. Army unit commanders preparing for deployments may subjectively upgrade their units' overall readiness levels, which may result in masking the magnitude of equipment shortfalls. Since 2003, deploying units have continued to subjectively upgrade their overall readiness as they approach their deployment dates, despite decreasing overall readiness levels among those same units. This trend is one indicator of the increasing need for Army leaders to carefully balance short-term investments as part of reset to ensure overall readiness levels remain acceptable to sustain current global requirements. 
Until this is done, the Army cannot be assured that their plans will achieve the stated purpose of their reset strategy for 2007, or in future years, to restore the capability of the Army to meet current and future operational demands. The Army’s reset strategies do not ensure that the repairing, replacing, and modernizing of equipment needed to support units that are preparing for deployment are given priority over other longer-term equipment needs, such as equipment modernization in support of the Army’s modularity initiative. Army reset strategies are primarily intended to be based on plans for repairing, recapitalizing, or replacing equipment returning from overseas theaters in a given fiscal year. However, in addition to meeting these short term requirements, the Army’s reset strategy has included funding requests for certain items to accelerate achieving longer-term strategic goals under the Army’s modularity initiative. For example, in addition to the planned fiscal year 2007 national level reset of almost 500 tanks and more than 300 Bradleys expected to return from the OIF theater, the Army also intends to spend approximately $2.4 billion in fiscal year 2007 reset funds to take more than 400 Abrams tanks and more than 500 Bradley Fighting Vehicles from long-term storage or from units that have already received modernized Bradleys for depot level upgrades. These recapitalizations will allow the Army to accelerate their progress in achieving a modular force structure by providing modernized Abrams and Bradley vehicles to several major combat units 1 or 2 years ahead of schedule. The Army believes achieving these modularity milestones for Abrams tanks and Bradley Fighting Vehicles will achieve greater commonality in platforms that will enable force generation efforts and reduce overall logistical and financial requirements by reducing the number of variants that must be supported. 
Since fiscal year 2002, Congress has appropriated approximately $38 billion for Army equipment reset. In addition, the Army estimates that future funding requirements for equipment reset will be about $12 to $13 billion per year for the foreseeable future. To ensure that these funds are appropriately used for the purposes intended and to provide the Congress with the necessary information it needs to provide effective oversight, the Army will need to be able to track and report the obligation and expenditure of these funds at a more detailed level than they have in the past. We do not believe that the reporting format the Army developed for tracking and reporting this data for fiscal year 2007 is sufficiently detailed to provide Congress with the visibility it needs to provide effective oversight. Also, the Army’s reset strategies need to ensure that priority is given to repairing, replacing, and modernizing the equipment that is needed to equip units preparing for deployment. The current low levels of equipment on hand for units that are preparing for deployment could potentially decrease overall force readiness if equipment availability shortages are not filled prior to these units’ deployments. Lastly, as the Army moves forward with equipment reset, it will need to establish more transparent linkages among the objectives of its reset strategies, the funds requested for reset, the obligation and expenditure of appropriated reset funds, and equipment requirements and related reset priorities. Mr. Chairmen, this concludes my statement. I would be happy to answer any questions. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Continuing military operations in Iraq and Afghanistan are taking a heavy toll on the condition and readiness of the Army's equipment. Harsh combat and environmental conditions in theater over sustain periods exacerbates the wear and tear on equipment. Since fiscal year 2002, Congress has appropriated about $38 billion to the Army for the reset (repair, replacement, and modernization) of equipment that has been damaged or lost as a result of combat operations. As operations continue in Iraq and Afghanistan and the Army's equipment reset requirements increase, the potential for reset costs to significantly increase in future Department of Defense annual budgets also increases. For example, the Army estimates that it will need about $12 billion to $13 billion per year for equipment reset until operations cease, and up to two years thereafter. Today's testimony addresses (1) the extent to which the Army can track and report equipment reset expenditures in a way that confirms that funds appropriated for reset are expended for that purpose, and (2) whether the Army can be assured that its equipment reset strategies will sustain future equipment readiness for deployed as well as non-deployed units while meeting ongoing requirements. GAO's preliminary observations are based on audit work performed from November 2005 through December 2006. The Army cannot track or report equipment reset expenditures in a way that confirms that funds appropriated for reset are expended for that purpose. In order to provide effective oversight of the Army's implementation of its equipment reset strategies and to plan for future reset initiatives, the Congress needs to be assured that the funds appropriated for reset are used as intended. The Army, however, is unable to confirm that the $38 billion that Congress has appropriated to the Army since fiscal year 2002 for equipment reset has been obligated and expended for reset. 
Because equipment reset had not been identified as a separate program within the budget, it was grouped together with other equipment-related line items in the O&M and Procurement appropriations. With the enactment of the Fiscal Year 2007 Appropriations Act, Congress directed DOD to provide a detailed accounting of obligations and expenditures by program and subactivity group. The Army has established a subactivity group for reset, and, according to Army officials, beginning in fiscal year 2007, the Army has begun to track reset obligations and expenditures by subactivity group. However, based on our analysis, the Army's reset tracking system does not provide sufficient detail to provide Congress with the visibility it needs to provide effective oversight. The Army cannot be assured its reset strategies will sustain equipment availability for deployed as well as non-deployed units while meeting ongoing operational requirements. The Army's primary objective for equipment reset is to equip units preparing for deployment. However, the Army's reset strategy does not specifically target low levels of equipment on hand among units preparing for deployment. Although deployed Army units generally report high readiness rates, the Army continues to be faced with increasing levels of operational risk due to low levels of equipment on hand among units preparing for deployment. According to the Army's fiscal year 2007 framework for reset and the Army's Force Generation model implementation strategy, the goal of reset is to prepare units for deployment and to improve next-to-deploy unit's equipment on hand levels. 
However, since the Army's current reset planning process is based on resetting equipment that it expects will be returning to the United States in a given fiscal year, and not based on an aggregate equipment requirement to improve the equipment on hand levels of deploying units, the Army cannot be assured that its reset programs will provide sufficient equipment to train and equip deploying units for ongoing and future requirements for the Global War on Terrorism. The Army has recently begun to track the equipment readiness of returning units and units approaching deployment in an effort to assess the effectiveness of their reset efforts. However, these readiness indicators are of limited value in assessing the effectiveness of reset because they do not measure the equipment on hand levels against the equipment that the units actually require to accomplish their directed missions in Iraq and Afghanistan.
|
The Homeland Security Act of 2002 combined 22 federal agencies specializing in various missions under DHS. Numerous departmental offices and seven key operating components are headquartered in the NCR.components were not physically consolidated, but instead were dispersed across the NCR in accordance with their history. As of July 2014, DHS employees were located in 94 buildings and 50 locations, accounting for approximately 9 million gross square feet of government-owned and - leased office space. When DHS was formed, the headquarters functions of its various DHS began planning the consolidation of its headquarters in 2005. According to DHS, increased colocation and consolidation were critical to (1) improve mission effectiveness, (2) create a unified DHS organization, (3) increase organizational efficiency, (4) size the real estate portfolio accurately to fit the mission of DHS, and (5) reduce real estate occupancy costs. Between 2006 and 2009, DHS and GSA developed a number of capital planning documents to guide the DHS headquarters consolidation process. For example, DHS’s National Capital Region Housing Master Plan identified a requirement for approximately 4.5 million square feet of office space on a secure campus. In addition, DHS’s 2007 Consolidated Headquarters Collocation Plan summarized component functional requirements and the projected number of seats needed on- and off- campus for NCR headquarters personnel. From fiscal year 2006 through fiscal year 2014, the St. Elizabeths consolidation project had received $494.8 million through DHS appropriations and $1.1 billion through GSA appropriations, for a total of over $1.5 billion. However, from fiscal year 2009—when construction began—through the time of the fiscal year 2014 appropriation, the gap between requested and received funding was over $1.6 billion. According to DHS and GSA officials, this gap created cost escalations of over $1 billion and schedule delays of over 10 years. 
In our September 2014 report, we found that DHS and GSA planning for the DHS headquarters consolidation did not fully conform with leading capital decision-making practices intended to help agencies effectively plan and procure assets. Specifically, we found that DHS and GSA had not conducted a comprehensive assessment of current needs, identified capability gaps, or evaluated and prioritized alternatives that would help officials adapt consolidation plans to changing conditions and address funding issues as reflected in leading practices. DHS and GSA officials reported that they had taken some initial actions that may facilitate consolidation planning in a manner consistent with leading practices. For example, DHS has an overall goal of reducing the square footage allotted per employee across the department in accordance with current workplace standards, such as standards for telework and hoteling.and GSA officials acknowledged that new workplace standards could create a number of new development options to consider, as the new standards would allow for more staff to occupy the current space at St. Elizabeths than previously anticipated. DHS and GSA officials also reported analyzing different leasing options that could affect consolidation efforts. However, we found that the consolidation plans, which were finalized between 2006 and 2009, had not been updated to reflect these actions. GAO/AIMD-99-32 and OMB Capital Programming Guide. our September 2014 report, we recommended that DHS and GSA conduct (1) a comprehensive needs assessment and gap analysis of current and needed capabilities that takes into consideration changing conditions, and (2) an alternatives analysis that identifies the costs and benefits of leasing and construction alternatives for the remainder of the project and prioritizes options to account for funding instability. DHS and GSA concurred with these recommendations and stated that their forthcoming draft St. 
Elizabeths Enhanced Consolidation Plan would contain these analyses. Finally, we found that DHS had not consistently applied its major acquisition guidance for reviewing and approving the headquarters consolidation project. Specifically, we found that DHS had guidelines in place to provide senior management the opportunity to review and approve its major projects, but DHS had not consistently applied these guidelines to its efforts to work with GSA to plan and implement headquarters consolidation. DHS had designated the headquarters consolidation project as a major acquisition in some years but not in others. In 2010 and 2011, DHS identified the headquarters consolidation project as a major acquisition and included the project on DHS’s Major Acquisitions Oversight List. Thus, the project was subject to the oversight and management policies and procedures established in DHS major acquisition guidance; however, the project did not comply with major acquisition requirements as outlined by DHS guidelines. For example, we found that the project had not produced any of the required key acquisition documents requiring department-level approval, such as life-cycle cost estimates and an acquisition program baseline, among others. In 2012, the project as a whole was dropped from the list. In 2013 and 2014, DHS included the information technology (IT) acquisition portion of the project on the list, but not the entire project. DHS officials explained that they considered the St. Elizabeths project to be more of a GSA acquisition than a DHS acquisition because GSA owns the site and the majority of building construction is funded through GSA appropriations. We recognize that GSA has responsibility for managing contracts associated with the headquarters consolidation project. 
However, a variety of factors, including the overall cost, scope, and visibility of the project, as well as the overall importance of the project in the context of DHS’s mission, make the consolidation project a viable candidate for consideration as a major acquisition. By not consistently applying this review process to headquarters consolidation, we concluded that DHS management risked losing insight into the progress of the St. Elizabeths project, as well as how the project fits in with its overall acquisitions portfolio. Thus, in our September 2014 report, we recommended that the Secretary of Homeland Security designate the headquarters consolidation program a major acquisition, consistent with DHS acquisition policy, and apply DHS acquisition policy requirements. DHS concurred with the recommendation. In our September 2014 report, we found that DHS and GSA cost and schedule estimates for the headquarters consolidation project at St. Elizabeths did not conform or only minimally or partially conformed with leading estimating practices, and were therefore unreliable. Furthermore, we found that in some areas, the cost and schedule estimates did not fully conform with GSA guidance relevant to developing estimates. We found that DHS and GSA cost estimates for the headquarters consolidation project at St. Elizabeths did not reflect leading practices, which rendered the estimates unreliable. For example, we found that the 2013 cost estimate—the most recent available—did not include (1) a life- cycle cost analysis of the project, including the cost of repair, operations, and maintenance; (2) was not regularly updated to reflect significant changes to the program including actual costs; and (3) did not include an independent estimate to assist in tracking the budget. In addition, a sensitivity analysis had not been performed to assess the reasonableness of the cost estimate. 
We have previously reported that a reliable cost estimate is critical to the success of any program. Specifically, we have found that such an estimate provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction when warranted, and accountability for results. Accordingly, we concluded that DHS and GSA would benefit from maintaining current and well- documented estimates of project costs at St. Elizabeths—even if project funding is not fully secured—and these estimates should encompass the full life cycle of the program and be independently assessed. In addition, we found that the 2008 and 2013 schedule estimates did not include all activities for both the government and its contractors necessary to accomplish the project’s objectives and did not include schedule baseline documents to help measure performance as reflected in leading practices and GSA guidance. For the 2008 schedule estimate, we also found that resources (such as labor, materials, and equipment) were not accounted for and a risk assessment had not been conducted to predict a level of confidence in the project’s completion date. In addition, we found the 2013 schedule estimate was unreliable because, among other things, it was incomplete in that it did not provide details needed to understand the sequence of events, including work to be performed in fiscal years 2014 and 2015. We concluded that developing cost and schedule estimates consistent with leading practices could promote greater transparency and provide decision makers needed information about the St. Elizabeths project and the larger DHS headquarters consolidation effort. However, in commenting on our analysis of St. Elizabeths cost and schedule estimates, DHS and GSA officials said that it would be difficult or impossible to create reliable estimates that encompass the scope of the entire St. Elizabeths project. 
Officials said that given the complex, multiphase nature of the overall development effort, specific estimates are created for smaller individual projects, but not for the campus project as a whole. Therefore, in their view, leading estimating practices and GSA guidance cannot reasonably be applied to the high-level projections developed for the total cost and completion date of the entire St. Elizabeths project. GSA stated that the higher-level, milestone schedule currently being used to manage the program is more flexible than the detailed schedule GAO proposes, and has proven effective even with the highly variable funding provided for the project. We found in our September 2014 report, however, that this high-level schedule was not sufficiently defined to effectively manage the program. For example, our review showed that the schedule did not contain detailed schedule activities that include current government, contractor, and applicable subcontractor effort. Specifically, the activities shown in the schedule only address high-level agency square footage segments, security, utilities, landscape, and road improvements. While we understand the need to keep future effort contained in high-level planning packages, in accordance with leading practices, near-term work occurring in fiscal years 2014 and 2015 should have more detailed information. We recognize the challenges of developing reliable cost and schedule estimates for a large-scale, multiphase project like St. Elizabeths, particularly given its unstable funding history and that incorporating GAO’s cost- and schedule-estimating leading practices may involve additional costs. However, unless DHS and GSA invest in these practices, Congress risks making funding decisions and DHS and GSA management risk making resource allocation decisions without the benefit that a robust analysis of levels of risk, uncertainty, and confidence provides. 
As a result, in our September 2014 report, we recommended that, after revising the DHS headquarters consolidation plans, DHS and GSA develop revised cost and schedule estimates for the remaining portions of the consolidation project that conform to GSA guidance and leading practices for cost and schedule estimation, including an independent evaluation of the estimates. DHS and GSA concurred with the recommendation. In our September 2014 report, we also stated that Congress should consider making future funding for the St. Elizabeths project contingent upon DHS and GSA developing a revised headquarters consolidation plan, for the remainder of the project, that conforms with leading practices and that (1) recognizes changes in workplace standards, (2) identifies which components are to be colocated at St. Elizabeths and in leased and owned space throughout the NCR, and (3) develops and provides reliable cost and schedule estimates. Mr. Chairman and members of the Subcommittee, this concludes my prepared statement. I look forward to responding to any questions that you may have. For questions about this statement, please contact David C. Maurer, Director, Homeland Security and Justice Issues, (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include David J. Wise (Director), Adam Hoffman (Assistant Director), John Mortin (Assistant Director), Karen Richey (Assistant Director), Juana Collymore, Daniel Hoy, Tracey King, Abishek Krupanand, Jennifer Leotta, David Lutter, and Jan Montgomery. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
This testimony summarizes the information contained in GAO's September 2014 report, entitled: Federal Real Property: DHS and GSA Need to Strengthen the Management of DHS Headquarters Consolidation ( GAO-14-648 ). The Department of Homeland Security (DHS) and General Services Administration (GSA) planning for the DHS headquarters consolidation does not fully conform with leading capital decision-making practices intended to help agencies effectively plan and procure assets. DHS and GSA officials reported that they have taken some initial actions that may facilitate consolidation planning in a manner consistent with leading practices, such as adopting recent workplace standards at the department level and assessing DHS's leasing portfolio. For example, DHS has an overall goal of reducing the square footage allotted per employee across DHS in accordance with current workplace standards. Officials acknowledged that this could allow more staff to occupy less space than when the campus was initially planned in 2009. DHS and GSA officials also reported analyzing different leasing options that could affect consolidation efforts. However, consolidation plans, which were finalized between 2006 and 2009, have not been updated to reflect these changes. According to DHS and GSA officials, the funding gap between what was requested and what was received from fiscal years 2009 through 2014, was over $1.6 billion. According to these officials, this gap has escalated estimated costs by over $1 billion--from $3.3 billion to the current $4.5 billion--and delayed scheduled completion by over 10 years, from an original completion date of 2015 to the current estimate of 2026. However, DHS and GSA have not conducted a comprehensive assessment of current needs, identified capability gaps, or evaluated and prioritized alternatives to help them adapt consolidation plans to changing conditions and address funding issues as reflected in leading practices. 
DHS and GSA reported that they have begun to work together to consider changes to their plans, but as of August 2014, they had not announced when new plans will be issued and whether they would fully conform to leading capital decision-making practices to help plan project implementation. DHS and GSA did not follow relevant GSA guidance and GAO's leading practices when developing the cost and schedule estimates for the St. Elizabeths project, and the estimates are unreliable. For example, GAO found that the 2013 cost estimate--the most recent available--does not include a life-cycle cost analysis of the project, including the cost of operations and maintenance; was not regularly updated to reflect significant program changes, including actual costs; and does not include an independent estimate to help track the budget, as required by GSA guidance. Also, the 2008 and 2013 schedule estimates do not include all activities for the government and its contractors needed to accomplish project objectives. GAO's comparison of the cost and schedule estimates with leading practices identified the same concerns, as well as others. For example, a sensitivity analysis has not been performed to assess the reasonableness of the cost estimate. For the 2008 and 2013 schedule estimates, resources (such as labor and materials) are not accounted for and a risk assessment has not been conducted to predict a level of confidence in the project's completion date. Because DHS and GSA project cost and schedule estimates inform Congress's funding decisions and affect the agencies' abilities to effectively allocate resources, there is a risk that funding decisions and resource allocations could be made based on information that is not reliable or is out of date.
|
Increasing computer interconnectivity—most notably growth in the use of the Internet—has revolutionized the way that our government, our nation, and much of the world communicate and conduct business. While this interconnectivity offers us huge benefits, without proper safeguards it also poses significant risks to the government’s computer systems and, more importantly, to the critical operations and infrastructures they support. We reported in 2005 that while federal agencies showed improvement in addressing information security, they also continued to have significant control weaknesses in federal computer systems that put federal operations and assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at the risk of disruption. The significance of these weaknesses led us to conclude in the audit of the federal government’s fiscal year 2005 financial statements that information security was a material weakness. Our audits also identified instances of similar types of weaknesses in non-financial systems. Enacted into law on December 17, 2002, as title III of the E- Government Act of 2002, FISMA authorized and strengthened information security program, evaluation, and reporting requirements. The Act assigns specific responsibilities to agency heads, chief information officers, and IGs. It also assigns responsibilities to OMB, which include developing and overseeing the implementation of policies, principles, standards, and guidelin on information security and reviewing at least annually, and approving or disapproving, agency information security programs. Overall, FISMA requires each agency (including agencies with national security systems) to develop, document, and implement an agencywide information security program. 
This program should provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, this program is to include ● periodic assessments of the risk and magnitude of harm tha se, disclosure, could result from the unauthorized access, u disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively red information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system, including minimally acceptable system configuration requirements; uce subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action information security policies, procedures, and practice agency; to address any deficiencies in the procedures for detecting, reporting, and responding to security incidents; and ● plans and procedures to ensure continuity of operations for information systems that support the operations and assets of th agency. 
FISMA also established a requirement that each agency develop, maintain, and annually update an inventory of major information systems (including major national security systems) that are operated by the agency or under its control. This inventory is to include an identification of the interfaces between each system an all other systems or networks, including those not operated by o r under the control of the agency. Each agency is also required to have an annual independent evaluation of its information security program and practices, including control testing and compliance assessment. Evaluations of non-national security systems are to be performed by the agency IG or by an independent external auditor, while evaluations related to national security systems are to be performed only by an entity designated by the agency head. The agencies are to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of information security policies, procedures, practices, and compliance with FISMA requirements. In addition, agency heads are required to make annual reports of the results of their independent evaluations to OMB. OMB must submit a report to Congress no later than March 1 of each year on agency compliance, including a summary of the findings of agencies’ independent evaluations. Other major provisions direct that the National Institute of Standards and Technology (NIST) develop, for systems other than national security systems: (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. 
NIST must also develop a definition of and guidelines concerning detection and handling of information security incidents and guidelines. OMB provides instructions to the agencies and their IGs on the annual FISMA reporting requirements. OMB’s fiscal year 2005 reporting instructions, similar to the 2004 instructions, ha focus on performance measures. OMB has developed performan measures in the following areas: testing of security controls, ● agency systems and contractor systems reviewed annually, ● testing of contingency plans, incident reporting, ● annual security awareness training for employees and contractors, ● annual specialized training for ● minimally acceptable configuration requirements. Further, OMB has provided instructions for continued agency reporting on the statu action and milestones. Required for all programs an a weaknesses and show estimated resource needs or other challe to resolving them, key milestones and completion dates, and th status of corrective actions. The plans are to be submitted twice a year to OMB. In addition, agencies are to submit quarterly up dates that indicate the number of weaknesses for which corrective action has been completed as originally scheduled, or has been delayed, as well as the number of new weaknesses discovered since the last update. s of remediation efforts through plans of n IT security weakness has been found, these plans list the The annual IGs’ reports requested by OMB are to be based on the results of their independent evaluations, including work performed throughout the reporting period (such as work performed as part of the annual financial audits of the agencies). While OMB asked the IGs to respond to some of the same questions as the agencies, it al asked them to assess whether their agency had developed, implemented, and was managing an agencywide plan of actions and milestones. 
Further, OMB asked the IGs to assess the quality of the certification and accreditation process at their agencies, as well as the status of their agency’s inventory of major information systems. OMB did not request that the IGs validate agency responses to the performance measures. Instead, as part of their independent evaluations of a subset of agency systems, IGs were asked to assess the reliability of the data for those systems that they evaluated. In its March 2006 report to Congress on fiscal year 2005 FISMA implementation, OMB emphasized that the federal government has made progress in meeting key performance measures for IT security; however, uneven implementation of security efforts leaves weaknesses in several areas. OMB determined through its assessment of FISMA reports that advances have occurred at a governmentwide level in the following areas of IT security: ● Systems certification and accreditation. Agencies recorded a 19 percent increase in the total number of IT systems and reported that the percentage of certified and accredited systems rose from 77 percent in fiscal year 2004 to 85 percent in 2005. Moreover, OMB noted that 88 percent of systems assessed as high-risk have been certified and accredited. ● Assessed quality of the certification and accreditation process. OMB’s analysis of reports from the IGs revealed an increase in agencies with a certification process rated as “satisfactory” or higher, from 15 in 2004 to 17 in 2005. ● Plans of action and milestone process. OMB noted that out of 25 agencies that it reviewed in detail, 19 IGs report that their agencies have effective remediation processes, compared to 18 in 2004. In addition to these areas of improvement, OMB detected areas with continuing weaknesses: ● Contractor systems oversight. 
IGs for 6 of 24 agencies (one agency IG did not respond) rated agency oversight of contractor systems in the “rarely” range, while 3 others rated this oversight in the next lowest range, “sometimes.” ● Security controls testing. Agencies tested the security controls on a lower percentage of systems, dropping from 76 percent in fiscal year 2004 to 72 percent in 2005. OMB noted a better rate of testing for high-risk systems, with a governmentwide total of 83 percent. ● Incident reporting. OMB stated that some agencies continue to report security incidents to the Department of Homeland Security only sporadically and that others report notably low levels of incidents. ● Agencywide plans of action and milestones. While IGs for 19 agencies reported effective POA&M processes, 6 others reported ineffective processes. ● Certification and accreditation process. OMB commented that while no IG rated the certification and accreditation process for its agency as failing, eight rated the process as “poor.” The OMB report also discusses a plan of action to improve performance, assist agencies in their information security activities, and promote compliance with statutory and policy requirements. OMB has set a goal for agencies to have 90 percent of their systems certified and accredited and their certification and accreditation process rated as “satisfactory” or better by their IGs. In their FISMA-mandated reports for fiscal year 2005, the 24 major agencies reported both improvements and weaknesses in major performance indicators. 
The following key measures showed increased performance and/or continuing challenges: ● percentage of systems certified and accredited; ● percentage of agencies with agencywide minimally acceptable configuration requirements; ● percentage of agency systems reviewed annually; ● percentage of contractor systems reviewed annually; ● percentage of employees and contractors receiving annual security awareness training; ● percentage of employees with significant security responsibilities receiving specialized security training annually; and ● percentage of contingency plans tested. Figure 1 illustrates that the major agencies have made steady progress in fiscal year 2005 certifying and accrediting their systems, although they have made mixed progress in meeting other key performance measures compared with the previous two fiscal years. Summaries of the results for specific measures follow. Included in OMB’s policy for federal information security is a requirement that agency management officials formally authorize their information systems to process information and, thereby, accept the risk associated with their operation. This management authorization (accreditation) is to be supported by a formal technical evaluation (certification) of the management, operational, and technical controls established in an information system’s security plan. For FISMA reporting, OMB requires agencies to report the number of systems authorized for processing after completing certification and accreditation. FISMA requires each agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. In fiscal year 2004, for the first time, agencies reported on the degree to which they had security configurations for specific operating systems and software applications.
Our analysis of the 2005 agency FISMA reports found that all 24 major agencies reported that they had agencywide policies containing system configurations, an increase from the 20 agencies that reported having them in 2004. However, implementation of these requirements at the system level continues to be uneven. Specifically, 14 agencies reported having configuration policies, but they did not always implement them on their systems. FISMA requires that agency information security programs include periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency that depends on risk, but no less than annually. This effort is to include testing of management, operational, and technical controls of every information system identified in the FISMA-required inventory of major systems. Periodically evaluating the effectiveness of security policies and controls and acting to address any identified weaknesses are fundamental activities that allow an organization to manage its information security risks cost-effectively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. In order to measure the performance of security programs, OMB requires that agencies report the number and percentage of systems that they have reviewed during the year. Agencies reported a decrease in the percentage of their systems that underwent an annual review in 2005, after reporting major gains in this performance measure in 2004. In the 2005 reports, agencies stated that 84 percent of their systems had been reviewed in the last year, as compared to 96 percent in 2004. While 23 agencies reported that they had reviewed 90 percent or more of their systems in 2004, 19 agencies reported this achievement in 2005, as shown in figure 3.
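The annual review measure just described is a simple ratio: systems reviewed during the year divided by total systems, tallied per agency and governmentwide. A minimal sketch of that arithmetic, using hypothetical agency names and counts rather than actual FISMA data:

```python
# Sketch of the annual-review performance measure: percentage of
# systems reviewed during the year. All figures are hypothetical.
agencies = {
    "Agency A": {"systems": 200, "reviewed": 190},
    "Agency B": {"systems": 50,  "reviewed": 30},
    "Agency C": {"systems": 120, "reviewed": 110},
}

total_systems = sum(a["systems"] for a in agencies.values())
total_reviewed = sum(a["reviewed"] for a in agencies.values())
pct_reviewed = 100 * total_reviewed / total_systems  # governmentwide rate

# Agencies at or above the 90-percent level noted in the reporting
meets_90 = sorted(name for name, a in agencies.items()
                  if a["reviewed"] / a["systems"] >= 0.9)
```

The same tally, broken out by system risk level instead of by agency, is what OMB's later risk-based reporting asks for.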
Under FISMA, agency information security programs cover information maintained by or on behalf of the agency and information systems used or operated by an agency or by a contractor. As OMB emphasized in its fiscal year 2005 FISMA reporting guidance, agency IT security programs apply to all organizations that possess or use federal information or that operate, use, or have access to federal information systems on behalf of a federal agency. Such other organizations may include contractors, grantees, state and local governments, and industry partners. According to longstanding OMB policy concerning sharing government information and interconnecting systems, federal security requirements continue to apply, and the agency is responsible for ensuring appropriate security controls. The key performance measure of annual review of contractor systems by agencies decreased from 83 percent in 2004 to 74 percent in 2005, reducing the rate of reviews performed to below 2003 levels. However, the number of agencies that reported reviewing over 90 percent of their contractor systems has increased from 10 in 2004 to 17 in 2005. A breakdown of the percentages for fiscal year 2005 is provided in figure 4. Although agencies reported that 74 percent of their contractor systems were reviewed in 2005, they only reviewed 51 percent of the contractor systems assessed as high-risk, as opposed to 89 percent of moderate-risk systems and 84 percent of low-risk systems. Without adequate contractor review, agencies cannot be assured that federal information held and processed by contractors is secure. FISMA requires agencies to provide security awareness training. This training should inform personnel, including contractors and other users of information systems supporting the operations and assets of an agency, of information security risks associated with their activities and of their responsibilities in complying with agency policies and procedures designed to reduce these risks.
Our studies of best practices at leading organizations have shown that such organizations took steps to ensure that personnel involved in various aspects of information security programs had the skills and knowledge they needed. Under FISMA, agencies are required to provide training in information security to personnel with significant security responsibilities. As previously noted, our study of best practices at leading organizations has shown that such organizations recognized that staff expertise needed to be updated frequently to keep security employees current on changes in threats, vulnerabilities, software, technologies, security techniques, and security monitoring tools. OMB directs agencies to report on the percentage of their employees with significant security responsibilities who have received specialized training. Agencies reported varying levels of compliance in providing specialized training to employees with significant security responsibilities. Of the 24 agencies that we reviewed, 12 reported that they had provided specialized security training for 90 percent or more of these employees (see fig. 6). Although there was a gain of one percentage point in the share of employees who received specialized security training for fiscal year 2005 (82 percent) over 2004 (81 percent), both of these years show a decrease from the level reported in 2003 (85 percent). Given the rapidly changing threats in information security, agencies need to keep their IT security employees up to date on changes in technology. Otherwise, agencies may face increased risk of security breaches. Contingency plans provide specific instructions for restoring critical systems, including such elements as arrangements for alternative processing facilities in case the usual facilities are significantly damaged or cannot be accessed due to unexpected events such as a temporary power failure, the accidental loss of files, or a major disaster.
It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. The testing of contingency plans is essential to determining whether the plans will function as intended in an emergency, and the frequency of plan testing will vary depending on the criticality of the entity’s operations. The most useful tests involve simulating a disaster to test overall service continuity. Such a test includes testing whether the alternative data processing site will function as intended and whether critical computer data and programs to be recovered from off-site storage will be accessible and current. In executing the plan, managers are able to identify weaknesses and make changes accordingly. Moreover, such tests assess how well employees have been trained to carry out their roles and responsibilities during a disaster. To show the status of implementing this requirement, OMB specifies that agencies report the number of systems with tested contingency plans. Overall, agencies continued to report that they have not tested a significant number of their contingency plans, with only 61 percent of systems having tested plans. Although this number continues to show small increases each year since 2003, figure 7 illustrates that 5 agencies reported that fewer than 50 percent of their systems had tested contingency plans. In addition, agencies do not appear to be appropriately prioritizing testing of contingency plans by system risk level, with high-risk systems having the lowest rate of systems with tested plans of the three risk levels. Without testing, agencies can have limited assurance that they will be able to recover mission critical applications, business processes, and information in the event of an unexpected interruption. FISMA requires that agencies develop, maintain, and annually update an inventory of major information systems operated by the agency or under its control.
The total number of agency systems is a key element in OMB’s performance measures, in that agency progress is indicated by the percentage of total systems that meet specific information security requirements. For the 2005 reports, OMB required agencies to report the number of major systems and asked the IGs about the status and accuracy of their agencies’ inventories. In 2005, agencies reported 10,261 systems, composed of 9,175 agency systems and 1,094 contractor systems. However, only 13 IGs reported that their agencies’ inventories were substantially complete. A complete inventory of major information systems is a key element of managing the agency’s IT resources, including the security of those resources. Without reliable information on agencies’ inventories, the agencies, the administration, and Congress cannot be fully assured of agencies’ progress in implementing FISMA. FISMA mandates that agencies assess the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of their information and information systems. The Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and related NIST guidance provide a common framework for categorizing systems according to risk. The framework establishes three levels of potential impact on organizational operations, assets, or individuals should a breach of security occur—high (severe or catastrophic), moderate (serious), and low (limited)—and is used to determine the impact for each of the FISMA-specified security objectives of confidentiality, integrity, and availability. Once determined, security categories are to be used in conjunction with vulnerability and threat information in assessing the risk to an organization.
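The FIPS 199 framework described above assigns an impact level to each of the three security objectives; to arrive at a single overall category for a system, related NIST guidance applies a "high water mark," taking the highest of the three levels. A minimal sketch of that rule (the level names follow FIPS 199; the function itself is illustrative):

```python
# Sketch of FIPS 199-style categorization. Each security objective
# (confidentiality, integrity, availability) carries an impact level;
# the overall system category here uses the "high water mark"
# convention of taking the highest of the three.
ORDER = {"low": 1, "moderate": 2, "high": 3}

def overall_impact(confidentiality: str, integrity: str,
                   availability: str) -> str:
    objectives = (confidentiality, integrity, availability)
    return max(objectives, key=ORDER.__getitem__)
```

So a system with limited confidentiality impact but serious availability impact would be categorized moderate overall.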
OMB’s fiscal year 2005 reporting instructions included the new requirement that agencies report their systems and certain performance measures using FIPS 199 risk levels. If agencies did not categorize systems, or used a method other than FIPS 199 to determine risk level, they were required to explain why in their FISMA reports. For the first time, in the 2005 reporting, agencies reported the risk levels for their agency and contractor systems, as illustrated in table 1. Agencies reported that 9 percent of their systems were not categorized by risk level. The majority of systems without risk levels assigned were found at 4 agencies. One agency did not categorize 77 percent of its systems. Without assigned risk levels, agencies cannot make risk-based decisions on the security needs of their information and information systems. There are actions that OMB and the agencies can take to improve FISMA reporting and compliance and to address underlying weaknesses in information security controls. In our July 2005 report, we evaluated the adequacy and effectiveness of agencies’ information security policies and practices and the federal government’s implementation of FISMA requirements. We recommended that the Director of OMB take actions in revising future FISMA reporting instructions to increase the usefulness of the agencies’ annual reports to oversight bodies by: ● requiring agencies to report FISMA data by risk category; ● reviewing guidance to ensure the clarity of instructions; and ● requesting that the IGs report on the quality of additional agency processes, such as the annual system reviews. These recommendations were designed to strengthen reporting under FISMA by encouraging more complete information on the implementation of agencies’ information security programs. Consistent with our recommendation, OMB required agencies to report certain performance measures by system risk level for the first time in fiscal year 2005.
As a result, we were able to identify potential areas of concern in the agencies’ implementation of FISMA. For example, agencies do not appear to be prioritizing certain information security control activities, such as annual review of contractor systems or testing of contingency plans, based on system risk levels. For both of these activities, federal implementation of the control is lower for high-risk systems than it is for moderate or low-risk systems. OMB has also taken steps to increase the clarity of instructions in its annual guidance. It has removed several questions from prior years that could have been subject to differing interpretations by the IGs and the agencies. Those questions related to agency inventories and to plans of action and milestones. In addition, OMB clarified reporting instructions for minimally acceptable configuration requirements. The resulting reports are more consistent and, therefore, easier to analyze and compare. However, opportunities still exist to enhance reporting on the quality of the agencies’ information security-related processes. The qualitative assessments of the certification and accreditation process and the plans of action and milestones have greatly enhanced Congress’, OMB’s, and our understanding of the implementation of these requirements at the agencies. Additional information on the quality of agencies’ processes for annually reviewing or testing systems, for example, could improve understanding of these processes by examining whether federal guidance is applied correctly, or whether weaknesses discovered during the review or test are tracked for remediation. Extending qualitative assessments to additional agency processes could improve the information available on agency implementation of information security requirements. Agencies need to take action to implement the information security management program mandated by FISMA and use that program to address their outstanding information security weaknesses.
An agencywide security program provides a framework and continuing cycle of activities for managing risk, developing security policies, assigning responsibilities, and monitoring the adequacy of the entity’s computer-related controls. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, or improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources and disproportionately high expenditures for controls over low-risk resources. As we have previously reported, none of the 24 major agencies has fully implemented agencywide information security programs as required by FISMA. Agencies often did not adequately assess risks, develop sufficient risk-based policies or procedures for information security, ensure that existing policies and procedures were implemented effectively, or monitor operations to ensure compliance and determine the effectiveness of existing controls. Moreover, as demonstrated by the 2005 FISMA reports, many agencies still do not have complete and accurate inventories of their major systems. Until agencies effectively and fully implement agencywide information security programs, federal data and systems will not be adequately safeguarded against unauthorized use, disclosure, and modification. Agencies need to take action to implement and strengthen their information security management programs. Such actions should include completing and maintaining an accurate, complete inventory of major systems, and prioritizing information security efforts based on system risk levels. Strong incident procedures are necessary to detect, report, and respond to security incidents effectively. Agencies also should implement strong remediation processes that include processes for planning, implementing, evaluating, and documenting remedial actions to address any identified information security weaknesses. 
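The remediation processes just described (planning, implementing, evaluating, and documenting remedial actions) amount, in data terms, to a tracked list of weaknesses with milestones and statuses, as in a plan of action and milestones. A minimal sketch of such tracking, with illustrative field names that do not reflect OMB's actual reporting format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class POAMEntry:
    """One weakness tracked in a plan of action and milestones.
    Field names are illustrative, not OMB's reporting format."""
    weakness: str
    milestones: List[Tuple[str, str]] = field(default_factory=list)  # (date, description)
    status: str = "open"  # "open", "delayed", or "completed"

def status_counts(entries):
    """Tally weaknesses by remediation status, the kind of summary
    a quarterly POA&M update reports."""
    counts = {"open": 0, "delayed": 0, "completed": 0}
    for entry in entries:
        counts[entry.status] += 1
    return counts
```

A quarterly update would then report these counts alongside any weaknesses newly discovered since the last update.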
Finally, agencies need to implement risk-based policies and procedures that efficiently and effectively reduce information security risks to an acceptable level. Even as federal agencies are working to implement information security management programs, they continue to have significant control weaknesses in their computer systems that threaten the integrity, reliability, and availability of federal information and systems. In addition, these weaknesses place financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. The weaknesses appear in both access controls and other information security controls defined in our audit methodology for performing information security evaluations and audits. These areas are (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) software change controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. In the 24 major agencies’ fiscal year 2005 reporting regarding their financial systems, 6 reported information security as a material weakness and 14 reported it as a reportable condition. Our audits also identified similar weaknesses in nonfinancial systems. In our prior reports, we have made specific recommendations to the agencies to mitigate identified information security weaknesses. The IGs have also made specific recommendations as part of their information security review work.
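Of the five control areas above, the first, access controls, can be pictured as a default-deny authorization check: an operation is permitted only if it has been explicitly granted. A minimal sketch under that assumption, with hypothetical roles and operations:

```python
# Illustration of access controls as a default-deny check: only
# operations explicitly granted to a role are permitted.
# Roles and operations are hypothetical.
PERMISSIONS = {
    "analyst":  {"read"},
    "operator": {"read", "alter"},
    "admin":    {"read", "alter", "delete"},
}

def is_authorized(role: str, operation: str) -> bool:
    # Anything not explicitly granted is denied, including
    # requests from roles that are not in the table at all.
    return operation in PERMISSIONS.get(role, set())
```

The same default-deny posture underlies the least-privilege principle discussed next: grant only what the user's work requires.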
Agencies would benefit from addressing common weaknesses in access controls. As we have previously reported, the majority of the 24 major agencies had access control weaknesses. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion of the data. Based on our previous work performing information security audits, agencies can take steps to enhance the four basic areas of access controls: ● User identification and authentication. To enable a computer system to identify and differentiate users so that activities on the system can be linked to specific individuals, agencies assign unique user accounts to specific users, a process called identification. Authentication is the method or methods by which a system establishes the validity of a user’s claimed identity. Agencies need to implement strong user identification and authentication controls. ● User access rights and file permissions. The concept of “least privilege” is a basic underlying principle for securing computer systems and data. It means that users are only granted those access rights and file permissions that they need to do their work. Agencies would benefit from establishing the concept of least privilege as the basis for all user rights and permissions. ● Network services and devices. Sensitive programs and information are stored on networks, which are collections of interconnected computer systems and devices that allow users to share resources. Organizations secure their networks, in part, by installing and configuring network devices that permit authorized requests and limit services that are available. Agencies need to put in place strong controls that ensure only authorized access to their networks. ● Audit and monitoring of security-related events.
To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial that agencies implement system or security software that provides an audit trail that they can use to determine the source of a transaction, or to monitor the activities of users on the agencies’ systems. To detect and prevent unauthorized activity, agencies should have strong monitoring and auditing capabilities. In addition to electronic access controls, other important controls should be in place to ensure the security and reliability of an agency’s data. ● Software change controls. Counteracting identified weaknesses in software change controls would help agencies ensure that software was updated correctly and that changes to computer systems were properly approved. Software change controls ensure that only authorized and fully tested software is placed in operation. These controls -- which also limit and monitor access to powerful programs and sensitive files associated with computer operations -- are important in providing reasonable assurance that access controls are not compromised and that the system will not be impaired. These policies, procedures, and techniques help to ensure that all programs and program modifications are properly authorized, tested, and approved. Failure to implement these controls increases the risk that unauthorized programs or changes could be -- inadvertently or deliberately -- placed into operation. ● Segregation of duties. Agencies have opportunities to implement effective segregation of duties to address the weaknesses identified in this area. Segregation of duties refers to the policies, procedures, and organizational structure that help to ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records. 
Proper segregation of duties is achieved by dividing responsibilities among two or more individuals or organizational groups. For example, agencies need to segregate duties to ensure that individuals cannot add fictitious users to a system, assign them elevated access privileges, and perform unauthorized activities without detection. Without adequate segregation of duties, there is an increased risk that erroneous or fraudulent transactions can be processed, improper program changes implemented, and computer resources damaged or destroyed. ● Continuity of operations. The majority of agencies could benefit from having adequate continuity of operations planning. An organization must take steps to ensure that it is adequately prepared to cope with the loss of operational capabilities due to earthquake, fire, accident, sabotage, or any other disruption. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested continuity of operations plan. To ensure that the plan is complete and fully understood by all key staff, it should be tested, including surprise tests, and test plans and results should be documented to provide a basis for improvement. Among the aspects of continuity planning that agencies need to address are (1) ensuring that plans contain adequate contact information for emergency communications; (2) documenting the location of all vital records for the agencies and methods of updating those records in an emergency; and (3) conducting tests, training, or exercises frequently enough to have assurance that the plan would work in an emergency. Losing the capability to process, retrieve, and protect information that is maintained electronically can significantly affect an agency’s ability to accomplish its mission. ● Physical security. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft.
These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed. With inadequate physical security, there is increased risk that unauthorized individuals could gain access to sensitive computing resources and data and inadvertently or deliberately misuse or destroy them. In summary, through the continued emphasis on information security by Congress, the administration, agency management, and the accountability community, the federal government has seen improvements in its information security. However, despite the advances shown by increases in key performance measures, progress remains mixed. If information security is to continue to improve, agency management must remain committed to the implementation of FISMA and the information security management program it mandates. Only through the development of strong IT security management can the agencies address the persistent, long-standing weaknesses they face in information security controls. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the Committee may have at this time. Should you have any questions about this testimony, please contact me at (202) 512-6244. I can also be reached by e-mail at [email protected]. Individuals making key contributions to this testimony include Suzanne Lightman, Assistant Director, Larry Crosland, Joanne Fiorino, and Mary Marshall. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For many years, GAO has reported that ineffective information security is a widespread problem that has potentially devastating consequences. In its reports to Congress since 1997, GAO has identified information security as a governmentwide high-risk issue--most recently in January 2005. Concerned with accounts of attacks on commercial systems via the Internet and reports of significant weaknesses in federal computer systems that make them vulnerable to attack, Congress passed the Federal Information Security Management Act of 2002 (FISMA), which permanently authorized and strengthened the federal information security program, evaluation, and reporting requirements established for federal agencies. This testimony discusses the federal government's progress and challenges in implementing FISMA, as reported by the Office of Management and Budget (OMB), the agencies, and the Inspectors General (IGs), and actions needed to improve FISMA reporting and address underlying information security weaknesses. In its fiscal year 2005 report to Congress, OMB discusses progress in implementing key information security requirements, but at the same time cites challenging weaknesses that remain. The report notes several governmentwide findings, such as the varying effectiveness of agencies' security remediation processes and the inconsistent quality of agencies' certification and accreditation (the process of authorizing operation of a system, including the development and implementation of risk assessments and security controls). Nevertheless, fiscal year 2005 data reported by 24 major agencies, compared with data reported for the previous 2 fiscal years, show that these agencies have made steady progress in certifying and accrediting systems, although they reported mixed progress in meeting other key statutory information security requirements. 
For example, agencies reported that only 61 percent of their systems had tested contingency plans, thereby reducing assurance that agencies will be able to recover from the disruption of those systems with untested plans. Federal entities can act to improve the usefulness of the annual FISMA reporting process and to mitigate underlying information security weaknesses. OMB has taken several actions to improve FISMA reporting--such as requiring agencies to provide performance information based on the relative importance or risk of the systems--and can further enhance the reliability and quality of reported information. Agencies also can take actions to fully implement their FISMA-mandated programs and address the weaknesses in their information security controls. Such actions include completing and maintaining accurate inventories of major systems, prioritizing information security efforts based on system risk levels, and strengthening controls that are to prevent, limit, and detect access to the agencies' information and information systems.
SSA administers the SSI program, one of the largest federal programs providing assistance to people with disabilities. The SSI program was established in 1972 under Title XVI of the Social Security Act. The program provides cash benefits to low-income individuals, including children and youth, who meet financial eligibility requirements and who are blind or have disabilities. In December 2016, SSA paid SSI benefits to over 8.25 million individuals, including more than 1.2 million under age 18. In calendar year 2015, SSA paid almost $55 billion in SSI benefits. The average monthly payment in December 2016 was $542; however, the average payment for recipients under 18 was higher ($650). According to SSA, in many states, eligibility for SSI also confers eligibility for Medicaid benefits. To be eligible for SSI on the basis of disability, an individual must meet both financial and medical requirements. To determine eligibility, SSA staff, located in more than 1,200 field offices across the country, review SSI applications and verify financial eligibility. Following SSA’s initial review, state disability determination services (DDS) offices assess applicants’ medical eligibility for SSI. SSI has two sets of medical eligibility requirements to determine disability; one for adults (individuals age 18 and older) and one for individuals under 18. To be considered disabled, individuals under age 18 must have a medically determinable physical or mental impairment (or combination of impairments) that causes marked and severe functional limitations and that has lasted or is expected to last for a continuous period of at least 12 months or result in death. For adults to be considered disabled, they must have a medically determinable physical or mental impairment (or combination of impairments) that prevents them from doing any substantial gainful activity (SGA), and that has lasted or is expected to last for a continuous period of at least 12 months or result in death. 
SSI recipients, including those under age 18, undergo periodic continuing disability reviews (CDR) to ensure they continue to meet medical eligibility criteria. When a DDS first finds an individual medically eligible for SSI, it also assesses the likelihood that his or her disability will improve and, depending on that likelihood, develops a schedule for future CDRs. For SSI recipients under age 18 whose impairment is considered likely to improve, federal law requires SSA to conduct a CDR at least once every 3 years. Under SSA policy, in cases in which medical improvement is not expected, a CDR is scheduled once every 5 to 7 years. During these CDRs, DDS collects evidence such as medical and educational records to determine whether the individual continues to meet medical eligibility criteria based on their disability. In addition, SSA periodically conducts redeterminations on a sample of SSI case files to determine whether individuals continue to meet financial eligibility requirements. All SSI recipients are required to report certain information to SSA to help ensure that they continue to receive the correct benefit amounts. For example, SSI recipients must notify SSA when their earnings change, and this reported information, in turn, may trigger a redetermination. Federal law also requires that SSI recipients undergo a redetermination at age 18 to evaluate whether they meet adult (rather than youth) medical eligibility criteria. At the same time, SSA assesses whether the recipient continues to meet nonmedical (financial) eligibility requirements. This redetermination generally occurs within the year after a youth turns 18. If SSA determines a recipient does not meet adult eligibility criteria, he or she receives an unfavorable redetermination and ceases to receive SSI benefits. 
SSA reported that in fiscal year 2015, almost 57 percent of age 18 redeterminations resulted in an initial unfavorable determination by DDS; however, approximately half of those decisions were appealed, so the final percentage with unfavorable redeterminations is likely lower. Education’s VR State Grants program, authorized under the Rehabilitation Act of 1973, as amended, helps individuals with disabilities, including transition-age youth who may also be receiving SSI, find and keep employment, among other things. This program, administered by Education’s Rehabilitation Services Administration (RSA) within the Office of Special Education and Rehabilitative Services, is the largest program authorized under this Act and provided approximately $3.1 billion in formula grants to state VR agencies in fiscal year 2016. The grants support a wide range of services, individualized based upon the needs of the eligible individual with a disability, including: assessments; counseling, guidance, referrals, and job placements; vocational and other training; transportation; and, in certain circumstances, postsecondary education and training. Individuals must apply and be determined eligible by the state VR agency to receive individualized VR services. If VR agencies lack the financial and staff resources to serve all eligible individuals in the state, they are generally required to prioritize individuals with the most significant disabilities. Individuals receiving disability benefits, including transition-age youth on SSI, are generally presumed eligible for VR services and considered individuals with a significant disability. However, depending on their disability, including level of functional impairment, SSI recipients may not be considered as having a “most significant” disability by the VR agency; and, if the VR agency has implemented an order of selection, the individual may be put on a waiting list to receive services. 
In 2014, the Workforce Innovation and Opportunity Act (WIOA) was enacted, which, among other things, amended the Rehabilitation Act of 1973 and required VR agencies to provide additional services to students and youth with disabilities. Specifically, WIOA requires that states reserve at least 15 percent of their state’s allotment of federal VR funds for the provision of pre-employment transition services to students with disabilities. To receive pre-employment transition services, a student with a disability need only be potentially eligible for VR services and does not have to apply to VR. The Individuals with Disabilities Education Act (IDEA) provides formula grants to states, which pass them to eligible local education agencies to assist in providing special education and related services. IDEA generally requires schools to provide special education and related services to students with disabilities specified under IDEA. For each student with an IDEA-specified disability, schools must establish an individualized education program (IEP) team that generally includes the child’s teacher and other school and school district personnel, the student’s parents, the student (as appropriate), and, at the discretion of the parent or public agency, others with relevant knowledge, including related service providers. The IEP team is required to develop a written IEP for each student that includes, among other information, a statement of the student’s academic achievement and functional performance, measurable academic and functional goals, and the special education and related services to be provided. 
Also under IDEA, beginning with the first IEP to be in effect when a student turns 16, or earlier if determined appropriate by the IEP team, and updated annually thereafter, the IEP must include, among other things, appropriate, measurable postsecondary goals for training, education, employment, and, where appropriate, independent living skills, and transition services needed to help the student reach these goals. Schools may invite representatives from VR agencies to these transition planning meetings. Students with disabilities who do not qualify for special education and an IEP under IDEA may qualify for services under section 504 of the Rehabilitation Act of 1973, as amended (section 504). Education’s section 504 regulations require school districts to provide qualified students regular or special education and related aids and services designed to meet the educational needs of students with disabilities as adequately as the needs of students without disabilities are met. Transition services may be documented in a 504 plan, but they are not a requirement of such plans. SSA’s role in encouraging employment for transition-age youth on SSI as they move into adulthood is focused on administering work incentives and other employment supports that allow them to keep at least some of their benefits even if they have earnings. However, very few youth on SSI benefit from these supports. As a provider of means-tested transfer payments, SSA does not provide direct employment services to SSI recipients, including youth on SSI. However, for recipients who want to work, the SSI program is designed to support their efforts and reduce their reliance on benefits, according to SSA’s Annual Report of the Supplemental Security Income Program. Federal law provides several work incentives and other employment supports that help SSI recipients—including youth—to enter, re-enter, or stay in the workforce. 
Most transition-age youth are also students, and the importance of education is emphasized by the primary work incentive for this population, the Student Earned Income Exclusion (SEIE), which encourages work, but requires recipients to attend school to be eligible for the exclusion. SSA also administers other work incentives and employment supports that are available, but not targeted to transition-age youth. See table 1 for a list of key work incentives and employment supports available to this population. Few transition-age youth on SSI benefit from SEIE—the only SSA-administered work incentive targeted specifically to younger SSI recipients. SEIE allows SSI recipients under age 22 who also regularly attend school, college, university, or vocational or technical training to exclude a portion of their earnings—$1,790 a month, up to $7,200 a year, in 2017—from their countable income for the purposes of determining SSI eligibility and benefit amounts. Based on data provided by SSA, we found that 1.3 percent and 1.4 percent of all transition-age youth (ages 14 to 17) on SSI had income excluded under SEIE in calendar years 2012 and 2013, respectively. Our analysis of SSA data further suggests the possibility that some youth who may be eligible may not be benefiting from this SSI provision. SSA data show that few transition-age youth benefit from SEIE, in part, because few have earned income. For example, in 2012, 3.3 percent (15,234 out of 455,363) of transition-age youth reported earned income. However, our analysis of SSA data found that even among those transition-age youth with earned income, in most months less than half benefited from SEIE. The percentage of youth with earnings who benefited from SEIE in 2012 varied by age and month, but ranged from a low of 28 percent to a high of 53 percent. (See fig. 1.) These percentages were similar in 2013 through 2015. 
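The interaction of SEIE's monthly and annual caps can be illustrated with a short sketch. This is a hypothetical calculation using the 2017 amounts cited above, not SSA's actual benefit-computation logic; the function name and the $800-per-month earnings scenario are illustrative assumptions.

```python
# Hypothetical illustration of the SEIE caps described above (2017 amounts).
# Simplified sketch only; not SSA's actual benefit-computation system.

SEIE_MONTHLY_CAP = 1_790  # maximum exclusion per month (2017)
SEIE_ANNUAL_CAP = 7_200   # maximum exclusion per calendar year (2017)

def seie_exclusion(monthly_earnings, ytd_excluded):
    """Earnings excluded under SEIE for one month, given the amount
    already excluded earlier in the calendar year."""
    annual_room = max(0, SEIE_ANNUAL_CAP - ytd_excluded)
    return min(monthly_earnings, SEIE_MONTHLY_CAP, annual_room)

# A student earning $800 every month reaches the annual cap after 9 months.
ytd = 0
exclusions = []
for _ in range(12):
    amount = seie_exclusion(800, ytd)
    ytd += amount
    exclusions.append(amount)
```

Under these assumptions, the first nine months each exclude the full $800 (9 × $800 = $7,200), after which the annual cap leaves nothing to exclude for the rest of the year.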
Given that SEIE should be applied automatically for all eligible students who have reported earned income, SSA officials offered the following possible reasons, other than SSA user or system error, for why youth with earnings might not have benefited from SEIE: they were not students or they did not report their student status to SSA. However, previous research found more than 94 percent of transition-age youth on SSI reported being enrolled in school. Further, although some youth on SSI may not report their student status, SSA policy instructs staff to develop and verify school attendance for youth under 18 who report that they expect to earn over $65 in a month. SSA also has procedures for capturing an individual’s student status during his or her initial application and during a redetermination. Despite these procedures, the fact that many youth with earnings are not receiving SEIE suggests that SSA may not be confirming student status or applying SEIE in a timely manner or in accordance with policy. SSA officials told us that the agency does not regularly analyze SEIE data and said they do not believe doing so would help them better understand SEIE’s effectiveness or reach. However, our recent data request uncovered potential undercounting of earnings and SEIE use. Federal standards for internal control state that an agency should identify, analyze, and respond to risks related to achieving its objectives. Absent this analysis, SSA cannot know the extent to which various factors may contribute to the low percentage of transition-age youth with earnings receiving SEIE, or whether errors made by staff or data system errors are precluding some SSI recipients from receiving an income exclusion for which they are eligible. Similarly, the number of transition-age youth on SSI who benefited from other SSA-administered work incentives and employment supports was either unknown or low. 
For example, SSI’s Earned Income Exclusion, which excludes the first $65 of income earned each month from benefit calculations and half of earnings after that, is available to the broadest set of SSI recipients with earnings. However, SSA officials told us that SSA has not conducted analysis to determine the extent to which transition-age youth on SSI benefit from this incentive. SSA officials told us that their systems automatically apply this exclusion to any earned income remaining after the SEIE has been applied and that any individual with earned income (whether or not the SEIE applies) automatically receives this exclusion. SSA data for Impairment-Related Work Expenses (IRWE), Blind Work Expenses (BWE), and the Plan to Achieve Self-Support (PASS) show low uptake by transition-age youth on SSI for each of these provisions as well. For example, SSA data show that no transition-age SSI recipients benefited from IRWE or BWE and no more than five had a PASS in any calendar year 2012 through 2015, the most recent data available. SSA staff at three field offices told us that, in their view, use of PASS by transition-age youth may be low because it is complex and has many requirements, including that the recipient develop long-term career goals. Another provision of federal law—referred to as the section 301 provision—allows certain individuals to continue receiving SSI benefits even when SSA determines through an age 18 redetermination or CDR that they no longer have a medical disability. For example, recipients may retain their benefits if they are 18 to 21 years old and are receiving special education and related services through an IEP, or if they have a PASS, or if they are enrolled in VR or a similar program and meet other requirements. The possibility of these continued payments underscores the importance of ensuring transition-age youth are aware of the section 301 provision and of the services that qualify them for it, such as IEPs, PASS, and VR. 
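The Earned Income Exclusion arithmetic described above (the first $65 of monthly earnings excluded, then half of the remainder, applied to earnings left after any SEIE) can be sketched as follows. This simplified illustration follows only the description in the text; it omits the general income exclusion and other SSI rules, and the function name is an assumption.

```python
# Simplified sketch of the Earned Income Exclusion as described above:
# exclude the first $65 of monthly earnings, then half of the rest.
# Per SSA officials, the exclusion applies to earnings remaining after
# the SEIE.  Omits other SSI income rules; illustrative only.

def countable_earned_income(monthly_earnings, seie_excluded=0):
    remaining = max(0, monthly_earnings - seie_excluded)
    after_flat_exclusion = max(0, remaining - 65)
    return after_flat_exclusion / 2

# A recipient earning $365 with no SEIE would have (365 - 65) / 2 = $150
# of countable earned income under this simplified rule.
```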
In 2015, the most recent year for which data are available, about 1,200 adults ages 18 and 19 benefited from this provision. SSA officials told us that this provision has not been widely used because eligibility through an IEP only applies to individuals ages 18 through 21, and because few youth under 18 were likely served by VR agencies. However, SSA officials were unable to provide data on the number of individuals who applied for and ultimately did not benefit because the agency does not maintain these data in a format that would allow for this type of analysis. Lastly, other legal provisions may encourage work by allowing SSI recipients to maintain SSI benefits or Medicaid even if they earn over SGA, in certain circumstances, but the number of transition-age youth who benefit from these provisions is not known. Specifically, SSA does not analyze data to determine the extent to which transition-age youth may maintain SSI benefits or Medicaid under these provisions. Because youth typically do not exceed SGA, the number affected is likely small, according to SSA officials. SSA has been involved in two initiatives to test ways to encourage employment of transition-age youth. The SSA-sponsored Youth Transition Demonstration (YTD) did not result in changes to SSA’s work provisions, according to SSA officials. The Promoting the Readiness of Minors in Supplemental Security Income (PROMISE) initiative, led by Education, is ongoing. SSA’s YTD targeted individuals ages 14 to 25 at six demonstration sites who received or were likely to receive SSI or Social Security Disability Insurance (SSDI) benefits. YTD tested the impact of various waivers of SSA-administered work incentive rules in combination with a range of strategies and work supports on employment, income, and other outcomes. 
According to the YTD final evaluation in 2014, all six site locations were required to include certain program components, such as work-based experiences; benefits counseling; family supports, including transition-related information; and connections to service providers, including health care and transportation services. The sites had flexibility in the approaches they used to implement those program components. The final evaluation showed inconsistent results, and SSA was unable to determine whether waivers contributed to positive outcomes at some sites. On the positive side, site-specific interim evaluations showed that, after 1 year, YTD increased participants’ use of benefits and incentive counseling, and their awareness of at least some work incentives, at all six sites. YTD also increased participants’ understanding of the effects of work on SSI benefits, medical coverage, or both, at three sites. However, the final 2014 YTD evaluation report found mixed results. For example, two of the six sites showed positive impacts on employment, two sites showed positive impacts on annual earnings, and two sites showed positive impacts on participation in productive activities, such as education, employment, or training. However, none of the sites saw an improvement in these 14- to 25-year-olds’ self-determination and one site saw an increase in delinquency. Moreover, SSA officials told us that the final evaluation could not determine the extent to which changes to work incentives had led to any of the positive effects experienced at some sites. SSA officials said that because all YTD participants were eligible for work incentive waivers and other services, isolating the effects of changes to work incentives from the effects of the other services was not possible. SSA officials also said they have not made program changes based on YTD results, but that YTD informed the development of the PROMISE initiative. 
Officials said SSA is conducting an internal study of YTD to assess longer term outcomes, which they hope to complete by the end of 2017. SSA is currently a partner in the ongoing PROMISE initiative, which is being led by Education. Through PROMISE, Education provided funds to selected states to design and implement demonstration projects to improve outcomes for youth on SSI and their families. PROMISE targets transition-age youth who are 14, 15, and 16 years old and receiving SSI and their families with interventions including vocational rehabilitation, case management, benefits counseling, financial literacy training, career and work-based learning experiences, and parent training and information. SSA provided data on youth receiving SSI to the PROMISE demonstration projects for enrollment purposes, is funding the PROMISE research evaluation of the demonstration, and, according to SSA officials, is providing technical assistance regarding SSA policies to project sites. SSA does not have a role in direct delivery of services. Education awarded six 5-year PROMISE grants in 2013, with most projects beginning services within the first year. An interim impact report is scheduled for release in summer 2018, and a long-term evaluation is scheduled for winter 2022. During our New York site visit, we spoke with staff from organizations providing services under the PROMISE initiative who told us that, although they did not yet have outcome data, their early observations suggest positive effects, such as that youth are engaged, families are interested in having their children work and in receiving services to encourage work, and PROMISE is creating a more collaborative environment among service providers, VR, and schools. 
According to SSA officials, VR staff, and other stakeholders with whom we spoke, transition-age youth and their families are often unaware of or do not understand SSA-administered work incentives and supports, and may fear that working will negatively affect their SSI or Medicaid benefits. Although we were unable to identify recent research or data corroborating these perspectives, a 2007 study using data collected in 2001 and 2002 found that only 22 percent of SSI recipients ages 14 to 17 knew about the work incentives or discussed them with an SSA representative. Experts believe this lack of knowledge and associated concerns about the effect of work on benefits may reduce work attempts by transition-age youth. For example, in a planning report for YTD, the research organization Mathematica stated that “lack of knowledge about how work experiences, benefits, and SSA incentives interact leads to low utilization of the incentives among beneficiaries.” Similarly, staff in SSA field offices and state VR agencies, researchers, and others we spoke to said fear of losing health care or SSI benefits creates a barrier to employment for transition-age youth, and some said that families may not encourage youth on SSI to work because of these fears. SSA officials also said that some families believe they are helping their children by preventing them from working because it will enable them to keep benefits longer or reduce the chance of an unfavorable age 18 redetermination. Despite such gaps in knowledge or understanding of work incentives and the age 18 redetermination process among youth on SSI and their families, and contrary to general SSA policy, SSA staff may not be systematically conveying information about these topics during CDRs and other interviews. 
SSA policy states that interviewers are responsible for providing accurate and meaningful information about the SSI program, and for making the process of applying for and maintaining eligibility as understandable and uncomplicated as possible. SSA policy also states that recipients may not know the right questions to ask to obtain the information they need to make informed decisions. SSA officials we interviewed said, consistent with SSA policy, field staff collecting information during a CDR, financial eligibility redetermination, or age 18 redetermination, would discuss work incentives with recipients. However, SSA field office staff we interviewed did not confirm such information sharing consistently occurs in SSA field offices. Staff said such conversations may not occur for a variety of reasons. For example, they said youth and their families do not generally seek out information on work incentives, and staff may not have time for such discussions or be experts on work incentives. SSA may also be missing opportunities to allay certain fears about how work might affect age 18 redeterminations. SSA policy indicates, and officials from the Office of Disability Determinations (ODD) confirmed, that although information on work history is collected and may be considered when determining whether a person meets medical criteria, it only influences financial eligibility when specific conditions are met. According to SSA officials, earnings are only considered when determining capacity to work if an individual has worked at or above SGA for a period long enough to gain work skills necessary for the job, and that such instances are rare. SSA officials told us that virtually all unfavorable age 18 redeterminations result from a medical evaluation, not work history. The medical evaluation takes into account the medical evidence and, if needed, the physical and mental functional capacity of the individual. 
SSA officials could not provide specific data on the relationship of prior work to redetermination outcomes. However, they said few youth undergoing an age 18 redetermination have a work history, and among those who do, a very small number have worked sufficiently above SGA to result in an unfavorable redetermination on that basis. At the same time, SSA’s policies do not instruct staff to consistently convey information explaining how work may or may not affect the age 18 redetermination when speaking with youth and their families. Federal standards for internal control state that an agency should communicate the information necessary to achieve its objectives. Without standard procedures and language to guide SSA representatives and ensure they regularly and consistently discuss how work incentives can allow transition-age youth on SSI to work without jeopardizing their benefits, SSA may miss opportunities to allay misplaced fears, encourage work, and potentially reduce future dependence on benefits. As part of its effort to increase awareness and understanding of work incentives and supports available to SSI recipients, SSA funds Work Incentives Planning and Assistance (WIPA) projects in every state, and relies on the projects to provide benefits information and counseling to transition-age youth. The projects typically provide general information on benefits or work supports and referrals to additional services, as needed, and individualized counseling on benefits and related services. WIPA projects are supposed to conduct outreach and provide information to individuals with disabilities, including transition-age youth, and to advise them of the work incentives available to them. However, the WIPA projects’ reach is limited. According to data from SSA’s contracted technical assistance provider for WIPA projects, WIPA projects served just 345 youth ages 14 to 17 on SSI between July 2013 and June 2016. 
While staff at one of the two WIPA project locations we visited said that they work with youth under age 18 to the extent possible, they said schools typically assist this age group rather than WIPA projects. At the other location, WIPA project staff told us that most of their clients are adults because beneficiaries typically do not seek services until they are working. WIPA project staff said funding constraints limit their ability to serve everyone and to conduct outreach. When the WIPA program was established in 1999, the law put a limit of $23 million on the amount of grants, cooperative agreements, and contracts that can be awarded each fiscal year under this program, and the limit has not changed since then. More recently, SSA has taken additional steps to provide written information about work incentives and supports to transition-age youth on SSI and their families by developing a new brochure; however, this brochure—while helpful for some—may not be sufficient to allay fear of work affecting benefits. SSA officials told us that lessons learned from YTD influenced the agency’s development of the brochure to inform youth, their families, and service providers about the age 18 redetermination process and available resources. While the new brochure is a positive step, it does not contain key information that could help alleviate fear that work will mean losing benefits, such as how work is considered during the age 18 redetermination process or the circumstances under which youth can work and maintain Medicaid coverage. In addition, some stakeholders, including WIPA project and VR staff, told us that written material, such as a brochure, may not be sufficient to convey complex information on work incentives and how working affects benefits. 
For example, although SSA officials said one purpose of the brochure is to increase awareness of available resources, staff from WIPA’s technical assistance provider told us in December 2016 that they did not believe the brochure, mailed in August 2016, had led to an increase in youth seeking WIPA project services. Federal standards for internal control state that an agency should communicate quality information needed to achieve its objectives with external parties, and that this information should be communicated using appropriate methods that consider factors including the intended audience and the nature of the information, among others. For example, results from YTD suggest that increased benefits counseling was associated with increased awareness of work incentives. Having access to individualized training and employment services provided by VR agencies helps transition-age youth on SSI develop the skills they need to transition to adulthood and the workforce. However, SSA does not have a systematic way to connect these youth to state VR agencies that provide employment-oriented and other services to individuals with disabilities. To achieve successful transition to adulthood, it is important that transition-age youth with disabilities receive transition planning and employment-related services that help them prepare for and engage in gainful employment to the extent of their capabilities. Such services are provided by state VR agencies under the VR State Grants program, which is administered by Education’s Rehabilitation Services Administration, and may include individualized: assessments; counseling, guidance, referrals, and job placements; vocational and other training; transportation; and, in certain circumstances, postsecondary education and training. Participation in VR also may allow transition-age youth who would otherwise lose SSI benefits due to an unfavorable CDR or age 18 redetermination to continue receiving SSI payments. 
Despite the advantages of participating in VR programs, our review of data from five state VR agencies found few transition-age youth (ages 14 to 17) receiving SSI who had open VR service records in calendar year 2015. Specifically, in four of the five states, the percentage of transition-age youth ages 14 to 17 on SSI with open VR service records was less than 1 percent. In the fifth state, approximately 3 percent of such youth had an open VR service record. Although there may be many reasons for low VR participation by transition-age SSI recipients at the five state VR agencies we spoke with, SSA’s stated inability to directly refer these youth to VR agencies does not help to improve participation rates. Prior to the enactment of the Ticket to Work and Work Incentives Improvement Act of 1999, SSA was required to consider each claimant’s need for vocational rehabilitation and refer SSI recipients ages 16-64 to state VR agencies. While the enactment of this Act expanded the pool of employment service providers available to recipients of SSI and other disability benefits, SSA limited the Ticket to Work and Self-Sufficiency (Ticket to Work) program to adults. In addition, the Act removed the language that had required SSA to make direct referrals of benefit recipients to VR providers. SSA has interpreted this legal change as “eliminat[ing] SSA’s authority to refer recipients for vocational rehabilitation (VR) services” in states in which the Ticket to Work program has been implemented. Because the Ticket to Work program has been implemented in all states, it is SSA’s view that it is prohibited from directly referring adults and youth on SSI to VR services. Because SSA states that it can no longer make direct referrals to VR agencies for services and the Ticket to Work program only supports adults, SSA lacks a mechanism to help connect youth on SSI to VR services. 
In contrast, the Ticket to Work program provides SSA a well-developed structure to connect SSI recipients age 18 and older to VR agencies or other employment networks. Specifically, SSI recipients ages 18 to 64 are issued a ticket that can be used to obtain vocational rehabilitation, employment, or other support services from a state VR agency or an approved employment network of their choice. SSA has a Helpline number for the Ticket to Work program, and the program has its own website containing information on its benefits and how to access VR agencies and other service providers. Since SSI recipients under age 18 are not eligible for tickets, SSA has no structure in place to ensure transition-age youth are made aware of and encouraged to take advantage of available employment programs that can help reduce their reliance on benefits as they transition into adulthood. In February 2016, SSA issued an advance notice of proposed rulemaking in which it solicited public input on how to improve the Ticket to Work program, including how the program could encourage youth to pursue work-related opportunities. SSA officials said that they have no timeline for further actions related to this notice at this time. In addition, an SSA official said they are in the early stages of considering a new initiative under SSA’s demonstration authority that would test whether having a state DDS make direct referrals of age 18 redetermination cases to VR agencies would result in increased VR services, increased employment outcomes, and reduced dependency on SSI. This initiative would involve the DDS and VR agencies located in one state. The SSA official said they are still working out the details, but said that if the project is feasible it would ideally begin sometime in 2017 and last at least 1 to 2 years. Without direct referrals from SSA and access to the Ticket to Work program, the primary way that youth on SSI are connected to VR is through their schools. 
While referrals to VR agencies can come from other sources, national Education data show elementary and secondary schools are the primary source of referrals for transition-age youth (see fig. 2). Specifically, for transition-age youth with VR service records that were closed in fiscal year 2015, over 80 percent had been referred to VR agencies by elementary and secondary schools. Students with disabilities can be connected to VR agencies through transition services provided by their schools as required under IDEA. IDEA requires states and school districts to identify, locate, and evaluate children suspected of having a disability, as defined in IDEA, and who are in need of special education and related services. For students age 16, or younger if determined appropriate by the IEP team, schools must develop and implement an IEP that incorporates postsecondary goals and provides access to appropriate transition services. The transition planning process develops a student’s postsecondary goals for training, education, and employment, among other things. Although IDEA does not specifically require school districts to include VR agencies in transition planning, Education’s regulations require school districts to invite a representative from a VR agency or from other agencies likely to be providing transition services to a student’s IEP team meetings, when appropriate, and with the prior consent of the parents or student who has reached the age of majority. According to Education guidance, VR agency involvement during the transition planning phase of an IEP helps provide a bridge to VR services for eligible students preparing for life after school. Rehabilitation Services Administration officials and a few VR agencies told us that staff from VR agencies are invited to transition planning meetings by schools, but due to capacity constraints, these staff may not attend all such meetings. 
Furthermore, based on our interviews with school and school district officials in the two states we visited, the relationship between VR agencies and schools and school districts varies. For example, school officials in two districts said that VR agency staff are more involved and meet with students weekly, and as a result, students with IEPs are more likely to be connected to VR services. Some officials in the other school districts we visited described the relationship more as a “hand-off” of the student from the school to the VR agency. For example, VR agency staff would typically meet with students approaching graduation. Regardless of the relationship, a student makes the choice to apply or not to apply for VR services. While VR and SSA officials we interviewed said that most students on SSI have IEPs, and that their IEPs can help connect them to VR services, we found that neither schools nor SSA collect or analyze data that would allow them to determine the extent to which youth on SSI do, in fact, have IEPs that would help them connect with VR services. Although schools document information on students with IEPs, school officials told us they do not collect information on whether students are receiving SSI benefits, and they have no systematic way to obtain this information. SSA maintains data on whether a student is receiving SSI and collects information on whether a recipient is receiving special education, or has an IEP, in certain cases. However, SSA officials said they do not analyze these data to determine how many youth on SSI have IEPs that could facilitate connecting them to VR services. 
Based on data collected in 2001-2002 for the National Survey of SSI Children and Families, one study found that approximately 70 percent of surveyed youth on SSI had an IEP at some point during their schooling, indicating that roughly 30 percent of these youth lack an established path that may connect them to services to help them transition into employment or postsecondary education, and potentially reduce their dependence on SSI. In addition, youth who have dropped out of school also lack an IEP pathway to VR services. Federal standards for internal control state that agencies should use quality information to achieve their objectives. Absent data on whether youth on SSI have IEPs that can help them connect to transition services or additional options for connecting youth to VR services, SSA cannot ensure that these youth are receiving or have access to services they need to help them prepare for adulthood and the workforce. With enactment of WIOA in 2014, more transition-age SSI recipients are potentially connected to VR services, although the extent to which this occurs is not known. Under WIOA's amendments to the Rehabilitation Act of 1973, VR agencies are now required to provide pre-employment transition services to students with disabilities beginning at age 16 (or younger if a state elects a younger minimum age). WIOA's amendments to the Rehabilitation Act of 1973 also require states to reserve at least 15 percent of their state allotment of federal VR funds to provide these services to students, including job exploration counseling, work-based learning experiences, counseling on transition or postsecondary educational programs, workplace readiness training, and instruction in self-advocacy. Students with disabilities are eligible for pre-employment transition services even if they have not applied or been found eligible for VR's regular services. 
Most VR agencies we interviewed are in the early stages of implementing WIOA's amendments to the Rehabilitation Act of 1973, and as such, the extent to which the new provisions will increase participation by youth on SSI is not known. While certain WIOA amendments broadly support all students with disabilities, they do not specifically target youth on SSI. In addition, some state VR agency officials told us they do not determine an individual's SSI status prior to the application process for VR services, and thus may not be able to capture data on the number of youth on SSI they serve through pre-employment transition services. Data and information sharing about transition-age youth on SSI, which could potentially facilitate the provision of services, is limited among SSA, VR agencies, and Education, and officials said there are privacy considerations related to sharing information. SSA does not systematically provide Education or VR agencies with data that would allow them to identify transition-age youth on SSI for outreach on services. VR agency staff we interviewed told us that when an individual applies to VR for services, the agency can query SSA on the individual's disability benefits, including SSI benefits. However, this query can only be conducted on a case-by-case basis, some VR officials said, and VR agencies do not have access to broader information about the population of youth on SSI who would be eligible for VR services in their area. VR officials in four of five states where we conducted interviews said having such data would be beneficial because it would help them conduct outreach in a more focused way. Officials in the fifth state said that having such information might be useful because it would allow them to determine the extent to which they are reaching the population of youth on SSI. 
Education officials also told us that, if available and consistent with applicable privacy laws, VR agencies might use information on youth receiving SSI to conduct outreach to youth who may not be connected to the school system, such as youth who have dropped out of school, are homeless, migrants, or seasonal workers. Education officials within the Rehabilitation Services Administration also said that generally state VR agencies have the capacity to conduct outreach, and additional data from SSA would be helpful. Although access to more comprehensive SSA data might improve outreach by VR agencies, privacy concerns and other factors inhibit data sharing by SSA. Both SSA and Education have raised privacy and legal concerns about sharing SSI recipient data, indicating that such sharing may be prohibited under federal law; however, SSA has participated in other data sharing arrangements. In each instance, steps were taken to protect personal information and address privacy requirements. SSA officials told us that, as part of the Ticket to Work program, SSA has provided information on eligible beneficiaries to non-VR employment networks that asked for this information. While the agency stopped this practice in March 2015, due to privacy concerns, SSA officials said they are currently conducting an initiative in which SSA provides employment networks encrypted data on eligible beneficiaries via secure messaging. This includes the name and address or the name and phone number of potentially eligible beneficiaries for conducting outreach. SSA officials said they anticipated continuing this project for several months before evaluating the process and deciding how to proceed. However, SSA officials said they have not provided similar information to VR agencies because (1) the demand for VR services is generally high and negates the need to conduct outreach and (2) they do not have the legal authority to refer individuals to VR. 
In addition, under the PROMISE initiative, SSA provided identifying information on groups of SSI recipients, such as contact information, to facilitate outreach and enrollment. According to SSA and Education officials, that data sharing was permitted because it was conducted for research purposes. Nevertheless, steps were taken to ensure privacy requirements were met, such as obtaining consent from SSI youth and their parents or guardians about participating, and using unique identification numbers in lieu of Social Security numbers. Finally, SSA and Education have a data sharing agreement in place; however, the shared data are currently not used for VR outreach or program management purposes. The data sharing agreement establishes procedures and conditions for the merging of SSA and Education administrative data to support research and program evaluation. The data sharing agreement specifies that SSA will remove personally identifiable information, including Social Security numbers, before sharing merged files with Education. Under this agreement, SSA has access to data on the number and the characteristics of individuals exiting the VR program each fiscal year, which, according to Education, include only closed, not open, service records. As such, neither SSA nor Education knows the extent to which youth on SSI are receiving VR services. Education officials said that beginning in July 2017, due to requirements in WIOA, Education will begin collecting data from VR programs on a quarterly basis for both current program participants and those who have exited the program. Education and SSA officials said these data on open service records, in combination with SSA's recipient data, could also be used to determine in a more regular and timely manner the total population of youth on SSI receiving VR services. 
However, SSA officials said that without knowing exactly what the data will include, they do not know whether they will prove useful, and therefore do not yet have any plans to analyze these data. SSA officials told us that without more information on how WIOA amendments to the Rehabilitation Act of 1973 will change services to youth, they were unsure what additional outreach should be conducted, or whether additional initiatives or data will be necessary to connect youth on SSI to VR services. Many individuals with disabilities want to work for the financial and personal rewards that employment can provide, and it is in SSA's interest to help SSI recipients find employment and ultimately reduce or end their dependence on SSI. Helping transition-age youth on SSI engage in work prior to age 18 is critical given that research finds SSI recipients often continue receiving benefits for decades, resulting in high costs to the federal government. Yet few transition-age youth on SSI are working, and many of those who do work are not benefiting from provisions under federal law to exclude some of their income and retain still-needed benefits. While SSA maintains data on the number of SSI recipients who work and use work incentives and employment supports, by not regularly analyzing these data, SSA cannot ensure transition-age youth are receiving work incentives for which they are eligible. SSA does not know, for example, why many transition-age youth with income are not receiving the SEIE, which is targeted specifically to this population. In addition, lacking procedures that ensure systematic, consistent communication with youth and families about work incentives and redetermination rules, SSA is forgoing opportunities to encourage employment and potentially allay fears that may create a barrier to employment. 
SSA also states that it is hindered in its ability to connect transition-age youth to VR services because it is no longer allowed to directly refer them and SSI recipients under 18 are not included in its program that assists adults with making this connection. SSA must rely primarily on schools to make this connection for students. However, SSA is well positioned to work with Education to determine the extent to which transition-age youth on SSI are not being connected to VR services for which they are eligible, and to assess options to better ensure they receive them, as appropriate. Without further efforts, the program risks unwarranted costs and the youth it serves may be less likely to obtain self-sufficiency. We recommend that the Acting Commissioner of the Social Security Administration take the following actions:
1. Analyze the SEIE data to determine why a large proportion of transition-age youth on SSI with reported earnings did not benefit from the SEIE and, if warranted, take actions to ensure that those eligible for the incentive benefit from it.
2. Analyze options to improve communication about SSA-administered work incentives and the implications of work on SSI benefits, with a goal of increasing understanding of SSI program rules and work incentives among transition-age youth and their families. This should include, but not necessarily be limited to, updating SSA's procedures for staff meeting with SSI applicants, recipients, and their families to regularly and consistently discuss, when applicable, how work incentives can prevent reductions in benefit levels and how work history is considered during eligibility redeterminations.
3. Work with the Secretary of Education to determine the extent to which youth on SSI are not receiving transition services through schools that can connect them to VR agencies and services.
4. Explore various options for increasing connections to VR agencies and services, including their potential costs and benefits. 
One option, among others, could be to expand the Ticket to Work program to include youth. We provided a draft of the report to SSA and Education for their review. In written comments, SSA agreed with two of our recommendations, partially agreed with one, and disagreed with one. We have reproduced SSA’s and Education’s comments in appendices II and III. We have incorporated them—as well as technical comments provided—in the report, as appropriate. In its comments, SSA suggested GAO clarify SSI’s role as part of a broader social safety net, and explain that the SSI program does not have provisions for SSA to ensure recipients have access to a variety of services provided by federal and state agencies. We agree, and added additional information in the report suggested by SSA to help convey this. SSA also commented that the draft report did not accurately portray efforts to encourage work and explain work incentives to youth receiving SSI, citing efforts made to produce a new brochure targeting youth and directing WIPA projects to further target youth—both of which were results of the YTD project. We discussed SSA’s new brochure in our report, and stated that it will be helpful for some SSI beneficiaries. We also discussed SSA’s use of WIPA projects to provide benefits information and counseling, and noted that SSA has instructed WIPA projects to serve beneficiaries ages 14 and older and conduct outreach targeting transition-age youth. SSA agreed with our recommendation to analyze Student Earned Income Exclusion (SEIE) data to determine why a large portion of transition-age youth with reported income did not benefit from SEIE and take steps, if warranted, to ensure they do. 
SSA also agreed to explore various options for increasing connections to Vocational Rehabilitation (VR), stating that in addition to assessing options for referring youth to VR and/or changing the Ticket to Work program, the agency will continue to research other options for supporting transitioning youth. SSA partially agreed with our recommendation that SSA work with Education to determine the extent to which youth on SSI are not receiving transition services through schools that can connect them to VR. SSA noted its ongoing collaboration with Education and other agencies through the Promoting Readiness of Minors in SSI (PROMISE) project, stating the initiative is testing the provision of VR services to youth receiving SSI and will provide some evidence related to the role of schools and VR services for this population. SSA also stated it will continue to pursue research in this area. We agree that the PROMISE initiative has the potential to provide useful information on whether the services and supports provided improve education and employment outcomes for transition-age youth; however, a final PROMISE evaluation is not expected until winter 2022. In addition, the PROMISE initiative was not designed to determine the extent to which youth on SSI are receiving transition services through schools or are otherwise connected to VR services. SSA also noted that it works with Education and other agencies through the Federal Partners in Transition (FPT) Workgroup to improve the provision of transition services to students with disabilities, and that the FPT has issued a blueprint of agencies' efforts. While the FPT can be a promising vehicle for helping connect youth on SSI to key transition services, as of September 2016, the FPT had not set timelines or milestones to achieve its broad goal to support positive outcomes for youth with disabilities, nor does it have a list of specific activities and tasks it will undertake. 
Therefore, we continue to believe additional collaboration by SSA with Education would be beneficial. SSA also noted several concerns related to complying with this recommendation, such as legal (privacy) concerns with data sharing, the capacity of state VR agencies to serve more individuals, and the receptivity of youth on SSI to receiving services. While we acknowledge that legal and privacy issues can present challenges to collaboration, we believe that SSA can take steps to explore actions it could take after considering such legal issues. We note that SSA has implemented approaches for sharing sensitive information under its Ticket to Work program, and prior surveys have yielded information to help understand how many youth on SSI have had an IEP at some point during school. Finally, while low state VR capacity or individual motivation can obstruct receipt of VR services, they should not prevent SSA from working with Education to determine the extent to which SSI youth are sufficiently informed of VR resources that are potentially available to them. SSA disagreed with our recommendation that it analyze options to improve communication about SSA-administered work incentives and the implications of work on SSI benefits. SSA stated that it already analyzed, and continuously monitors and solicits feedback on, options to improve communications. SSA also said it requires staff to meet with SSI recipients regularly and instructs staff to discuss relevant work incentives, and that there is no indication that staff are not providing youth with appropriate work incentive information. However, SSA did not explain how it knows or ensures that staff are providing this information. As noted in our report, staff in local SSA offices we visited told us that they do not regularly or consistently discuss work incentives with youth or families when, for example, such information is not specifically requested or staff lack time. 
Further, SSA policies do not instruct staff to consistently convey information to youth and families on how work may or may not affect age 18 redetermination. SSA said that its new brochure provides information on age-18 redeterminations, as well as work incentives and other resources. While we acknowledged that the new brochure is a positive development, we noted that it could contain additional relevant information, for example, on Medicaid eligibility. We also noted that written information may not be sufficient for conveying complex information. We agree with SSA that WIPA projects play an important role providing work incentives counseling to SSI youth; however, as we noted in our report, WIPA projects have limited capacity for serving youth along with other SSI recipients and disability insurance beneficiaries. Therefore, we continue to believe that there are opportunities for SSA to improve its communication with transition-age youth and their families, including through in-person or telephone interactions. Education agreed to cooperate with SSA efforts to determine the extent to which youth on SSI are being connected to VR agencies and services through schools and on options to increase connections to VR agencies and services. Education noted that privacy statutes might complicate or limit use of student data. While we acknowledge that privacy laws can present challenges in this area, we believe that Education can take steps to explore actions it could take after considering these laws. We are sending copies of this report to appropriate congressional committees, the Secretary of Education, the Acting Commissioner of the Social Security Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines 1) how the Social Security Administration (SSA) encourages employment for transition-age youth on Supplemental Security Income (SSI) as they move toward adulthood, and the effectiveness of these efforts, and 2) the extent to which SSA helps ensure transition-age youth on SSI receive vocational rehabilitation services. We focused our review on transition-age youth ages 14 to 17 because when SSI recipients turn 18, their eligibility for SSI is reassessed against adult criteria. Furthermore, our previous work has found that the transition from high school is an especially challenging time for individuals with disabilities because they are often no longer automatically entitled to transition services they received as students and must apply for and establish eligibility for services as adults, often from multiple agencies. In addition, research suggests that early interventions may improve later outcomes for youth with disabilities. To address both our research objectives, we reviewed relevant federal laws, regulations, policies, documents, and publications; interviewed SSA officials and staff and Education officials, advocates, and researchers; and conducted site visits to two states. 
In particular: To understand SSA's approaches to encourage employment among transition-age youth, we reviewed SSA's 2016 Annual Report of the Supplemental Security Income Program, most recent SSI Annual Statistical Reports, 2016 Red Book: A Summary Guide to Employment Supports for Persons with Disabilities Under the Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) Programs, various policies outlined in SSA's Program Operations Manual System (POMS), the agency's Fiscal Years 2014-2018 Agency Strategic Plan, SSA “Spotlights” (handouts related to applying for and receiving SSI), and information about SSA's Youth Transition Demonstration (YTD) and the Promoting the Readiness of Minors in Supplemental Security Income (PROMISE) initiative. To determine the extent to which youth benefit from SSA-administered work incentives or other employment supports, we reviewed or analyzed available data from SSA on the number of transition-age youth benefiting from various work incentives, as follows: For the Student Earned Income Exclusion (SEIE), the Plan to Achieve Self-Support (PASS), Impairment-Related Work Expenses (IRWE), and Blind Work Expenses (BWE), we analyzed data provided by SSA from its Disability Analysis File (DAF) for SSI recipients ages 14 to 17 for calendar years 2012, 2013, 2014, and 2015. A new version of the DAF is created each year in March and the 2015 file is the most current available version. According to SSA officials, SSA continues to update data as it obtains new information from recipients, so data from more recent years, while complete as of the time the data were obtained by SSA, do not reflect possible future changes to recipient files. Therefore, while we report data from each year, we focus on data from 2012 and 2013 which, according to SSA, provide a more complete picture of recipients with earned income and who benefited from work incentives and supports. 
We reviewed data from the Supplemental Security Record provided by SSA on the number of SSI recipients ages 18 and 19 who received continuing disability payments under Section 301 of the Social Security Disability Amendments of 1980. We also reviewed annual Work Incentives Planning and Assistance (WIPA) project reports by the Work Incentives Planning and Assistance National Training and Data Center and data provided by the Center. To assess the reliability of these data, we interviewed SSA officials, reviewed written answers to questions we provided to SSA, and reviewed available documentation. According to SSA officials, while producing the data we requested, the agency discovered a previously unknown error in its data transfer process. The error has resulted in a small proportion of cases in which earnings and work incentive data are not being correctly transferred from one data file to another. SSA officials estimated this issue affects approximately 0.1 percent of the overall population of transition-age youth ages 14 to 17, and approximately 5 percent of those with earnings. According to SSA, this suggests that the number of youth with earnings and the number benefiting from SSA-administered work incentives may be approximately 5 percent larger than reported. However, because SSA believes that the undercount is approximately the same for the number of transition-age youth with earnings and the number receiving SEIE, the percentage of these youth receiving SEIE would not change substantially. We do not believe this issue materially changes our findings—regarding low use of SEIE and low percentage of youth with earnings receiving SEIE—and we found the data to be sufficiently reliable for the purposes of our reporting objectives. 
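SSA's reasoning, that an undercount of roughly the same proportion in both the number of youth with earnings and the number receiving the SEIE leaves the receipt rate essentially unchanged, can be illustrated with a brief calculation. The counts below are hypothetical and chosen only for illustration; they are not SSA figures.

```python
# Illustration of why a proportional undercount does not change the
# SEIE receipt rate. All counts below are hypothetical, not SSA data.

reported_with_earnings = 20_000  # hypothetical count of youth with earnings
reported_with_seie = 8_000       # hypothetical count benefiting from SEIE
undercount = 0.05                # ~5 percent undercount, per SSA's estimate

# If both counts are about 5 percent larger than reported...
corrected_with_earnings = reported_with_earnings * (1 + undercount)
corrected_with_seie = reported_with_seie * (1 + undercount)

# ...the common factor cancels when the receipt rate is computed.
reported_rate = reported_with_seie / reported_with_earnings
corrected_rate = corrected_with_seie / corrected_with_earnings

print(f"reported SEIE receipt rate:  {reported_rate:.1%}")   # 40.0%
print(f"corrected SEIE receipt rate: {corrected_rate:.1%}")  # 40.0%
```

Because the numerator and denominator scale by the same factor, the percentage of youth with earnings who received the SEIE is unaffected, which is why the transfer error does not materially change the finding.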
We also conducted a literature review designed to identify research published over the last 10 years related to participation in, and outcomes related to, SSA-administered work incentives and demonstration projects that pertain to transition-age youth. Our search used broad key terms, including those related to relevant work incentives, YTD and PROMISE, and vocational rehabilitation for youth on SSI. The search identified 218 studies or articles. After reviewing abstracts of studies for key parameters, we determined that some studies were duplicative, some were outside the 10-year timeframe, and some were not published articles. This information, combined with a review of abstracts for key terms (such as confirming the studies discussed youth on SSI), enabled us to narrow the list to 111 results. We then conducted a more thorough review of study abstracts to determine whether the studies were not relevant, were suitable for background purposes (such as 40 articles about vocational rehabilitation), or were focused on SSI work incentives and demonstration projects. Ultimately, we identified 19 studies that focused on SSI work incentives and demonstration projects. Of these, 12 focused on YTD and 7 focused on incentives. However, 5 discussed work incentives in broad terms, for example describing that work incentives could help improve employment opportunities without discussing a specific example. Only 2 of the studies discussed specific work incentives—including a study over 10 years old that we included because it was the only study that discussed PASS—and none addressed the specific effects of these incentives on encouraging work. After determining the studies were methodologically sound, we incorporated key findings as appropriate in our report. 
To determine how youth on SSI and their families are informed about SSI program rules, work incentives, eligibility for VR, and other supports, we reviewed relevant procedures and information and notices provided by SSA to SSI recipients, their families, and their representative payees. We also interviewed staff in five SSA field offices in three states: two each in Florida and New York, and one in California. We selected these three states because each served a large population of SSI recipients and youth on SSI, and to achieve geographic variation. We further selected New York because it was participating in the ongoing PROMISE initiative. In each state, we interviewed a variety of SSA staff, including, for example, district managers, technical experts, claims representatives, Area Work Incentive Coordinators, and a Work Incentives Liaison. At the California SSA field office we also observed an age 18 redetermination interview. In Florida and New York, we also interviewed staff at VR offices, a combination of school and school district personnel in six school districts, and staff at a WIPA project in each selected state. In New York, we also interviewed state officials responsible for implementing PROMISE as well as several PROMISE service providers. We interviewed these individuals to gather information about the services they provide to transition-age youth on SSI and their opinions on SSA's effectiveness in encouraging work among this population and SSA efforts to connect these youth to VR services. The results of our interviews with SSA field office staff, VR officials and staff, school and school district personnel, and service providers are not generalizable, but provide insight into a variety of issues, including how SSA and its staff communicate with transition-age youth on work-related issues; transition services these youth receive; and barriers they face to employment. 
To determine how SSA helps to ensure transition-age youth on SSI receive VR services available to SSI recipients, we reviewed SSA policies and interviewed SSA officials, state Disability Determination Service officials in Florida and New York, and VR officials in five states: Florida, New Mexico, New York, Oklahoma, and Washington. Florida and New York were selected because these were the two states in which we conducted our site visits. New Mexico, Oklahoma, and Washington were selected based on variation in the size of the population of youth under 18 receiving SSI who are served by the VR agencies, the rate of successful employment outcomes for transition-age youth receiving VR services, and geography. The results of our interviews with state VR agency officials are not generalizable. We also collected data from state VR agencies in the five states on the number of transition-age youth on SSI with open VR service records in calendar year 2015, to analyze the extent to which youth were receiving VR services. The VR agencies from which we collected data did not all define “open service records” in exactly the same way. One state included only service records for which an individual plan for employment had been developed; the other states classified open service records to include individuals in other statuses, such as any individual who was beyond referral status. To assess the reliability of these data, we provided written questions to state VR officials and reviewed relevant documentation where available. We found the data were sufficiently reliable for our purposes. We compared the data provided by these state VR agencies to the number of transition-age youth in current pay status in 2015 according to data provided by SSA from the DAF. 
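The comparison described above reduces to a simple share calculation. A minimal sketch, using hypothetical counts rather than the actual figures collected from the five states:

```python
def percent_with_open_vr(open_vr_records: int, ssi_youth: int) -> float:
    """Share (in percent) of transition-age youth on SSI in a state who
    had an open VR service record.

    ssi_youth counts any youth who received SSI benefits for at least
    1 month in the state during the year, mirroring the counting
    approach described above.
    """
    return 100.0 * open_vr_records / ssi_youth

# Hypothetical state: 350 open VR service records among 40,000
# transition-age youth on SSI (illustrative numbers only).
share = percent_with_open_vr(350, 40_000)
print(f"{share:.2f} percent had an open VR service record")  # 0.88 percent
```

Shares below 1 percent, as reported for four of the five states, correspond to fewer than one open service record per 100 transition-age youth on SSI.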
While SSA officials told us the number of recipients is unlikely to change significantly in its data, as noted previously, the DAF is updated each year with new information obtained by SSA, and officials told us that the more recent years of the DAF are more likely to change than earlier years. However, given the small number of transition-age youth with open VR service records in comparison to the number receiving SSI, any changes to the number of SSI recipients in the DAF would not significantly change the percentage of transition-age youth with open VR service records. When calculating the number of transition-age youth on SSI in the state, we counted any such youth who received SSI benefits for at least 1 month in the state. Some of these recipients did not live in the state for the entire year. In addition, we reviewed data provided by SSA on reimbursements the agency made to state VR agencies for successful work outcomes for transition-age youth on SSI for 2012 through 2014. We found SSA's data to be sufficiently reliable for reporting on the extent to which transition-age youth benefited from SSA's work incentives. We conducted this performance audit from February 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michele Grgich (Assistant Director), David Barish (Analyst-in-Charge), Divya Bali, Susan Chin, and MacKenzie Cooper made key contributions to this report. In addition, key support was provided by Susan Aschoff, James Bennett, Dan Concepcion, Alex Galuten, Gloria Proa, Monica Savoy, Almeta Spencer, Barbara Steel-Lowney, and Nicholas Weeks.
|
The number of individuals with disabilities under age 18 receiving SSI benefits increased by about 44 percent from 2000 through 2016. Youth ages 14 to 17 with disabilities face many challenges achieving self-sufficiency as they transition to adulthood. GAO was asked to examine SSA's efforts to encourage employment for these transition-age youth. This report examines 1) SSA efforts to encourage employment for transition-age youth on SSI as they move toward adulthood and their effectiveness; and 2) the extent to which SSA helps ensure these youth receive vocational rehabilitation services. GAO analyzed SSA data on work incentives for calendar years 2012-2015, the most recent available, and data from five state VR agencies for calendar year 2015; reviewed relevant laws, policies, and research; and interviewed SSA staff and state VR officials in several states chosen for their SSI youth populations and VR outcomes. The Social Security Administration's (SSA) primary approach for encouraging employment for transition-age youth (ages 14 to 17) with disabilities who receive Supplemental Security Income (SSI) is work incentives that allow them to keep at least some of their SSI benefits and Medicaid coverage while they work. But few transition-age youth benefit from these incentives. SSI is a means-tested program that provides cash benefits to eligible low-income aged, blind, and disabled individuals. SSA administers several work incentives that allow SSI recipients to exclude some income and expenses when calculating SSI benefits. The work incentive targeted specifically to younger SSI recipients is the Student Earned Income Exclusion (SEIE), which allows income to be excluded from benefits calculations if a recipient is a student under age 22. However, less than 1.5 percent of all transition-age youth—and generally less than half of those with earnings—benefited from SEIE in 2012 through 2015. 
SSA does not analyze these data, and thus cannot determine why the majority of youth with earnings are not benefiting from SEIE even though they may be eligible. SSA data also show that almost no youth benefited from other incentives that allow them to exclude earnings used for specific purposes, such as the Impairment-Related Work Expenses incentive. The effectiveness of SSA-administered work incentives may be further limited because, according to SSA and other officials, youth and their families are often unaware of or do not understand them, and may fear that work will negatively affect their benefits or eligibility. SSA policy requires staff to provide accurate and meaningful information about relevant SSI policies to claimants and recipients. However, GAO found that SSA does not have sufficient procedures in place to ensure that information on work incentives and how work affects benefits and eligibility is consistently communicated to youth and their families. As a result, SSA may miss opportunities to promote work incentives and other supports, allay fears, and potentially reduce dependence of transition-age youth on SSI benefits. SSA does not have a systematic way to connect transition-age youth on SSI to state Vocational Rehabilitation (VR) agencies that provide training and employment services under the VR State Grants program administered by the Department of Education (Education). Although youth receiving SSI are generally presumed to be eligible for VR services, GAO found that less than 1 percent had an open VR service record in 2015 in four of the five states from which GAO collected VR data. Legislation in 1999 created the Ticket to Work and Self-Sufficiency program, which expanded the number and types of employment service providers for individuals with disabilities. However, SSA limited eligibility to recipients age 18 and older.
While transition-age youth receiving special education services can be connected to VR agencies through their schools, the extent to which this happens—and whether they are on SSI—is unknown because data to make such determinations are not systematically collected by SSA or schools. Federal standards for internal control call for agencies to use quality information to achieve their objectives. Without relevant data or additional options for connecting youth to VR services, SSA cannot ensure that transition-age youth on SSI are being connected to these services, which can help to prepare them for adulthood and the workforce. GAO recommends SSA 1) analyze why youth on SSI with earnings did not benefit from SEIE, 2) improve communication about work incentives and rules, 3) work with Education to determine how many youth on SSI are not connected to VR services, and 4) explore options to further connect them. SSA agreed in whole or in part with three recommendations. SSA disagreed that its communication on work incentives and rules needs to be improved, stating that field staff provide information to youth and that it has created new written materials. GAO maintains that SSA's communication could be improved, as presented in this report.
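The work-incentive mechanics discussed in this section can be sketched in simplified form. The sequence below (student exclusion applied first, then the general $20 and earned-income $65 exclusions, then excluding half the remainder) follows SSI's usual countable-income order, but the SEIE amounts are hypothetical placeholders rather than the statutory limits for any particular year, which SSA adjusts annually:

```python
# Simplified sketch of how SSI work incentives reduce countable earned income.
# The SEIE room passed in is a hypothetical placeholder; actual SEIE monthly
# and annual limits are set by SSA and change each year.

def countable_earned_income(monthly_earnings, seie_remaining=0.0):
    """Apply the student earned income exclusion (SEIE) first, then the
    general $20 and earned-income $65 exclusions, then exclude half of
    the remainder, per the usual SSI countable-income sequence."""
    earnings = max(monthly_earnings - seie_remaining, 0.0)
    earnings = max(earnings - 20.0 - 65.0, 0.0)  # general + earned exclusions
    return earnings / 2.0

# A student with $500 in monthly wages and SEIE room left counts $0 of it;
# the same wages with no SEIE room count (500 - 85) / 2 = $207.50.
print(countable_earned_income(500.0, seie_remaining=500.0))  # 0.0
print(countable_earned_income(500.0))                        # 207.5
```

This illustrates why the SEIE matters for the youth discussed above: with the exclusion, a working student's benefit may be unaffected, while without it the same earnings reduce the monthly payment.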
|
IPP project proposals are prepared and submitted to DOE by officials from the participating national laboratories. Each national laboratory provides technical and financial oversight for a set of projects. An Inter-Laboratory Board (ILAB) serves as the primary coordinating body for the national laboratories involved in the program. Partnerships are formed by the national laboratories between U.S. companies—known as industry partners—and institutes in Russia and other countries. IPP project proposals are reviewed by DOE’s national laboratories, the IPP program office, and other agencies before they are approved for funding. Because the national laboratory prepares the proposal, the laboratory project manager is responsible for including, among other things, a list of intended participants and for designating the WMD experience for each participant. The proposed participants are assigned to one of the following categories: Category I—direct experience in WMD research, development, design, production, or testing; Category II—indirect WMD experience in the underlying technologies of potential use in WMD; or Category III—no WMD-relevant experience. After the project passes an initial review within the national laboratory, it is analyzed by the ILAB and its technical committees, which then forward the proposal to DOE for review. DOE, in turn, consults with State and other agencies on policy, nonproliferation, and coordination considerations. DOE’s IPP program office is responsible for making final decisions on all projects. DOE requires that at least 65 percent of each IPP project’s funding be used as payments to individuals actually working on the project or to the participating institutes in payment for project-related supplies, equipment, and overhead. Because the IPP program is not administered through a government-to-government agreement, DOE distributes IPP funding through three tax-exempt entities to avoid paying foreign taxes. 
These organizations transfer funds directly to the personal bank accounts of IPP project participants. To receive payment, project participants must submit paperwork to these organizations indicating, among other things, whether they possess WMD experience. DOE has not accurately portrayed the IPP program’s progress in terms of the number of WMD scientists receiving DOE support and the number of long-term, private sector jobs created. Many of the scientists in Russia and other countries that DOE has paid through its IPP program did not claim to have WMD experience. Furthermore, DOE’s process for substantiating the weapons backgrounds of IPP project participants has several weaknesses. In addition, DOE has overstated the rate at which weapons scientists have been employed in long-term, private sector jobs because it does not independently verify the data it receives on the number of jobs created, relies on estimates of job creation, and includes a large number of part-time jobs in its count. Finally, DOE has not revised the IPP program’s performance metrics, which are based on a 1991 assessment of the threat posed by former Soviet weapons scientists. A major goal of the IPP program is to engage former Soviet weapons scientists, engineers, and technicians, and DOE claims to have supplemented the incomes of over 16,770 of these individuals since the program’s inception. However, this figure is misleading because it includes both personnel with WMD experience and those without any WMD experience, according to DOE officials. We reviewed the payment records of 97 IPP projects, for which information was available and complete, and found that 54 percent, or 3,472, of the 6,453 participants in these projects did not claim to possess any WMD experience in the declarations they made concerning their backgrounds.
We also found that DOE is not complying with a requirement of its own guidance for the IPP program—that is, each IPP project must have a minimum of 60 percent of the project’s participants possessing WMD-relevant experience prior to 1991 (i.e., Soviet-era WMD experience). We found that 60 percent, or 58, of the 97 projects for which we had complete payment information did not meet this requirement. Finally, many IPP project participants that DOE supports are too young to have contributed to the Soviet Union’s WMD programs. Officials at 10 of the 22 Russian and Ukrainian institutes we interviewed said that IPP program funds have allowed their institutes to recruit, hire, and retain younger scientists. We found that 15 percent, or 972, of the 6,453 participants in the payment records of the 97 projects we reviewed were born in 1970 or later and, therefore, were unlikely to have contributed to Soviet-era WMD efforts. While DOE guidance for the IPP program does not prohibit participation of younger scientists in IPP projects, DOE has not clearly stated the proliferation risk posed by younger scientists and the extent to which they should be a focus of the IPP program. In 1999, we recommended that, to the extent possible, DOE should obtain more accurate data on the number and background of scientists participating in IPP program projects. DOE told us that it has made improvements in this area, including developing a classification system for WMD experts, hiring a full-time employee responsible for reviewing the WMD experience and backgrounds of IPP project participants, and conducting annual project reviews. However, DOE relies heavily on the statements of WMD experience that IPP project participants declare when they submit paperwork to receive payment for work on IPP projects. We found that DOE lacks an adequate and well-documented process for evaluating, verifying, and monitoring the number and WMD experience level of individuals participating in IPP projects. 
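The 60-percent guidance requirement described above is a simple arithmetic test over a project's participant list. A minimal sketch of such a compliance check (the participant counts below are hypothetical, not GAO's actual project data):

```python
# Check DOE's IPP guidance rule: at least 60 percent of a project's
# participants must possess WMD-relevant experience prior to 1991.
# The participant lists below are hypothetical, for illustration only.

def meets_wmd_experience_rule(participants, threshold=0.60):
    """participants: list of booleans, True if the participant declared
    Soviet-era WMD-relevant experience. Returns True if the project
    meets the 60 percent threshold."""
    if not participants:
        return False
    share = sum(participants) / len(participants)
    return share >= threshold

# A hypothetical 10-person project with only 5 experienced participants
# falls short of the threshold (5/10 = 50 percent < 60 percent).
project = [True] * 5 + [False] * 5
print(meets_wmd_experience_rule(project))  # False
```

Applied across the 97 projects GAO reviewed, a check like this is what yielded the finding that 58 projects fell below the threshold.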
According to DOE officials, IPP projects are scrutinized carefully and subjected to at least 8, and in some cases 10, stages of review to assess the WMD experience of the project participants. However, we found limitations in DOE’s process. Specifically: DOE has limited information to verify the WMD experience of personnel proposed for IPP projects because government officials in Russia and other countries are reluctant to provide information about their countries’ scientists. For example, three national laboratory officials stated that it is illegal under Russian law to ask project participants about their backgrounds, and that instead they make judgments regarding the WMD experience of the project participants on the basis of their personal knowledge and anecdotal information. Some IPP project proposals may advance from the national laboratories to DOE with insufficient vetting or understanding of all personnel who are to be engaged on the project. Senior representatives at five national laboratories told us that they and their project managers do not have sufficient time or the means to verify the credentials of the proposed project participants. DOE does not have a well-documented process for verifying the WMD experience of IPP project participants, and, as a result, it is unclear whether DOE has a reliable sense of the proliferation risk these individuals pose. DOE’s review of the WMD credentials of proposed project participants relies heavily on the determinations of the IPP program office. We examined the proposal review files that the program maintains, and we were unable to find adequate documentation to substantiate the depth or effectiveness of the program office’s review of the WMD experience of proposed IPP project participants. 
Because it can be a matter of months or longer between development of an IPP project proposal and project implementation, the list of personnel who are actually paid on a project can differ substantially from the proposed list of scientists. For several IPP projects we reviewed, we did not find documentation in DOE’s project files indicating that the department was notified of the change of staff or had assessed the WMD backgrounds of the new project participants. For example, one IPP project—to discover new bioactive compounds in Russia and explore their commercial application—originally proposed 27 personnel and was funded at $1 million. However, 152 personnel were eventually paid under this project, and we did not find an updated list of the project personnel or any indication of a subsequent review by DOE in the IPP project files. The limited information DOE obtains about IPP project participants and the limitations in DOE’s review of the backgrounds of these individuals leave the IPP program vulnerable to potential misallocation of funds. We found several instances that call into question DOE’s ability to adequately evaluate IPP project participants’ backgrounds before the projects are approved and funded. For example, a National Renewable Energy Laboratory official told us he was confident that a Russian institute involved in a $250,000 IPP project to monitor microorganisms under environmental stress was supporting Soviet-era biological weapons scientists. However, during our visit to the institute in July 2007, the Russian project leader told us that neither he nor his institute was ever involved in biological weapons research. As a result of this meeting, DOE canceled this project on July 31, 2007. DOE’s cancellation letter stated that the information provided during our visit led to this action. 
Although a senior DOE official described commercialization as the “flagship” of the IPP program, we found that the program’s commercialization achievements have been overstated and are misleading. In its most recent annual report for the IPP program, DOE indicated that 50 projects had evolved to support 32 commercially successful activities. DOE reported that these 32 commercial successes had helped create or support 2,790 new private sector jobs for former weapons scientists in Russia and other countries. In reviewing these projects, we identified several factors that raise concerns over the validity of the IPP program’s reported commercial success and the numbers of scientists employed in private sector jobs. For example: The annual survey instrument that the U.S. Industry Coalition distributes to collect information on job creation and other commercial successes of IPP projects relies on “good-faith” responses from U.S. industry partners and foreign institutes, which are not audited by DOE or the U.S. Industry Coalition. In 9 of the 32 cases, we found that DOE based its job creation claims on estimates or other assumptions. For example, an official from a large U.S. company told us that the number of jobs it reported to have helped create was his own rough estimate. We could not substantiate many of the jobs reported to have been created in our interviews with the U.S. companies and officials at the Russian and Ukrainian institutes where these commercial activities were reportedly developed. For example, officials from a U.S. company we interviewed claimed that 250 jobs at two institutes in Russia had been created, on the basis of two separate IPP projects.
However, during our visit to the Scientific Research Institute of Measuring Systems in Russia to discuss one of these projects, we were told that the project is still under way, manufacturing of the product has not started, and none of the scientists have been reemployed in commercial production of the technology. The IPP program’s long-term performance targets do not accurately reflect the size and nature of the threat the program is intended to address because DOE is basing the program’s performance measures on outdated information. DOE has established two long-term performance targets for the IPP program—to engage 17,000 weapons scientists annually by 2015 in either IPP grants or in private sector jobs resulting from IPP projects, and to create private sector jobs for 11,000 weapons scientists by 2019. However, DOE bases these targets on a 16-year-old, 1991 National Academy of Sciences (NAS) assessment that had estimated approximately 60,000 at-risk WMD experts in Russia and other countries in the former Soviet Union. DOE officials acknowledged that the 1991 NAS study does not provide an accurate assessment of the current threat posed by WMD scientists in Russia and other countries. However, DOE has not formally updated its performance metrics for the IPP program and, in its fiscal year 2008 budget justification, continued to base its long-term program targets on the 1991 NAS estimate. Moreover, DOE’s current IPP program metrics do not provide sufficient information to the Congress on the program’s progress in reducing the threat posed by former Soviet WMD scientists. The total number of scientists supported by IPP grants or employed in private sector jobs conveys a level of program accomplishment, but these broad measures do not describe progress in redirecting WMD expertise within specific countries or at institutes of highest proliferation concern. 
DOE has recognized this weakness in the IPP program metrics and recently initiated the program’s first systematic analysis to understand the proliferation risk at individual institutes in the former Soviet Union. DOE officials briefed us on their efforts in September 2007, but told us that the analysis is still under way, and that it would not be completed until 2008. As a result, we were unable to evaluate the results of DOE’s assessment. DOE has yet to develop criteria for phasing out the IPP program in Russia and other countries of the former Soviet Union. Russian government officials, representatives of Russian and Ukrainian institutes, and individuals at U.S. companies raised questions about the continuing need for the IPP program, particularly in Russia, whose economy has improved in recent years. Meanwhile, DOE is departing from the program’s traditional focus on Russia and other former Soviet states to engage scientists in new countries, such as Iraq and Libya, and to fund projects that support GNEP. Officials from the Russian government, representatives of Russian and Ukrainian institutes, and individuals at U.S. companies raised questions about the continuing need for the IPP program. Specifically: A senior Russian Atomic Energy Agency official told us in July 2007 that the IPP program is no longer relevant because Russia’s economy is strong and its scientists no longer pose a proliferation risk. Officials from 10 of the 22 Russian and Ukrainian institutes we interviewed told us that they do not see scientists at their institutes as a proliferation risk. Russian and Ukrainian officials at 14 of the 22 institutes we visited told us that salaries are regularly being paid, funding from the government and other sources has increased, and there is little danger of scientists migrating to countries of concern. Representatives of 5 of the 14 U.S.
companies we interviewed told us that, due to Russia’s increased economic prosperity, the IPP program is no longer relevant as a nonproliferation program in that country. In economic terms, Russia has advanced significantly since the IPP program was created in 1994. Some of the measures of Russia’s economic strength include massive gold and currency reserves, a dramatic decrease in the amount of foreign debt, and rapid growth in gross domestic product. In addition, the president of Russia recently pledged to invest substantial resources in key industry sectors, including nuclear energy, nanotechnology, and aerospace technologies. Many Russian institutes involved in the IPP program could benefit from these initiatives, undercutting the need for future DOE support. In another sign of economic improvement, many of the institutes we visited in Russia and Ukraine appeared to be in better physical condition and more financially stable, especially when compared with their condition during our previous review of the IPP program. In particular, at one institute in Russia—where during our 1998 visit we observed a deteriorated infrastructure and facilities—we toured a newly refurbished building that featured state-of-the-art equipment. Russian officials told us that the overall financial condition of the institute has improved markedly because of increased funding from the government as well as funds from DOE. In addition, one institute we visited in Ukraine had recently undergone a $500,000 renovation, complete with a marble foyer and a collection of fine art. DOE has not developed an exit strategy for the IPP program, and it is unclear when the department expects the program to have completed its mission. DOE officials told us in September 2007 that they do not believe that the program needs an exit strategy. However, they acknowledged that the program’s long-term goal of employing 17,000 WMD scientists in Russia and other countries does not represent an exit strategy. 
DOE has not developed criteria to determine when scientists, institutes, or countries should be “graduated” from the IPP program, and DOE officials believe that there is a continued need to engage Russian scientists. In contrast, State has assessed institutes and developed a strategy—using a range of factors, such as the institute’s ability to pay salaries regularly and to attract external funding—to graduate certain institutes from its Science Centers program. We found that DOE is currently supporting 35 IPP projects at 17 Russian and Ukrainian institutes that State considers to have already graduated from its Science Centers program and, therefore, no longer in need of U.S. assistance. DOE recently expanded its scientist assistance efforts on two fronts: DOE began providing assistance to scientists in Iraq and Libya, and, through the IPP program, is working to develop IPP projects that support GNEP. These new directions represent a significant departure from the IPP program’s traditional focus on the former Soviet Union. According to a senior DOE official, the expansion of the program’s scope was undertaken as a way to maintain its relevance as a nonproliferation program. DOE has expanded the IPP program’s efforts into these new areas without a clear mandate from the Congress and has suspended parts of its IPP program guidance for implementing projects in these new areas. Specifically: Although DOE briefed the Congress on its plans, DOE officials told us that they began efforts in Iraq and Libya without explicit congressional authorization to expand the program outside of the former Soviet Union. In contrast, other U.S. nonproliferation programs, such as the Department of Defense’s Cooperative Threat Reduction program, sought and received explicit congressional authorization before expanding their activities outside of the former Soviet Union.
In Libya, DOE is deviating from IPP program guidance and its standard practice of limiting the amount of IPP program funds spent at DOE’s national laboratories for project oversight to not more than 35 percent of total expenditures. Regarding efforts to support GNEP, DOE has suspended part of the IPP program’s guidance that requires a U.S. industry partner’s participation, which is intended to ensure IPP projects’ commercial potential. Since fiscal year 1994, DOE has spent about $309 million to implement the IPP program but has annually carried over large balances of unspent program funds. Specifically, in every fiscal year from 1998 through 2007, DOE carried over unspent funds in excess of the amount that the Congress provided for the program in those fiscal years. For example, as of September 2007, DOE had carried over about $30 million in unspent funds—$2 million more than the $28 million that the Congress had appropriated for the IPP program in fiscal year 2007. In fact, for 3 fiscal years—2003 through 2005—the amount of unspent funds was more than double the amount that the Congress appropriated for the program in those fiscal years, although the total amount of unspent funds has been declining since its peak in 2003. Two main factors have contributed to DOE’s large and persistent carryover of unspent funds: the lengthy and multilayered review and approval processes DOE uses to pay IPP project participants for their work, and long delays in implementing some IPP projects. DOE identified three distinct payment processes that it uses to transfer funds to individual scientists’ bank accounts in Russia and other countries. These processes involve up to seven internal DOE offices and external organizations that play a variety of roles, including reviewing project deliverables, approving funds, and processing invoices. 
DOE officials told us that these processes were introduced to ensure the program’s fiscal integrity. They acknowledged the enormity of the problem that the lag time between the allocation of funds, placement of contracts, and payment for deliverables creates for the IPP program, and said they are taking steps to streamline their payment processes. In addition, Russian and Ukrainian scientists at 9 of the 22 institutes we interviewed told us that they experienced delays in payments ranging from 3 months to 1 year. Delays in implementing some IPP projects also contribute to DOE’s large and persistent carryover of unspent funds. According to officials from U.S. industry partners, national laboratories, and Russian and Ukrainian institutes, some IPP projects experience long implementation delays. As a result, project funds often remain as unspent balances until problems can be resolved. These problems include implementation issues due to administrative problems, the withdrawal or bankruptcy of the U.S. industry partner, and turnover in key project participants. In October 2006, in part to address concerns about unspent program funds, DOE began implementing its Expertise Accountability Tool, a new project and information management system designed to better manage IPP projects’ contracts and finances. According to DOE officials, the system will allow instant sharing of IPP project data between DOE and participating national laboratories. DOE officials believe that the system will allow the IPP program office to better monitor the progress of IPP projects at the national laboratories, including reviews of IPP project participants’ WMD backgrounds and tracking unspent program funds. Mr. Chairman, this concludes my prepared statement. We would be happy to respond to any questions you or the other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841 or at [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Gene Aloise (Director), Glen Levis (Assistant Director), R. Stockton Butler, David Fox, and William Hoehn made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
During the decades before its dissolution, the Soviet Union produced a cadre of scientists and engineers whose knowledge and expertise could be invaluable to countries or terrorist groups trying to develop weapons of mass destruction (WMD). After the Soviet Union's collapse in 1991, many of these scientists suffered significant cuts in pay or lost their government-supported work. To address concerns about unemployed or underemployed Soviet-era weapons scientists, the Department of Energy (DOE) established the Initiatives for Proliferation Prevention (IPP) program in 1994 to engage former Soviet weapons scientists in nonmilitary work in the short term and create private sector jobs for these scientists in the long term. GAO was asked to assess (1) DOE's reported accomplishments for the IPP program, (2) DOE's exit strategy for the program, and (3) the extent to which the program has experienced annual carryovers of unspent funds and the reasons for any such carryovers. In December 2007, GAO issued a report—Nuclear Nonproliferation: DOE's Program to Assist Weapons Scientists in Russia and Other Countries Needs to Be Reassessed (GAO-08-189)—that addressed these matters. To carry out its work, GAO, among other things, analyzed DOE policies, plans, and budgets and interviewed key program officials and representatives from 22 Russian and Ukrainian institutes. DOE has overstated accomplishments on the number of scientists receiving DOE support and the number of long-term, private sector jobs created. First, although DOE claims to have engaged over 16,770 scientists in Russia and other countries, this total includes both scientists with and without weapons-related experience. GAO's analysis of 97 IPP projects involving about 6,450 scientists showed that more than half did not claim to possess any weapons-related experience.
Furthermore, officials from 10 Russian and Ukrainian weapons institutes told GAO that the IPP program helps them attract, recruit, and retain younger scientists and contributes to the continued operation of their facilities. This is contrary to the original intent of the program, which was to reduce the proliferation risk posed by Soviet-era weapons scientists. Second, although DOE asserts that the IPP program helped create 2,790 long-term, private sector jobs for former weapons scientists, the credibility of this number is uncertain because DOE relies on "good-faith" reporting from U.S. industry partners and foreign institutes and does not independently verify the number of jobs reported to have been created. DOE has not developed an exit strategy for the IPP program. Officials from the Russian government, Russian and Ukrainian institutes, and U.S. companies raised questions about the continuing need for the program. Importantly, a senior Russian Atomic Energy Agency official told GAO that the IPP program is no longer relevant because Russia's economy is strong and its scientists no longer pose a proliferation risk. DOE has not developed criteria to determine when scientists, institutes, or countries should "graduate" from the program. In contrast, the Department of State, which supports a similar program to assist Soviet-era weapons scientists, has assessed participating institutes and developed a strategy to graduate certain institutes from its program. Even so, GAO found that DOE is currently supporting 35 IPP projects at 17 Russian and Ukrainian institutes where State no longer funds projects because it considers them to have graduated from its program. In addition, DOE has recently expanded the program to new areas. Specifically, DOE began providing assistance to scientists in Iraq and Libya and, through the IPP program, is working to develop projects that support a DOE-led international effort to expand the use of civilian nuclear power.
In every fiscal year since 1998, DOE has carried over unspent funds in excess of the amount that the Congress provided for the program. Two main factors have contributed to this recurring problem--lengthy review and approval processes for paying former Soviet weapons scientists and delays in implementing some IPP projects. In its recent report, GAO recommended, among other things, that DOE conduct a fundamental reassessment of the IPP program, including the development of a prioritization plan and exit strategy. DOE generally concurred with GAO's findings but does not believe that the IPP program needs to be reassessed.
Although depository institutions may have state or federal charters, all depository institutions (including banks, savings and loans, and thrifts) that have federal deposit insurance are supervised by a federal banking regulator. The federal banking regulators—which generally may issue regulations and take enforcement actions against institutions in their jurisdiction—are identified in table 1. These regulators issued the final regulations to implement the Basel III-based capital standards in the United States. Holding companies that own or control a bank or thrift are subject to supervision by the Federal Reserve. The Bank Holding Company Act of 1956 and the Home Owners’ Loan Act of 1933 set forth the regulatory frameworks for bank holding companies and savings and loan holding companies, respectively. The Dodd-Frank Act made the Federal Reserve the regulator of savings and loan holding companies. Basel III is part of the Basel Committee’s continuous effort to enhance the banking regulatory framework and builds on the previous accords (Basel I, II, and II.5). Basel I. Adopted in 1988, the Basel Capital Accord (Basel I) aimed to measure capital adequacy (that is, whether a bank’s capital is sufficient to support its activities) and to establish minimum capital standards for internationally active banks. It consists of three basic elements: target minimum ratios of 8 percent for total risk-based capital and 4 percent for tier 1 risk-based capital, a definition of the capital instruments that constitute the numerator of the capital-to-risk-weighted-assets ratio, and a system of risk weights for calculating the denominator of the ratio. A bank’s risk-based capital ratio is the ratio of its regulatory capital to risk-weighted assets: regulatory capital is the numerator and risk-weighted assets constitute the denominator.
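The ratio definitions above lend themselves to a short illustration. The sketch below (Python, with hypothetical balance sheet figures; the function names are ours, not the regulators') checks a bank's capital against the Basel I targets of 8 percent total and 4 percent tier 1 risk-based capital:

```python
# Illustrative sketch (not GAO's or the regulators' methodology): computing
# risk-based capital ratios and checking them against the Basel I minimums.

def risk_based_ratios(total_capital, tier1_capital, risk_weighted_assets):
    """Return (total ratio, tier 1 ratio) as percentages of risk-weighted assets."""
    total_ratio = 100.0 * total_capital / risk_weighted_assets
    tier1_ratio = 100.0 * tier1_capital / risk_weighted_assets
    return total_ratio, tier1_ratio

def meets_basel_i_minimums(total_capital, tier1_capital, risk_weighted_assets):
    """Basel I targets: total risk-based capital >= 8% and tier 1 >= 4% of RWA."""
    total_ratio, tier1_ratio = risk_based_ratios(
        total_capital, tier1_capital, risk_weighted_assets)
    return total_ratio >= 8.0 and tier1_ratio >= 4.0

# Hypothetical bank: $9 of total capital ($5 of it tier 1) per $100 of RWA.
print(meets_basel_i_minimums(9.0, 5.0, 100.0))   # True: meets both minimums
```

Note that both minimums must hold at once; a bank at 9 percent total but only 3 percent tier 1 capital would still fail the check.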
In calculating a total risk-weighted asset figure, the value of each of a bank’s assets is multiplied by a percentage reflecting its risk level, and these adjusted amounts are summed across all assets. At a high level, the standardized approach to calculating risk-weighted assets involves multiplying the amount of the asset or exposure by the standardized risk weight (percent) associated with that type of asset or exposure. For example, a $1 million mortgage with a 50 percent risk weighting would generate a risk-weighted asset of $500,000. If a bank were trying to hold capital equal to 10 percent of its risk-weighted assets, then it would need $50,000 of capital to hold against this mortgage. Bank capital rules prescribe the standardized risk weights, which reflect regulatory judgment about the riskiness of an asset type or exposure. Holding equity (the numerator) constant, a higher standardized risk weight results in a higher risk-weighted asset amount, which gives rise to a lower risk-based capital ratio. Over time, bank regulators realized that Basel I was not providing a sufficiently accurate measure of capital adequacy because of the lack of risk sensitivity in its credit risk weightings, financial market innovations such as securitization and credit derivatives, and advancements in banks’ risk measurement and risk management techniques. The accord was revised and enhanced multiple times after 1988 because of these shortcomings. For example, Basel I was amended in 1996 to take explicit account of market risk in trading accounts. Basel II. Adopted in June 2004, Basel II aimed to better align minimum capital standards with enhanced risk measurement and encourage banks to develop a more disciplined approach to risk management. It consists of three pillars: minimum capital requirements, a supervisory review of an institution’s internal assessment process and capital adequacy, and use of disclosures to strengthen market discipline as a complement to supervisory efforts.
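The mortgage arithmetic in the standardized-approach example above can be reproduced in a few lines. This is a simplified Python sketch of the risk-weighting calculation, not regulatory code:

```python
# Simplified sketch of the standardized approach: risk-weighted assets (RWA)
# are the sum of each exposure amount times its standardized risk weight.

def risk_weighted_assets(exposures):
    """exposures: list of (dollar amount, risk weight) pairs."""
    return sum(amount * weight for amount, weight in exposures)

# The example from the text: a $1 million mortgage with a 50 percent
# risk weight generates $500,000 of risk-weighted assets.
rwa = risk_weighted_assets([(1_000_000, 0.50)])
print(rwa)  # 500000.0

# Holding capital equal to 10 percent of risk-weighted assets means
# $50,000 of capital held against this mortgage.
required_capital = 0.10 * rwa
print(required_capital)  # 50000.0
```

The same function illustrates why risk weights matter: adding a second exposure with a 100 percent weight raises RWA (and required capital) dollar for dollar, while a zero-weighted exposure adds nothing.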
Basel II includes a standardized approach (which does not rely on banks’ internal models) and advanced approaches for measuring credit and operational risks. The advanced approaches generally are applied by large internationally active banks. The advanced approaches for credit risk and operational risk use parameters from a bank’s internal systems as inputs into a formula that supervisors developed for calculating risk-based capital ratios. In addition, banks with significant trading assets (which banks use to hedge risks or speculate on price changes in markets for themselves or their customers) must calculate capital for market risk using internal models. The advanced approaches allow some bank holding companies to hold less capital than they would under Basel I. Large internationally active U.S. bank holding companies have been implementing the first phase—known as the parallel run—of the Basel II advanced approaches. As of February 21, 2014, eight advanced approaches bank holding companies had exited their parallel run, and the Federal Reserve and OCC jointly permitted them to use the advanced approaches to determine their risk-based capital requirements subject to the Collins Amendment floor. Banking organizations in most other industrialized countries are subject to the Basel II capital standards. In 2009, the Basel Committee issued Basel II.5 to enhance the measurements of risks related to securitization and trading book exposures. Basel III. Adopted in 2010 and revised in 2011 and 2013, Basel III aims to improve the banking sector’s ability to absorb shocks arising from financial and economic stress, whatever the source; improve risk management and governance; and strengthen banks’ transparency and disclosures. The reforms address bank-level, or micro-prudential, regulation to enhance the resilience of individual banking institutions in periods of stress and systemwide risks that can build up across the banking sector and the amplification of these risks over time.
Basel III significantly changes the risk-based capital standards for banks and bank holding companies and introduces new leverage and liquidity standards. Liquidity is a measure of the ability and ease with which assets can be converted to cash. More specifically, the new standards include a new minimum common equity tier 1 capital requirement of 4.5 percent of risk-weighted assets (the capital needed to be regarded as a viable concern); a new capital conservation buffer of 2.5 percent of risk-weighted assets, composed of common equity tier 1 capital, to provide a cushion to help companies remain above the 4.5 percent minimum during financial shocks and to avoid restrictions on distributions and discretionary bonus payments; and more stringent risk weights on certain types of risky assets, particularly securitizations and derivatives. Basel III defines capital more narrowly than the previous accords. The new common equity tier 1 capital measure is limited mainly to common equity, because common equity generally is the most loss-absorbing instrument during a crisis. Basel III also includes a leverage ratio and two liquidity ratios (see table 2). In 2013, federal banking regulators adopted regulations to implement Basel III’s minimum regulatory capital ratios, capital conservation buffer ratio, countercyclical capital buffer, and supplementary leverage ratio (as applicable to advanced approaches banking organizations). These regulations apply to bank holding companies with assets of $500 million or more and all non-exempt savings and loan holding companies; national banks and federally chartered savings associations; and state-chartered banks (both non-member and member banks) and state savings associations. Certain savings and loan association holding companies with significant commercial or insurance underwriting activities or assets currently are exempt from the requirements of the U.S. Basel III capital regulation. The U.S.
Basel III capital regulation seeks to improve the overall resilience of the banking system by imposing more stringent regulatory capital and related requirements on banking organizations. While the Basel III framework was primarily directed at internationally active banks, federal banking regulators generally apply the U.S. Basel III capital regulations to all banking organizations—maintaining that this approach will lead to a more stable and resilient system for banking organizations of all sizes and risk profiles. As shown in table 2, all banking organizations are subject to the standardized approach and minimum regulatory capital requirements, but advanced approaches banking organizations are also subject to additional requirements. Advanced approaches banks are defined as those with consolidated total assets of $250 billion or more or with consolidated total on-balance-sheet foreign exposure of $10 billion or more. The U.S. Basel III regulations generally provide until 2019 to phase in certain provisions of the regulatory capital requirements. In addition to meeting the minimum regulatory capital ratios, banking organizations must meet the capital conservation buffer to avoid restrictions on capital distributions and discretionary bonus payments to executive officers. Advanced approaches banking organizations are subject to the countercyclical capital buffer, supplementary leverage ratio, and liquidity coverage ratio. Moreover, under section 171 of the Dodd-Frank Act (the Collins Amendment, discussed below), advanced approaches banking organizations will be required to calculate their risk-based capital ratios using both the standardized and advanced approaches methodologies and to use the lower of the two ratios to determine compliance with minimum capital requirements. In response to public comments about the potential implementation burden on small banking organizations, the federal banking regulators made several revisions to the proposed U.S.
Basel III regulations to help minimize the regulatory burden on such organizations. These revisions include retaining the existing risk weights for residential mortgages; giving all standardized approach banking organizations the option to elect to retain the current treatment of accumulated other comprehensive income in their regulatory capital; and grandfathering the regulatory capital treatment of trust preferred securities issued before May 19, 2010, by banking organizations with less than $15 billion in assets as of 2009. In addition to the Basel III framework, U.S. banking regulators have implemented several other major financial reforms and supervisory practices covering banking organizations. They include the following: Dodd-Frank stress tests. Under the Dodd-Frank Act, banking organizations with consolidated assets of more than $10 billion must conduct and report on an annual company-run stress test. Nonbank financial companies supervised by the Federal Reserve and bank holding companies with more than $50 billion in consolidated assets must also conduct semiannual stress tests. The act requires that the banking agencies issue regulations that establish methodologies for the conduct of the company-run stress tests that provide for at least three different sets of economic conditions, establish the form and content of the report that the companies must submit to the regulators, and require companies to publish a summary of the results of the required stress tests. In October 2012, the Federal Reserve, FDIC, and OCC issued final rules implementing the company-run stress test requirements. Community banks with less than $10 billion in total assets are not required or expected to conduct the types of stress testing specifically articulated in the regulations directed toward larger organizations.
For bank holding companies with $50 billion or more in assets and nonbank financial companies designated for supervision by the Federal Reserve, the Federal Reserve must conduct an annual supervisory stress test to evaluate whether the company has sufficient capital to absorb losses as a result of adverse economic conditions. The Federal Reserve must publish a summary of the supervisory stress test results. Capital planning. Pursuant to the Federal Reserve’s capital plan rule and related supervisory process, the Federal Reserve assesses the internal capital planning process of each bank holding company with total consolidated assets of $50 billion or more and its ability to maintain sufficient capital to continue its operations under stressful conditions. Under the capital plan rule, a bank holding company must submit an annual capital plan, covering any planned capital distributions, in which it demonstrates that it can maintain capital ratios above minimum regulatory requirements and a tier 1 common equity ratio greater than 5 percent under stressed economic and financial market conditions. The capital plan must include detailed descriptions of the company’s internal processes for assessing capital adequacy; the policies governing capital actions such as common stock issuance, dividends, and share repurchases; and all planned capital actions over a 9-quarter planning horizon. If the Federal Reserve objects to its capital plan, a bank holding company may not make any capital distributions unless approved in writing by the Federal Reserve. Activity restrictions. The final rule implementing Section 619 of the Dodd-Frank Act, commonly known as the Volcker rule, was adopted by the Federal Reserve, FDIC, OCC, and the Securities and Exchange Commission on December 10, 2013.
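The asset-size thresholds discussed above can be summarized in a short sketch. The function below is our own simplification in Python; the dollar thresholds come from the text, but the actual rules contain many qualifications (charter type, nonbank designations, exemptions) not captured here:

```python
# Illustrative mapping from a banking organization's size to the
# requirements discussed above (amounts in billions of dollars).
# This is a rough simplification, not a statement of the regulations.

def applicable_requirements(total_assets, foreign_exposure=0.0):
    """Return the size-triggered requirements for a hypothetical organization."""
    reqs = []
    if total_assets > 10:
        reqs.append("company-run stress test")
    if total_assets >= 50:
        reqs.append("supervisory stress test and capital plan review")
    if total_assets >= 250 or foreign_exposure >= 10:
        reqs.append("advanced approaches requirements")
    return reqs

print(applicable_requirements(5))     # []  -- community bank sized
print(applicable_requirements(300))   # all three sets of requirements
```

Note that the advanced approaches trigger is disjunctive: an organization well under $250 billion in assets but with $10 billion or more in on-balance-sheet foreign exposure would still fall under the advanced approaches rule.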
The Volcker final rule prohibits insured depository institutions and companies affiliated with insured depository institutions from engaging in short-term proprietary trading of certain securities, derivatives, commodity futures, and options on those instruments for their own accounts. The final rule also imposes limits on banking entities’ investments in hedge funds or private equity funds, subject to certain exceptions. Minimum capital requirements. Section 171(b) of the Dodd-Frank Act (the Collins Amendment) requires federal banking agencies to apply to U.S. depository institution holding companies and systemically significant nonbank financial companies, among others, the minimum risk-based and leverage capital requirements that apply to insured depository institutions. The minimum requirements cannot be quantitatively lower than the capital requirements that were in effect when the Dodd-Frank Act was enacted. The vast majority of banks and bank holding companies likely would already be able to meet the new minimum capital requirements and capital conservation buffer at the fully phased-in levels required by 2019. We estimated that as of the first quarter of 2014, more than 90 percent of bank holding companies currently meet the new requirements and that those with insufficient capital would need to raise about $4 billion to $5 billion in capital to cover the capital shortfall and meet the requirements. Our analysis also suggests that most of the bank holding companies and depository institutions that did not hold sufficient capital to meet the Basel III minimums, plus the capital conservation buffer, are relatively small, with assets of less than $1 billion. The empirical findings from our literature review and analysis of the capital shortfall suggest that the higher capital requirements likely will have a modest effect on the cost and availability of credit. Some market participants we interviewed (eight community banks and 10 G-SIBs) generally expected the U.S.
capital requirements to increase compliance costs but have a limited effect on the cost and availability of credit. Our analysis suggests that as of the first quarter of 2014, the majority of bank holding companies and depository institutions met U.S. Basel III minimum capital ratios, including the capital conservation buffer, at the fully phased-in levels required by 2019. Furthermore, the total amount of capital these institutions would need to meet Basel III ratios—the capital shortfall—is relatively modest. To estimate the extent to which bank holding companies and depository institutions already met the fully phased-in Basel III minimum capital ratios, we analyzed balance sheet data for bank holding companies and depository institutions for the first quarter of 2014. We estimated common equity tier 1 capital, tier 1 capital, total capital, and risk-weighted assets using calculations consistent with the regulations federal banking regulators adopted in 2013, which changed the formulas for calculating these amounts. In addition, we report the estimates separately for the 16 advanced approaches bank holding companies (including their bank subsidiaries), which accounted for nearly 75 percent of the total assets held by top-tier U.S. bank holding companies but less than 2 percent of the number of all such holding companies (as of the first quarter of 2014). A majority of bank holding companies and depository institutions that we analyzed currently would meet each of the separate Basel III minimum capital requirements if the regulations took effect immediately without a phase-in period. As shown in table 3, our analysis suggests that 953 of the 1,040 bank holding companies (over 92 percent) currently hold sufficient capital to meet the new minimum common equity tier 1 capital ratio, plus the capital conservation buffer.
Similarly, 6,687 of the 6,794 depository institutions (about 98 percent) currently hold sufficient capital to meet the new minimum common equity tier 1 capital ratio, plus the capital conservation buffer. Our analysis also suggests that most of the bank holding companies and depository institutions that did not hold sufficient capital to meet the Basel III minimums, plus the capital conservation buffer, are relatively small, with assets of less than $1 billion. For a more detailed analysis of the minimum Basel III capital ratios presented throughout this section, see appendix II. The capital shortfalls for individual bank holding companies and depository institutions that did not meet the Basel III minimum capital ratios appeared to be relatively modest in some cases but may be significant in others. For example, as shown in table 4, our analysis suggests that most bank holding companies that did not meet the new minimum common equity tier 1 capital ratio, plus the capital conservation buffer, would need to raise no more than $0.01 billion ($10 million) in additional common equity tier 1 capital—about 1.65 percent of assets—to meet the new requirements. However, at least one of these bank holding companies may need to raise at least $1.12 billion—about 3.39 percent of its assets. Similarly, most depository institutions that did not meet the new minimum common equity tier 1 capital ratio, plus the capital conservation buffer, would need to raise less than $0.01 billion in additional common equity tier 1 capital, or about 1.52 percent of total assets, to meet the new requirements. However, some of these depository institutions would need to raise capital in excess of 2.4 percent of their assets.
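The shortfall figures above follow from a simple calculation: the capital needed to lift an institution's common equity tier 1 (CET1) ratio to the 7.0 percent minimum-plus-buffer level (4.5 percent minimum plus the 2.5 percent conservation buffer). A hedged Python sketch with hypothetical balance sheet figures:

```python
# Capital shortfall: the additional common equity tier 1 (CET1) capital
# needed to reach a target ratio of risk-weighted assets; zero if the
# institution already meets the target. All figures below are hypothetical.

def cet1_shortfall(cet1_capital, risk_weighted_assets, target_ratio=0.07):
    """Dollar shortfall against the 7.0 percent minimum-plus-buffer level."""
    required = target_ratio * risk_weighted_assets
    return max(0.0, required - cet1_capital)

# An institution holding $60 million of CET1 against $1 billion of
# risk-weighted assets falls $10 million short of the 7.0 percent level.
print(cet1_shortfall(60e6, 1e9))   # 10000000.0
# An institution already at 8 percent has no shortfall.
print(cet1_shortfall(80e6, 1e9))   # 0.0
```

Summing this quantity across institutions gives a system-wide shortfall of the kind reported in table 5; the sketch ignores the phase-in schedule and the other ratios (tier 1, total capital) that must be met simultaneously.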
Finally, as shown in table 5, our estimates of the total capital shortfall for all bank holding companies and depository institutions are relatively modest. For example, bank holding companies that did not meet the Basel III minimum common equity tier 1 capital ratio, plus the capital conservation buffer, would need to raise about $4.73 billion in common equity tier 1 capital to eliminate the capital shortfall. This amount equals about 0.03 percent of the combined total assets of all the bank holding companies we analyzed. Similarly, depository institutions that did not meet the minimum common equity tier 1 capital ratio, plus the capital conservation buffer, would need to raise about $0.76 billion to eliminate the capital shortfall. This amount equals about 0.01 percent of the combined total assets of all depository institutions. Our estimates of the numbers of bank holding companies and depository institutions with capital ratios exceeding Basel III minimums and the capital shortfall are subject to limitations. Most importantly, the amounts of some balance sheet and income statement items used to calculate the amount of capital or the amount of risk-weighted assets cannot be observed for bank holding companies or depository institutions not subject to the advanced approaches rule. We made assumptions about these unobservable amounts that are similar to assumptions the Federal Reserve made for a comparable analysis. However, we cannot assess the extent to which our estimates overstate or understate the numbers of bank holding companies and depository institutions that already meet Basel III capital requirements or the capital shortfall. In addition, some bank holding companies and depository institutions may prefer to maintain a capital buffer in excess of the required minimum levels to satisfy investors or other market participants.
Thus, our estimates may understate the number of bank holding companies and depository institutions that would need to raise capital and also may understate the amount of capital they would need to raise. In addition, our analysis suggests that raising capital to cover the capital shortfall would have a modest effect on bank holding company and depository institution funding costs. Funding costs are determined by the prices of equity and debt financing sources and the amounts of each used. Because interest payments on debt are tax-deductible, a more leveraged capital structure reduces corporate taxes, lowering funding costs. Thus, an increase in the required amount of equity capital would increase a bank’s cost of capital. The increased funding cost associated with a 1 percentage point increase in the capital ratio of a bank holding company or depository institution is approximately equal to the difference between the return on equity and the after-tax interest rate on debt, all else being equal. For bank holding companies and depository institutions that as of the first quarter of 2014 did not hold sufficient capital to meet the fully phased-in U.S. Basel III capital requirements, our estimates of the increase in funding cost associated with raising capital up to the minimum requirements are relatively modest. For example, we estimated the amount of common equity tier 1 capital that the median capital-deficient bank holding company would need to raise to meet the minimum common equity tier 1 capital requirement, plus capital conservation buffer, to be about $10 million, or 1.65 percent of its total assets.
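The funding-cost logic in this paragraph can be sketched as follows. The rates below are illustrative assumptions of ours, not the first-quarter 2014 market data used in the report:

```python
# Sketch: swapping debt for equity raises funding costs by roughly the
# amount of new equity times the gap between the return on equity and the
# after-tax interest rate on debt (interest is tax-deductible, equity is not).
# All rates here are illustrative assumptions.

def funding_cost_increase_pp(capital_raised_share, return_on_equity,
                             debt_rate, tax_rate=0.35):
    """Increase in funding costs, in percentage points of total assets.

    capital_raised_share: new equity raised, as a share of total assets.
    """
    after_tax_debt_rate = debt_rate * (1 - tax_rate)
    return 100 * capital_raised_share * (return_on_equity - after_tax_debt_rate)

# Raising equity equal to 1 percent of assets, with an assumed 10 percent
# return on equity and a 4 percent pre-tax debt rate:
delta = funding_cost_increase_pp(0.01, 0.10, 0.04)
print(round(delta, 3))   # roughly 0.07 percentage points
```

With these assumed rates, the sketch lands inside the 0.07 to 0.09 percentage point range per 1 percentage point of capital that the report estimates; different equity returns, debt rates, or tax rates shift the result accordingly.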
As shown in table 6, the increase in funding cost associated with raising this amount of common equity tier 1 capital is about 0.13 percentage points, or about $13,000 for a bank holding company with $10 billion in assets. Similarly, the increase in funding cost associated with raising the median amount of common equity tier 1 capital for a capital-deficient depository institution is about 0.11 percentage points, or about $11,000 for a depository institution with $10 billion in assets (based on the amount of such capital the median capital-deficient depository institution would need to raise to meet the minimum requirements). Our estimates of the increase in funding costs associated with raising capital are subject to several limitations. First, as discussed above, our estimates of the capital shortfall are subject to limitations and may either overstate or understate the amount of capital that bank holding companies and depository institutions raise in response to the Basel III requirements. Because the increase in funding costs is related to the size of the capital shortfall, our estimates of the increase in funding costs also may be either overstated or understated. In particular, some bank holding companies or depository institutions may maintain capital in excess of the minimum requirements (a capital buffer). The larger the capital buffer, the more funding costs would increase and the more our estimates would understate them. Our estimates also reflect the median amounts of capital required by bank holding companies and depository institutions we estimated would have insufficient capital to meet Basel III requirements; they may not reflect the specific circumstances of an individual bank holding company or depository institution that may need to raise capital and may overstate or understate the change in its funding costs.
Furthermore, our estimates reflect the median return on equity and interest rate on debt that prevailed in the first quarter of 2014, as well as our assumption of a corporate income tax rate of 35 percent. However, equity returns, debt interest rates, and tax rates may change, altering the relative prices of debt and equity and thus altering the change in funding costs associated with substituting equity for debt. Finally, our estimates assume that the return on equity will not change when a bank holding company or depository institution increases its capital ratio. However, increasing reliance on equity funding reduces the risks to investors, all else being equal. If a bank holding company or depository institution increased its ratio of capital to assets, then the return on its equity could fall as investors demanded less of a risk premium. Although the U.S. Basel III capital requirements may have little impact on the capital level and structure of most banking organizations, their full impact remains uncertain. The capital regulations will be phased in over multiple years, and Basel III is but one of a multitude of regulatory reforms affecting banking organizations. The higher regulatory capital ratios may increase the amount of capital banks hold (if they have to hold more capital than they otherwise would have held based on their assessment of economic risk), which could increase their funding costs. The increase in funding costs may result if holding higher capital meant that bank investors were not willing to accept a lower return on equity. In addition, banking organizations will incur compliance costs, such as for additional staff training and expenses related to new systems or modification of existing systems for calculating regulatory capital ratios and for recordkeeping and reporting.
For example, in the interim final rule, FDIC estimated that each bank with $175 million or less in total assets will incur $43,000 in direct compliance costs, which it concluded would represent a significant burden for about 37 percent of these banks. Among the advanced approaches banks, only one included any compliance cost information in its annual report, indicating it devoted thousands of staff hours to comply with Basel requirements. We discuss below our review of empirical studies and our quantitative analysis of the effects of Basel III requirements on the cost and availability of credit. Some banks and others generally maintain that equity is more expensive than debt; thus, higher capital requirements will raise their funding costs. If this were the case, banks might charge higher prices for loans, depending on market competition (which could result in less borrowing); reduce certain lending; or exit certain lines of business if the return on capital was insufficient. In contrast, two non-empirical studies maintain that higher capital requirements will not increase bank funding costs, because the increase in capital will make banks safer and cause investors to accept a lower return. We reviewed 11 studies—published from 2011 through 2014—that empirically examine the effects of higher capital requirements on banks (or lenders), including on the cost of capital and the cost and availability of credit. To identify relevant empirical studies, we conducted searches of two databases (ProQuest and EconLit) and identified and selected economic studies from peer-reviewed journals and working papers from governmental institutions that were published from 2011 through 2014. We used search terms for selecting the studies, such as interest rate spread, credit availability, cost of capital, and partial equilibrium.
The results of the studies generally indicate that higher capital ratios—both tier 1 capital and common equity tier 1 capital ratios—in the United States will result in a modest increase in the cost of capital for banks and loan rates for borrowers and a modest decrease in the quantity of loans for banks. However, the studies also noted that capital requirements are one of several policies that can affect the cost and availability of credit. Some of the studies analyze the effect of capital requirements in other countries, which helps put the estimated effect of Basel III in the United States into a broader perspective. Bank funding cost. Two studies examining the effect of higher capital requirements on the capital costs of banks generally found that raising capital requirements will increase the capital costs of banks. One of the studies estimated that increasing the common equity tier 1 capital ratio by 1.3 percentage points would increase the cost of capital for large banks in the United States by 0.13 percentage points. The study covered eight countries, and the estimates ranged from 0.00 percentage points (Canada) to 0.26 percentage points (Japan). The other study estimated that a 10 percentage point increase in the tier 1 ratio would increase the cost of capital in the United States by between 0.60 and 0.90 percentage points. These results generally are consistent with our analysis, in which we estimated that the increase in funding cost associated with a 1 percentage point increase in the ratio of capital to assets was from about 0.07 to 0.09 percentage points as of the first quarter of 2014 (see app. II). Cost of borrowing. Nine studies examining the effect of higher capital requirements on loan rates had results ranging from no effect to an increase in loan rates. The studies generated estimates of the effect of higher capital requirements on borrower costs, with some covering multiple countries in North and South America, Europe, or Asia.
Two studies covering the United States estimated that a 1 percentage point increase in capital requirements would increase bank lending rates by 0.12 and 0.21 percentage points, respectively. The other two studies that covered the United States estimated that a 1.3 and a 2.0 percentage point increase in capital requirements would increase bank lending rates by 0.17 and 0.51 percentage points, respectively. In comparison, the studies covering other countries estimated that a 1 percentage point increase in capital requirements would increase bank lending rates by around 0.04 to 0.25 percentage points (and a 1.3 percentage point increase would increase bank lending rates by 0.0 to 0.34 percentage points). Quantity of loans. Four of the studies examining the effect of higher regulatory capital requirements on the availability of credit found that higher requirements would reduce the quantity of loans supplied, but the estimated effect varied across the studies. As bank lending rates increase, the studies generally expect the demand for loans to fall, thereby reducing the quantity of loans made by banks. Two of the studies covered the United States and estimated that a 1.3 percentage point increase in the common equity tier 1 capital ratio or a 2.0 percentage point increase in the tier 1 capital ratio will decrease the quantity of loans by 2.97 percent and 8.71 percent, respectively. In comparison, one of the studies also covered countries in Asia, Europe, and North America and estimated that a 1.3 percentage point increase in regulatory capital requirements will decrease the quantity of loans in these countries, but the estimates vary across countries—ranging from a 0.16 percent decline to a 32.61 percent decline. Like the studies we reviewed, our analysis suggests that raising capital to cover the capital shortfall would have a modest effect on the cost and availability of credit in both the short and the long run.
As discussed previously, the total amount of capital that bank holding companies and depository institutions would need to raise to cover the capital shortfall and meet the new minimum capital ratios would be small relative to total assets, likely less than 1 percent. In addition, most bank holding companies and depository institutions do not appear to need to raise capital to meet minimum requirements. For those that do, the amount of capital they need to raise appears to be small relative to total assets in some cases but could be large in others. To assess the short-run impact on the cost and availability of credit for bank holding companies or depository institutions raising capital to meet minimum requirements, we used (1) estimates of changes in loan volumes and loan spreads associated with changes in capital from our prior work and (2) our estimates of the capital shortfall described above. To assess the long-run impact, we used an existing loan pricing model. The short-run impact of meeting the new capital requirements on the cost and availability of credit likely would be small. In prior work, we estimated that a 1 percentage point increase in the ratio of capital to assets is associated with a short-run increase in loan spreads of about 0.16 percentage points and a short-run decline in loan volume growth of about 1.2 percentage points. Our analysis of the capital shortfall suggests that bank holding companies would need to increase total capital by about 0.03 percent of total assets to meet the new minimum total capital ratio plus the capital conservation buffer. If bank holding companies raised the capital to cover the shortfall in a single quarter, these estimates suggest that covering the capital shortfall would lead to an increase in loan spreads of less than 0.01 percentage points and a decline in loan volume growth of less than 1 percentage point.
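The short-run arithmetic above can be sketched numerically. The per-percentage-point sensitivities (0.16 for loan spreads, 1.2 for loan volume growth) and the roughly 0.03 percent shortfall come from the text; the linear scaling below is only an illustrative assumption, not the estimation methodology itself.

```python
# Illustrative linear scaling of the short-run estimates cited in the text.
# Sensitivities per 1 percentage point increase in the capital-to-assets ratio:
SPREAD_SENSITIVITY = 0.16   # percentage point increase in loan spreads
VOLUME_SENSITIVITY = 1.2    # percentage point decline in loan volume growth

def short_run_impact(capital_increase_pp):
    """Scale the per-point sensitivities to a given capital increase
    (in percentage points of total assets), assuming linearity."""
    spread_increase = SPREAD_SENSITIVITY * capital_increase_pp
    volume_decline = VOLUME_SENSITIVITY * capital_increase_pp
    return spread_increase, volume_decline

# Estimated shortfall: about 0.03 percent of total assets, raised in one quarter.
spread, volume = short_run_impact(0.03)
print(f"loan spread increase: {spread:.4f} pp")       # under 0.01 pp
print(f"loan volume growth decline: {volume:.4f} pp")  # under 1 pp
```

Under this scaling, the shortfall implies a spread increase of about 0.005 percentage points, consistent with the "less than 0.01 percentage points" figure in the text.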
Our estimates suggest that the long-run impact of meeting the new capital requirements on the cost and availability of credit also likely would be small. To assess the potential impact on loan rates, we used an existing loan pricing model that captures key determinants of loan rates in the long run, including funding costs, credit spreads, and administrative costs. As discussed above, funding costs for bank holding companies and depository institutions that increase equity capital to meet Basel III minimum capital ratios could increase. Bank holding companies and depository institutions can respond to changes in their funding costs in several ways, including raising loan rates, shifting lending activity to lower-risk borrowers, and increasing efficiency. If bank holding companies and depository institutions that have to raise capital covered their increased funding costs solely by increasing their lending rates, our estimates of the funding cost changes are indicative of the amounts by which lending rates at these institutions would increase (generally less than 0.3 percentage points). However, some factors may cause lending rates to increase by less than this amount. The extent to which bank holding companies and depository institutions can raise lending rates is limited by the amount of competition they face from other lenders, including lenders that already hold sufficient capital, as well as other factors. Thus, bank holding companies and depository institutions that need to raise capital may cover their increased funding costs by other means in addition to, or instead of, raising lending rates. For example, they could increase lending to lower-risk borrowers and reduce lending to higher-risk borrowers in order to reduce credit spreads, or they could reduce salaries or employ fewer people to lower administrative costs. In this case, lending rates would increase by less than the amount that funding costs increase.
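As a rough sketch of the long-run logic, a simple loan pricing identity treats the lending rate as the sum of funding costs, credit spread, and administrative costs; full pass-through of a funding cost increase then bounds the loan rate increase from above. The decomposition and all numbers here are illustrative assumptions, not the model used in the analysis.

```python
def loan_rate(funding_cost, credit_spread, admin_cost):
    """Long-run loan rate as the sum of its key determinants (all in percent)."""
    return funding_cost + credit_spread + admin_cost

# Hypothetical baseline inputs (percent).
base = loan_rate(funding_cost=2.0, credit_spread=1.5, admin_cost=1.0)

# Upper bound: the bank passes the entire 0.3 pp funding cost increase
# into its lending rate.
full_pass_through = loan_rate(2.3, 1.5, 1.0) - base

# Alternative: the bank offsets part of the increase by shifting toward
# lower-risk borrowers (smaller credit spread) and trimming administrative
# costs, so the lending rate rises by less than funding costs did.
partial_pass_through = loan_rate(2.3, 1.4, 0.95) - base
```

The second scenario mirrors the report's point that competition and cost adjustments can hold the lending rate increase below the funding cost increase.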
Our estimates of the impact of meeting the new capital requirements on the cost and availability of credit are subject to limitations and should be interpreted with caution. As discussed, our estimates of the capital shortfall and of the increase in funding costs associated with raising capital to eliminate the shortfall are subject to important limitations that could lead us to overstate or understate them. Because the change in lending rates is related to both the capital shortfall and the associated increase in funding costs, it too may be overstated or understated. For example, if bank holding companies or depository institutions maintain a capital buffer in excess of the minimum amount of capital required, then the increase in lending rates likely will be greater than our estimates. In addition, the methodology we used to estimate the short-run response of loan spreads and loan volume growth to changes in the ratio of capital to assets is heavily influenced by past macroeconomic and credit market conditions, so the estimates may not apply to future periods in which those conditions are significantly different. Furthermore, the model we used to estimate the long-run response of loan rates to changes in the ratio of capital to assets may not reflect all of the determinants of loan rates. Although their views are not indicative of the banking industry as a whole, the bank officials we interviewed generally expected that they would be able to meet the new capital requirements, that their compliance costs would increase, and that the effects of the requirements on credit would not be large. Officials from all eight community banks we interviewed said they did not anticipate any difficulties in meeting the U.S. Basel III capital requirements but expected to incur additional compliance costs.
Because we interviewed a relatively small number of community banks compared with the overall population of banks with assets of less than $10 billion, we cannot generalize their responses. All of the officials said their banks were well capitalized, with capital in excess of the Basel III capital ratio requirements, and that they did not anticipate having to raise additional capital or take other actions to meet the heightened capital requirements. At the same time, the community bank officials generally told us that they have been incurring additional compliance costs because of the new requirements, although none could quantify the costs. For example, five officials said that they will need to update their information technology systems or purchase software to comply with enhanced reporting and recordkeeping requirements. Two told us that they consulted (or expect to consult) with accountants, attorneys, or both to understand the Basel III capital requirements and the implications for their banks. Additionally, six told us that their staffs have been devoting more time to complying with the new capital requirements, but none said they had hired or planned to hire additional staff. Finally, four officials told us that several revisions the federal banking regulators made to the regulations, particularly those involving the risk weights for residential mortgages, accumulated other comprehensive income, and trust preferred securities, helped to minimize their regulatory burden. Similarly, officials from an industry association told us that community banks fared well under the capital regulations, which addressed most of the concerns the association had raised about the proposed regulations. Consistent with the findings from our review of the literature and our analysis of the capital shortfall, officials from some community banks told us that they generally expected the U.S. Basel III capital requirements to have a limited effect on the cost and availability of credit.
Specifically, four said they did not expect the new requirements to hamper their ability to lend to their customers. However, several said that the higher capital requirements for high-volatility commercial real estate might reduce their lending in this area. Three officials told us that they expect tighter underwriting standards to make it more difficult and expensive for marginal customers to borrow, and five expected loan prices to increase. Officials from two banks mentioned that competition from other institutions, such as credit unions, could affect loan pricing. Officials from the 10 U.S. and foreign G-SIBs (large, internationally active banks) that we interviewed told us that U.S. Basel III's minimum capital requirements generally tended not to act as binding capital constraints on them. Instead, three of the banks told us that U.S. G-SIBs are subject to stress testing under the Comprehensive Capital Analysis and Review (CCAR) by the Federal Reserve, and the capital requirements under CCAR typically are higher than the minimum Basel III requirements. To be able to pay dividends to shareholders, the G-SIBs must meet the capital requirements set under CCAR. In addition, officials from four U.S. G-SIBs said that the supplementary leverage ratio would be more onerous or costly to comply with than the risk-based capital requirements. Officials from all the U.S. G-SIBs we interviewed said that they have expended significant resources in terms of staff and money to implement and comply with the U.S. Basel III capital requirements. For example, they said that they have had to hire additional staff and develop new technology and infrastructure to comply with the regulations. Three told us that under the Collins Amendment they have had to calculate a total of six capital ratios (three using the advanced approaches and three using the standardized approach), which is significantly burdensome. But none of the U.S.
G-SIBs could provide us with a precise estimate of their compliance costs, in part because Basel III implementation has been done in conjunction with other regulatory reforms, such as the Dodd-Frank Act, and in part because staff from many departments were involved in implementation. However, several officials told us the costs have run into the millions of dollars and included significant staff hours. According to officials from the 10 G-SIBs, the Basel III capital requirements are expected to have a mixed effect on their lending and lines of business. Four of the G-SIBs generally told us they expected the higher capital requirements to affect lending, namely by reducing the availability of credit or increasing costs for borrowers (or both). More specifically, two G-SIBs said the capital requirements will have some effect on the mortgage market, but one also noted that other factors may have a greater effect because the mortgage market is highly competitive. Two officials told us that the U.S. Basel III regulations may cause mortgage servicing assets to move from the banking sector to the nonbanking sector. In particular, one said that 25 percent of all U.S. mortgage servicing rights assets have moved outside of the banking sector because of the new regulatory capital requirements. In addition, the G-SIBs said that they generally expected the new leverage and liquidity requirements, along with the capital requirements, to reduce certain of their business activities, particularly their derivatives and short-term securities financing transactions. Officials from a number of the community banks and G-SIBs told us that they expect the U.S. Basel III capital regulation to improve the resiliency of the U.S. banking system.
Specifically, officials from two community banks said that they expected the capital regulations to improve the safety and soundness of the banking system, but three community banks questioned the appropriateness of the regulations for small banks. Officials from nine of the G-SIBs said the regulations generally would make the U.S. banking system safer, because higher capital and liquidity reduce risks to the banking system. At the same time, some said that the capital regulation could create other vulnerabilities that make the financial system less stable, for example, by shifting risk outside of regulated banks or by reducing the willingness of banks to hold risky assets during times of market stress. Differences in regulatory capital requirements across jurisdictions could affect competition between internationally active banks. For example, higher capital costs driven by higher regulatory capital requirements could result in a competitive disadvantage for banks that compete for similar customers with banks subject to lower capital requirements. Like the previous Basel accords, Basel III serves, in part, to limit competitive advantages or disadvantages due to such differences. For example, one of the two fundamental objectives of the initial Basel accord was that standards should be fair and applied with a high degree of consistency to banks in different countries, with a view to diminishing an existing source of competitive inequality among international banks. As specified in its charter, the Basel Committee's activities include monitoring the implementation of Basel standards in member countries to help ensure their timely, consistent, and effective implementation and to contribute to a level playing field among internationally active banks. At the same time, there are limitations to full harmonization.
As was the case with the implementation of Basel II, some market participants and observers have raised concerns about regulatory differences in the implementation of Basel III between jurisdictions and their possible competitive effects. According to the Basel Committee's October 2014 progress report, 25 of its 28 members reported having regulations in effect to implement Basel III's higher capital requirements (see table 7). However, three Basel Committee members reported they had not yet implemented all the Basel III capital requirements, namely the conservation and countercyclical buffers. In September 2014, U.S. regulators finalized their liquidity coverage ratio regulation, but it will not take effect until January 2015. In addition to the risk-based capital standards, the Basel Committee monitors implementation of the additional loss absorbency requirements for G-SIBs and domestic systemically important banks, the liquidity coverage ratio, and the leverage ratio. Basel Committee members generally reported that they have not yet adopted regulations to implement these requirements. The Basel Committee's assessments of Basel III implementation found that the jurisdictions reviewed to date have adopted rules generally consistent with the Basel III standards but identified some inconsistencies in regulator-approved bank models across countries. According to the Basel Committee, public confidence in prudential ratios, resiliency of banks, and a level regulatory playing field for internationally active banks cannot be achieved without consistency in the adoption and implementation of the Basel standards. Recognizing the importance of Basel III's implementation, the Basel Committee established its Regulatory Consistency Assessment Program (RCAP) in 2012. RCAP assessments are designed as peer reviews undertaken by technical experts from member jurisdictions and are done on a jurisdictional and thematic basis.
The Basel Committee has completed seven jurisdictional assessments and generally found the jurisdictions compliant; additionally, the committee reviewed European Union (EU) and U.S. draft regulations but did not assign an overall compliance grade because the rules still were in draft form at the time of the review. Jurisdictional assessments review the extent to which national Basel III regulations in each member jurisdiction align with the Basel III minimum requirements. They examine the consistency and completeness of the adopted standards, including the prudential significance of any deviations in the standards. According to the Basel Committee, the assessments help highlight the current and potential impact of any gaps in the regulatory regime and help member jurisdictions undertake reforms needed to strengthen their regulatory regimes. Each member jurisdiction has agreed to undergo an RCAP assessment, and the Basel Committee has given priority to jurisdictions in which G-SIBs are domiciled. To date, the assessments have covered the Basel III risk-based capital standards but will be expanded to cover the Basel standards relating to liquidity, leverage, and systemically important banks. A domestic regulatory framework is considered compliant with Basel III if all minimum provisions of the relevant Basel standard have been satisfied and no material differences are identified that would give rise to prudential concerns or provide a competitive advantage to internationally active banks. A domestic regulatory framework is considered largely compliant with Basel III if only minor provisions of the relevant Basel standards have not been satisfied and if only differences that have a limited impact on financial stability or the international level playing field have been identified.
A domestic regulatory framework is considered materially noncompliant with Basel III if key provisions of the relevant Basel standards have not been satisfied or if differences that could materially affect financial stability or the international level playing field have been identified. A domestic regulatory framework is considered noncompliant with Basel III if the relevant Basel standards have not been adopted or if differences that could severely affect financial stability or the international level playing field have been identified. Some jurisdictions adopted certain regulatory capital requirements that are more stringent than the minimums set in Basel III; for example, China and Switzerland adopted standards above the Basel minimum standards. According to the Basel Committee, the RCAPs identified no areas to be consistently above the Basel minimums, suggesting that the Basel capital standards generally are not calibrated too low in the collective judgment of the implementing authorities. The Basel Committee also conducted jurisdictional RCAP assessments of proposed Basel III rules for the European Union and the United States but did not assign them an overall assessment grade because of the draft nature of the rules. The RCAP assessments found that both proposed approaches generally complied with the vast majority of Basel III's key components. However, the assessments also found certain components of the proposals to be materially noncompliant. Specifically, the EU RCAP noted that the proposed approach fell substantially short of the Basel framework in the definition of capital and the internal ratings-based approach for credit risk. The U.S. RCAP noted that the grade was mainly due to the proposed implementation of an alternative approach to replace the Basel framework's use of external credit ratings.
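The four-grade RCAP scale described above can be summarized as a simple mapping from the severity of deviations to a grade. The function below is only an illustrative paraphrase of the published definitions, with hypothetical input labels; it is not an official scoring method.

```python
def rcap_grade(provisions_unmet, deviation_impact):
    """Illustrative paraphrase of the RCAP grading scale.

    provisions_unmet: 'none', 'minor', 'key', or 'all' (how many provisions
        of the relevant Basel standard remain unsatisfied)
    deviation_impact: 'none', 'limited', 'material', or 'severe' (effect of
        identified differences on financial stability or the level playing field)
    """
    # Worst condition determines the grade, from most to least severe.
    if provisions_unmet == "all" or deviation_impact == "severe":
        return "noncompliant"
    if provisions_unmet == "key" or deviation_impact == "material":
        return "materially noncompliant"
    if provisions_unmet == "minor" or deviation_impact == "limited":
        return "largely compliant"
    return "compliant"
```

For instance, a framework with only minor unsatisfied provisions and limited-impact differences maps to "largely compliant", matching the definition in the text.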
The European Commission expressed concerns about RCAP's preliminary findings, including that the assessment did not take into account that the EU is a single jurisdiction that applies Basel III to all its banks and investment firms, which necessitates that national regulators employ a certain level of proportionality in applying the rules. U.S. agencies noted that the Dodd-Frank Act prohibits the use of credit ratings for setting capital charges for securitization exposures, resulting in a deviation from the Basel III standards. They also noted that their evidence suggests that the deviation's impact will not be material and therefore believe that their approach is largely compliant rather than materially noncompliant. The United States and EU are currently undergoing RCAP assessments based on their final Basel III regulations. In a September 2013 speech, the Basel Committee's Secretary General acknowledged the limitations of RCAP assessments, recognizing that the committee has no enforcement power beyond the power of peer pressure and public disclosure. However, the Secretary General noted that RCAPs are a strong demonstration of the Basel Committee's commitment to international consistency and, where this cannot be perfectly achieved, to greater transparency. The Secretary General further noted that if an individual jurisdiction departed from Basel III standards, the nature and materiality of that divergence ought to be well understood. If a bank did not operate under regulations consistent with Basel standards, any difference should be much more transparent when it reported a "Basel ratio." According to the Secretary General, in that way markets can have something of a policing role, offsetting regulatory differences in their assessment of banks' financial ratios.
Basel Committee on Banking Supervision, Regulatory Consistency Assessment Programme (RCAP): Analysis of Risk-weighted Assets for Credit Risk in the Banking Book (Basel, Switzerland: July 2013); Regulatory Consistency Assessment Programme (RCAP): Second Report on Risk-weighted Assets for Market Risk in the Trading Book (Basel, Switzerland: December 2013); and Regulatory Consistency Assessment Programme (RCAP): Analysis of Risk-weighted Assets for Market Risk (Basel, Switzerland: January 2013; revised February 2013). The trading book refers to securities that a bank would not hold to maturity and for which it accounts at current market value. The banking book refers to securities a bank plans to hold to maturity at their original book value. If the bank decides to sell the securities, it then moves them to the trading book, where they are given fair market value accounting treatment. Average risk-weighted assets include, for example, the three wholesale asset classes covered by the hypothetical portfolio exercise (sovereign, bank, and corporate), which account on average for about 40 percent of participating banks' total credit risk-weighted assets. The thematic RCAP assessments examine how implementation of the Basel standards can lead to variations in the amount of capital banks have to hold. In that regard, the objective of the assessments generally has been to obtain a preliminary estimate of the potential for variation in risk-weighted assets across banks and to highlight aspects of the Basel standards that contribute to this variation. More specifically, the RCAP examining risk-weighted assets in the banking book found that most of the variation in the calculation of risk-weighted assets could be explained by broad differences in the risk composition of banks' assets, reflecting differences in risk preferences as intended under the Basel III framework. But the RCAP also found that differences in bank and supervisory practices drove a material amount of the variation.
Similarly, the RCAPs examining risk-weighted assets in the trading book found considerable variation in the calculation of risk-weighted assets for market risk across banks. Supervisory decisions applied to all banks in a jurisdiction or to individual banks were deemed to be a sizeable driver of the variation, but the variation also was due to the choice of models banks used to calculate regulatory capital. According to the Basel Committee Chairman, national supervisors and banks have been using the assessments to take action where needed. Moreover, in February 2014, the Financial Stability Board reported that by the November 2014 G20 summit, the Basel Committee would prepare a plan to address excessive variability in risk-weighted asset calculations that can be implemented to improve consistency and comparability in bank capital ratios. U.S. Basel III capital regulations generally apply to both U.S. banking organizations and their foreign banking counterparts operating in the United States, helping to provide a level regulatory playing field in the U.S. market. In general, U.S. regulation of foreign banks is guided by the principle of national treatment and equality of competitive opportunity, which generally means that foreign banking entities operating in the United States should be treated no less favorably than similarly situated U.S. banking organizations and should generally be subject to the same restrictions and obligations in the United States as those that apply to the domestic operations of U.S. banking organizations. Foreign banking organizations (such as foreign parent banks) have structured their U.S. banking operations in a number of ways. For example, some conduct U.S. banking activities directly through branches or agencies, while others own U.S. banks directly.
Most foreign banking organizations operate through branches and agencies because, as extensions of the foreign banking organizations, they do not have to be separately capitalized and can conduct a wide range of banking operations. Federal Reserve officials told us that they expect the U.S. intermediate holding company provisions discussed below to reduce the variety of operations of foreign banking organizations in the United States. The Federal Reserve, OCC, and FDIC supervise and regulate the U.S. banking operations of foreign banking entities. The Federal Reserve is responsible for the overall supervision and regulation of foreign banking organizations in the United States. Branches and agencies are licensed and subject to supervision by OCC or state banking agencies. Subsidiary banks of foreign banking organizations are chartered by OCC or state banking agencies and supervised by OCC, FDIC, the Federal Reserve, or state banking agencies. Although subsidiaries that are required to abide by U.S. Basel III capital regulations may be owned or controlled by a foreign banking organization, they are separate legal entities. As shown in figure 1, the Basel III capital regulations of FDIC, the Federal Reserve, and OCC generally apply to U.S. and foreign banking entities, except for foreign branches and agencies. However, some regulatory differences could arise, for example, if the subsidiaries of foreign banking entities that are required to abide by U.S. Basel III capital regulations are independent or will be required to form a U.S. intermediate holding company.

Foreign banking organizations. The Federal Reserve requires foreign banking organizations with U.S. banking operations and total consolidated assets of $50 billion or more to be subject to the international Basel III capital requirements as established in their home country.
These banks must certify to the Federal Reserve that they meet capital adequacy standards on a consolidated basis established by their home-country supervisor that are consistent with Basel III standards. If the home-country supervisor has not established capital adequacy standards consistent with Basel III, the foreign banking organization must demonstrate to the Federal Reserve that it would meet or exceed capital adequacy standards on a consolidated basis that are consistent with Basel III were it subject to that standard. If a foreign banking organization fails to satisfy the Basel III requirements, the Federal Reserve may impose requirements, conditions, or restrictions, including risk-based or leverage capital requirements, on the activities or business operations of the U.S. operations of the foreign banking organization. None of FDIC's Basel III capital rules are applicable to foreign banking organizations.

Branches and agencies. Branches and agencies of foreign banking organizations are not subject to U.S. Basel III capital requirements because foreign banking organizations may operate through branches and agencies in the United States on the basis of their home-country capital standards. According to OCC, because federal branches and agencies have no segregated capital base and are only part of a foreign banking organization's earnings stream, measurement of capital at risk is not meaningful for them.

U.S. intermediate holding companies. In March 2014, the Federal Reserve finalized a rule to require larger foreign banking organizations based overseas and having material U.S. operations to establish a U.S. intermediate holding company (U.S. IHC) for consolidated supervision of their U.S. subsidiaries. According to the Federal Reserve, the requirement facilitates a level playing field between foreign and U.S. banking organizations operating in the United States, in furtherance of national treatment and competitive equity. Under the rule, U.S.
IHCs would be subject to Basel III capital requirements substantially similar to those for U.S. bank holding companies. However, U.S. IHCs that are advanced approaches banking organizations may choose to calculate their risk-weighted assets according to either the standardized or the advanced approaches risk-based capital rules. Conversely, U.S. bank holding companies that meet the asset threshold are automatically treated as advanced approaches banking organizations and must abide by the risk-based capital rule calculation requirements. FDIC officials told us that the agency will not regulate U.S. IHCs under FDIC's Basel III capital rules, because U.S. IHCs are subject to the Federal Reserve's Basel III capital rules.

Subsidiaries. Subsidiaries regulated under U.S. Basel III capital regulations, regardless of whether they are subsidiaries of foreign banking organizations or domestic subsidiaries, must abide by substantially similar rules. The Federal Reserve requires institutions subject to its supervision, including any Federal Reserve-regulated institution such as state member banks and top-tier bank holding companies, to comply with the U.S. Basel III capital regulations. Exceptions include small bank holding companies and foreign-owned U.S. bank holding companies relying on a capital exemption (until it is eliminated). OCC requires subsidiary institutions that fall within the defined entities subject to OCC's Basel III capital rules to comply with those rules, regardless of whether they are a subsidiary of a foreign banking entity. Similarly, FDIC requires subsidiary institutions that fall within the defined entities subject to its Basel III capital rules to comply with those rules, regardless of whether they are a subsidiary of a foreign banking entity. This includes subsidiaries regulated under U.S. Basel III capital regulations whether or not those subsidiaries will be located within a U.S. intermediate holding company.
Although the Basel capital standards serve to harmonize capital regulations internationally, there are limitations to full harmonization. The Basel capital standards have no legal force; rather, the Basel Committee members developed and agreed to the standards, with the expectation that each member will implement them. According to the Basel Committee, it encourages convergence towards common standards and monitors their implementation, but does not attempt detailed harmonization of members’ supervisory approaches. The standards are minimum requirements, and members may adopt more stringent standards. Moreover, as jurisdictions amend their laws or regulations (or both) to implement the Basel III standards, they will need to fit the standards within their existing legal framework, regulatory system, or industry structure. As the Basel capital standards periodically have been revised and implemented, regulators, banks, and others have raised concerns about regulatory differences between jurisdictions and their possible competitive effects. For example, in 2007 and 2008, we reported on such concerns arising from the U.S. implementation of Basel II. Importantly, internationally active banks can be subject not only to their home-country Basel III regulations but also their host-country Basel III regulations, such as through their foreign subsidiaries. As a result, such banks may need to create systems that take into account different regulatory regimes and approaches. However, according to four G-SIBs we spoke with, all transactions completed by an internationally active bank are consolidated ultimately at the parent company for capital purposes. In turn, the parent company must calculate its capital ratios based on its home-country’s capital regulations. If Basel III regulations are more stringent in the United States than in other countries, internationally active U.S. 
banks could be required to hold higher levels of regulatory capital than their foreign counterparts, and banks and some law firms have noted that this could potentially put internationally active U.S. banks at a competitive disadvantage. However, regulators and some academics have noted that enhanced capital requirements could decrease systemic risk in the banking system or increase investor confidence, potentially providing banks holding relatively more capital with a competitive advantage. Although Basel III implementation is ongoing and will not be completed for years, some banks, regulators, and law firms (in publications we reviewed) have identified a number of implementation differences between jurisdictions that may create competitive disparities. Many of these differences have resulted from jurisdictions imposing more stringent requirements than the Basel III minimum standards, which could put their banking organizations at a competitive disadvantage. Basel III's implementation is a multistep process that includes the adoption of the standards by jurisdictions through changes in law or regulations, compliance with the law or regulations by market participants, and the oversight and supervision of the laws or regulations by national regulators. To date, differences initially identified by market participants and observers have focused on the different ways that jurisdictions have adopted the standards in their regulations, including the following:

Additional capital buffers: The Swiss Financial Market Supervisory Authority designed its capital regulations to impose a variable progressive capital buffer of up to 6 percent of risk-weighted assets on two Swiss G-SIBs. The overall higher capital requirements reflect the regulator's prudential philosophy that Switzerland's capital adequacy regulations should go beyond the international minimum standards.
As UBS reported in its 2013 annual report, this requirement could harm Swiss banks when they compete against peer financial institutions subject to more lenient regulation. Similarly, the Bank of England’s Prudential Regulation Authority plans to implement a firm-specific buffer that could require certain banks to hold regulatory capital above the Basel III framework’s minimum standards. Credit valuation adjustment: Basel III included a new capital charge (the credit valuation adjustment) under which a bank must hold additional capital when entering into an over-the-counter derivatives transaction not cleared through a central counterparty. According to market participants we interviewed and law firm documents, the EU has diverged from Basel III (and the U.S. adoption of Basel III) by exempting from the capital charge transactions between EU-based banks and a nonfinancial corporate, a sovereign, or, for a limited period, a pension fund. According to officials from three G-SIBs we interviewed, the exemption enables European banks to price derivative transactions to such counterparties more favorably than their non-EU competitors. Officials from one of the banks told us that the price difference could be a key factor in determining whether customers transacted with U.S. or European banks, but price was not necessarily the only factor that customers considered. Enhanced supplementary leverage ratio: U.S. banking regulators established a minimum supplementary leverage ratio of 3 percent for advanced approaches banks, consistent with Basel III. However, the regulators established an enhanced supplementary leverage ratio for top-tier bank holding companies with more than $700 billion in total consolidated assets or more than $10 trillion in assets under custody and in their subsidiary depository institutions. This enhanced ratio raised the standards above the Basel III minimum standards.
Under the final rule, such subsidiary insured depository institutions must maintain an enhanced supplementary leverage ratio of at least 6 percent to be well capitalized under the prompt corrective action framework, and the bank holding companies must maintain a buffer of more than 2 percent above the minimum supplementary leverage ratio requirement of 3 percent to avoid restrictions on capital distributions and discretionary bonus payments. According to an industry association and bank we interviewed, the leverage requirement will disadvantage such U.S. banking organizations by requiring them to maintain higher capital than their competitors in other jurisdictions. However, U.S. banking regulators view a strong regulatory capital base as a competitive strength for banking organizations, rather than a competitive weakness. Collins Amendment: In the U.S. Basel III capital regulations, federal banking regulators implemented section 171 of the Dodd-Frank Act (the Collins Amendment), which requires advanced approaches banking organizations to calculate their risk-based capital ratios under both the advanced approaches and the standardized approach (minimum risk-based capital requirements), among other capital requirements. The banks then must use the lower of each capital ratio to determine compliance with minimum capital requirements. According to a law firm’s analysis, the rule eliminates capital relief that large U.S. banks might otherwise obtain using their internal models under the advanced approaches and may provide certain internationally active foreign banks with a competitive advantage. Additionally, officials from three U.S. G-SIBs generally told us that the Collins Amendment could require them to hold more regulatory capital than their foreign competitors and put them at a competitive disadvantage.
Liquidity coverage ratio: Basel III includes the liquidity coverage ratio as an internationally harmonized quantitative liquidity standard, with the goal of promoting the short-term resilience of the liquidity risk profile of internationally active banking organizations. Although at the time we spoke with the G-SIBs, U.S. banking regulators had not finalized their proposed rule implementing Basel III’s liquidity coverage ratio, officials from four G-SIBs said the proposed rule included requirements more stringent than the Basel III liquidity standards, including a narrower range of assets that qualified as high-quality liquid assets and a faster assumed rate of outflows for wholesale funding. They said the rule, as proposed, would require them to hold more liquid assets than their foreign competitors, which would be more costly for them. The U.S. banking regulators noted in the proposed rule and final rule that there were modifications to Basel III’s liquidity coverage ratio to reflect characteristics and risks of specific aspects of the U.S. market. For instance, the proposed and final rules both recognized the strong liquidity positions U.S. banking organizations already achieved and discussed the need to maintain that improved liquidity through the use of shorter transition periods than mandated in Basel III. The proposed and final rules also differed from Basel III in the method for calculating total net cash outflows; regulators described the difference as necessary because the change would allow companies to better capture their own liquidity risk. In addition to differences in Basel III regulations between jurisdictions, officials from two G-SIBs noted potential inconsistencies in the oversight and supervision of Basel III regulations. For example, such officials said that U.S.
regulators have been more rigorous in their review and approval of internal models used by advanced approaches banks for calculating their risk weights under Basel II, as demonstrated by the long time it has taken these banks to pass their parallel run under the Basel II regulations. In contrast, banks in the EU, Canada, and Japan had begun implementing Basel II in 2008. At that time, the EU implementation of Basel II ahead of the United States raised competitive concerns because of the potential for EU banks to be required to hold less regulatory capital to support the same level of assets. Basel Committee and other studies indicated that U.S. banks tended to be subject to higher risk weights than EU banks, in part due to their use of the Basel I and Basel II frameworks, respectively. At the same time, comparing risk weights across banks is difficult, in part because of differences in business mix, accounting rules, off-balance sheet assets, and approaches for calculating risk-weighted assets. While differences exist in the implementation of Basel III between jurisdictions, the extent to which these differences collectively will affect competition among internationally active banks is unclear. As shown in table 7 above, Basel Committee member jurisdictions have not finished adopting regulations to implement Basel III capital, leverage, and liquidity standards. Moreover, the identified implementation differences cover multiple jurisdictions and apply to different aspects of Basel III, confounding their potential effect on competition. According to officials from two G-SIBs, it is very difficult or impossible to measure quantitatively the effect of such differences on competition. In addition to regulatory capital requirements, other factors can affect the competitive position of internationally active banks, such as differences in accounting treatment, cost of capital, tax rules, and other regulations from one country to another. 
For example, the spread or fee that banks charge for a financial product is a function not only of their regulatory capital requirements but also their cost of capital, which is driven by a variety of factors, and tax rates. Additionally, officials from three G-SIBs told us the Dodd-Frank Act and other reforms could affect the extent to which Basel III can help provide a level playing field. For example, they said that their minimum regulatory capital ratios effectively are set under the annual CCAR, which has resulted in regulatory capital ratios higher than the Basel III minimum ratios. They also said that the EU has been adopting similar stress tests for EU banks. We provided a draft of this report to FDIC, Federal Reserve, and OCC for their review and comment. FDIC, the Federal Reserve, and OCC provided technical comments that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Chairman of FDIC, Chairman of the Federal Reserve, and Comptroller of OCC. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This report examines how (1) U.S. Basel III capital may affect U.S. banking organizations, including smaller banking organizations, and (2) implementation of Basel III’s capital and other standards by different jurisdictions may affect the ability of U.S. banking organizations to compete internationally. To assess how the U.S. Basel III capital regulation may impact U.S. 
banking organizations, including smaller organizations, we used data from the Consolidated Financial Statements for Holding Companies—Form FR Y-9C (Y-9C) and from the Consolidated Reports of Condition and Income (Call Reports) as of March 31, 2014, to estimate (1) the number of bank holding companies and depository institutions with capital ratios greater than or equal to Basel III minimum capital ratios; (2) the amount of capital bank holding companies and depository institutions would need to meet the U.S. Basel III minimum capital requirements; and (3) the change in funding costs for bank holding companies and depository institutions associated with the amount of capital they would need to meet the minimum capital requirements. We assessed the reliability of the data from the Y-9Cs and Call Reports for these purposes by reviewing relevant documentation and by electronically testing the data for missing or incorrect values and for outliers. For more information on our methodology, our results, and the limitations of our analysis, see appendix II. To understand how the higher capital requirements might affect the cost and availability of credit in terms of three outcomes—the cost of capital to banks, the interest rate paid by borrowers, and the quantity of loans by banks—we conducted a literature survey of recent economic studies that examined the effect of higher capital requirements on these three outcomes. To identify relevant empirical studies, we conducted searches of two databases (ProQuest and EconLit) and identified and selected economic studies from peer-reviewed journals and working papers from governmental institutions that were published from 2011 through 2014. We used search terms for selecting the studies, such as interest rate spread, credit availability, cost of capital, and partial equilibrium.
For articles with abstracts, two team members independently reviewed each abstract to determine if the article addressed the previously identified topics and appeared to contain empirical data. If both reviewers agreed that the article was relevant, it was saved for further review. When reviewers disagreed, a third team member reviewed the abstract and made the final decision. The selected studies then were evaluated to determine if the methods were appropriate or sufficiently rigorous. A GAO economist performed a secondary review and confirmed that the methods met our criteria for methodological quality and were sufficiently rigorous to assess estimates of the cost and availability of capital. Based on our selection criteria, we identified 11 studies. One analyst then performed an in-depth review of the findings and summarized the research in a data collection instrument that captured the title, authors, outcomes of interest and key findings. A GAO economist performed a secondary review and confirmed our reported understanding of the findings. For a complete list of the studies, see the Bibliography. To assess the short-run impact on the cost and availability of credit of bank holding companies or depository institutions raising capital, we used estimates of changes in loan volumes and loan spreads associated with changes in capital from our prior work together with our estimates of the capital shortfall described above. To assess the long-run impact, we used an existing loan pricing model together with our estimates of the changes in funding costs described above. In addition, we judgmentally selected eight community banks based on their total assets and geographic locations and interviewed them to obtain their views on the impact of the U.S. Basel III capital regulations on their compliance costs and credit availability. We defined a community bank as a subsidiary bank with $10 billion or less in assets as of December 31, 2013. 
Although no commonly accepted definition of a community bank exists, this size-based definition has been used by the Board of Governors of the Federal Reserve System (Federal Reserve). Using the SNL database, we developed a list of 5,849 subsidiary banks with assets of less than $10 billion. We then placed the community banks into one of the following four asset categories: (1) $1 to less than $500 million, (2) $500 million to less than $1 billion, (3) $1 billion to less than $5 billion, and (4) $5 billion to less than $10 billion. Based on the U.S. Census classification, we further placed the community banks into one of the following four regions: (1) East, (2) Midwest, (3) South, and (4) West. Within the categories of region and asset size, we randomly selected 10 banks. We assumed that a sample with a mix of different bank sizes and geographic areas would provide a wide range of views and experiences. Nonetheless, the information collected from this sample of banks cannot be generalized to the larger population of all community banks. To ensure that we captured the views of banks that are most prevalent in this population (banks with smaller asset sizes) as well as those from asset categories that have a larger share of total assets (banks with larger asset sizes), we attempted to select at least four banks from the lower two asset categories and four banks from the upper two asset categories. We also attempted to include in our sample at least two from each region while allowing for an additional bank in two regions with a larger number of community banks. In three cases, we were unable to make contact with the sampled bank, so we randomly selected a substitute from the same region and asset category. One bank merged with another, but we retained the merged bank for our sample since it was in the same region and asset category as the bank we originally selected.
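The region-by-asset-size selection described above can be sketched as follows. This is a minimal illustration under stated assumptions: the bank list, the seed, and the helper names are fabricated stand-ins, not the SNL data or GAO's actual selection procedure.

```python
import random

# Hypothetical sketch of stratified selection: banks are grouped into
# region-by-asset-size strata, one bank is drawn at random from each
# stratum, and the sample is then trimmed or topped up to the target
# size. The bank list and seed below are fabricated for illustration.

ASSET_CATEGORIES = [(0, 500e6), (500e6, 1e9), (1e9, 5e9), (5e9, 10e9)]
REGIONS = ["East", "Midwest", "South", "West"]

def asset_category(assets):
    """Index of the asset-size category for a community bank."""
    for i, (lo, hi) in enumerate(ASSET_CATEGORIES):
        if lo <= assets < hi:
            return i
    raise ValueError("not a community bank under the $10 billion cutoff")

def sample_banks(banks, n=10, seed=0):
    rng = random.Random(seed)
    strata = {}
    for bank in banks:
        key = (bank["region"], asset_category(bank["assets"]))
        strata.setdefault(key, []).append(bank)
    # One draw per stratum, then fill to n from the remaining banks.
    picks = [rng.choice(group) for group in strata.values()]
    remaining = [b for b in banks if b not in picks]
    picks += rng.sample(remaining, max(0, n - len(picks)))
    return picks[:n]

# Fabricated population: one bank per region and asset category.
banks = [{"region": r, "assets": a}
         for r in REGIONS for a in (100e6, 700e6, 2e9, 7e9)]
sample = sample_banks(banks)
```

In practice the report also applied judgmental constraints (minimum counts per region and per asset tier) and substituted banks when contact failed, which a purely random draw like this does not capture.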
In two cases, we were unable to make contact with or gain the participation of the originally selected banks or of multiple randomly selected substitute banks. As a result, the final sample consists of eight banks with only one bank in the East. To determine the extent to which jurisdictional differences in implementation of Basel III’s standards may affect how various U.S. banking organizations compete, we judgmentally selected and interviewed 10 global systemically important banks (G-SIBs) operating in the United States, European Union, and Japan to obtain their views on the competitive differences resulting from implementation of the Basel III framework across jurisdictions. To better understand the connection between international competition and jurisdictional differences in implementation, we reviewed law firm legal briefs and client documents, the academic literature on the role capital plays in bank competition, publicly available consulting firm documents, and annual reports and filings issued by publicly traded banking organizations. We also reviewed prior GAO reports and studies on competition issued by the banking regulators to examine historical connections between the regulatory environment an entity faces and its ability to compete internationally. We reviewed the European Union’s Capital Requirements Directive IV and the United Kingdom’s Prudential Regulation Authority Consultation Paper: Strengthening capital standards: implementing CRD IV (August 2013). For both objectives, we reviewed banking regulations for the U.S. Basel III capital standards, the supplementary leverage ratio, the enhanced supplementary leverage ratio, and the liquidity coverage ratio.
We also reviewed prior GAO reports; studies on the Basel III framework and regulatory reform issued by the Federal Deposit Insurance Corporation, the Federal Reserve, the Office of the Comptroller of the Currency, and law firms; and annual reports and filings issued by publicly traded banking organizations. We also interviewed officials from six industry associations representing U.S. or foreign banks (or both) operating in the United States. We conducted this performance audit from December 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To assess how the U.S. Basel III capital regulation may impact U.S. banking organizations, including smaller organizations, we used data from the Consolidated Financial Statements for Holding Companies—Form FR Y-9C (Y-9Cs) and from the Consolidated Reports of Condition and Income (Call Reports) as of March 31, 2014, to estimate (1) the number of bank holding companies and depository institutions with capital ratios that are greater than or equal to Basel III minimum capital ratios; (2) the amount of capital bank holding companies and depository institutions would need to meet the U.S. Basel III minimum capital requirements; and (3) the change in funding costs for bank holding companies and depository institutions associated with the amount of capital they would need to meet the minimum capital standards. We assessed the reliability of the data from the Y-9Cs and Call Reports for these purposes by reviewing relevant documentation and electronically testing the data for missing or incorrect values and outliers.
We discussed the results of our analyses in the report’s body, but we presented only those estimates that combined the minimum capital ratios with the capital conservation buffer. The tables below present our results in more detail. To estimate the number of holding companies and depository institutions with capital ratios that are greater than or equal to Basel III minimum capital ratios, we estimated the amounts of common equity tier 1 capital, additional tier 1 capital, tier 1 capital, tier 2 capital, and total capital (collectively, capital) and risk-weighted assets using the calculations described in Schedule HC-R Parts I.B and II of the Y-9C along with the instructions to these parts of the Y-9C. The amounts of some balance sheet and income statement items used to calculate the amount of capital or the amount of risk-weighted assets cannot be observed for bank holding companies or depository institutions that are not subject to, or that do not elect to use, the advanced approaches rule. We made assumptions about these unobservable amounts that are similar to assumptions made by the Federal Reserve Board for a comparable analysis. We separated bank holding companies and depository institutions into groups based on their size measured in total assets and on their status as an advanced approaches holding company (for bank holding companies) or a subsidiary of an advanced approaches holding company (for depository institutions). The groups of bank holding companies we analyzed are those with $500 million to less than $1 billion in assets, $1 billion to less than $10 billion in assets, $10 billion to less than $50 billion in assets, $50 billion or more in assets but not using the advanced approaches, and $50 billion or more in assets and using the advanced approaches.
The groups of depository institutions we analyzed are those with less than $1 billion in assets, $1 billion to less than $10 billion in assets, $10 billion to less than $50 billion in assets, $50 billion or more in assets, and those that are subsidiaries of advanced approaches holding companies (regardless of their size). We used our estimates of the amounts of capital and risk-weighted assets to estimate the ratios of common equity tier 1 capital to risk-weighted assets, tier 1 capital to risk-weighted assets, total capital to risk-weighted assets, and tier 1 capital to average assets for each bank holding company and depository institution. We then compared the estimated capital ratios to the Basel III minimum capital ratios, both with and without the capital conservation buffer, and counted the numbers of bank holding companies and depository institutions with estimated capital ratios that met and did not meet the Basel III minimum capital ratios. To estimate the amount of capital bank holding companies and depository institutions would need to raise to meet U.S. Basel III capital requirements, we calculated the amount of capital required to meet the U.S. Basel III minimum (the capital shortfall), in billions of dollars and as a percentage of total assets for each capital ratio, for bank holding companies and depository institutions with capital ratios less than the Basel III minimums. For each capital ratio, we then calculated the median capital shortfall for bank holding companies and depository institutions with insufficient capital relative to the Basel III minimums. For each capital ratio, we also calculated the total capital shortfall for all bank holding companies and depository institutions with insufficient capital in billions of dollars and as a percentage of the total assets of all bank holding companies and depository institutions we analyzed. 
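As a minimal numeric sketch of the shortfall calculation described above, consider the common equity tier 1 (CET1) ratio only. The institution values below are hypothetical, not Y-9C or Call Report data; only the 4.5 percent minimum and 2.5 percent conservation buffer come from the Basel III standards.

```python
from statistics import median

# Hypothetical sketch of the capital shortfall calculation: for an
# institution below the minimum, the shortfall is the capital needed
# to bring its ratio up to the Basel III minimum, i.e.
# (required ratio x risk-weighted assets) - current capital.
# Ratios and dollar amounts below are illustrative only.

CET1_MINIMUM = 0.045          # common equity tier 1 minimum ratio
CONSERVATION_BUFFER = 0.025   # capital conservation buffer

def cet1_shortfall(cet1_capital, rwa, with_buffer=True):
    """Dollar shortfall against the CET1 minimum (0 if already met)."""
    required = CET1_MINIMUM + (CONSERVATION_BUFFER if with_buffer else 0.0)
    return max(0.0, required * rwa - cet1_capital)

# Three hypothetical institutions: (CET1 capital, risk-weighted
# assets), in millions of dollars.
institutions = [(80.0, 1000.0), (60.0, 1000.0), (50.0, 800.0)]
shortfalls = [cet1_shortfall(c, rwa) for c, rwa in institutions]

total_shortfall = sum(shortfalls)
median_shortfall = median(s for s in shortfalls if s > 0)
```

The first institution's 8 percent CET1 ratio already exceeds the 7 percent required with the buffer, so its shortfall is zero; the other two fall short by roughly $10 million and $6 million, mirroring how the report computes per-institution shortfalls and then takes medians and totals across the deficient institutions.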
To estimate the change in funding costs for bank holding companies and depository institutions that need to raise capital, we first estimated the change in funding cost per dollar of assets associated with a 1 percentage point increase in the ratio of equity capital to assets by calculating the difference between return on equity and the after-tax interest rate on debt. We used the median return on equity (net income as a percentage of equity capital) and the median interest rate on debt (interest expense as a percentage of interest-bearing liabilities) for each group of bank holding companies and depository institutions for the first quarter of 2014, and we assumed that the marginal corporate income tax rate equaled 35 percent. For bank holding companies and depository institutions of different sizes and different status (as advanced approaches holding companies or subsidiaries of advanced approaches holding companies) and with capital ratios less than the Basel III minimums, we estimated the median change in funding cost associated with raising capital sufficient to meet the Basel III minimums by multiplying the median capital shortfall as a percentage of assets by the estimated change in funding cost. Our estimates of the numbers of bank holding companies and depository institutions with capital ratios exceeding Basel III minimums and of the capital shortfall are subject to limitations. Most importantly, the amounts of some balance sheet and income statement items used to calculate the amount of capital or the amount of risk-weighted assets cannot be observed for bank holding companies or depository institutions that are not subject to or that do not elect to use the advanced approaches rule. 
We made assumptions about these unobservable amounts that are similar to assumptions the Board of Governors of the Federal Reserve System made for a comparable analysis. However, we cannot assess the extent to which our estimates overstate or understate the numbers of bank holding companies and depository institutions that already met Basel III capital standards or the capital shortfall. In addition, some bank holding companies and depository institutions may want to maintain capital in excess of the regulatory minimum levels to satisfy investors or other market participants. In this case, our estimates likely understate the number of bank holding companies and depository institutions that will raise capital and also understate the amount of capital raised. Our estimates of the increase in funding costs associated with raising capital also are subject to several limitations. First, as we discuss above, our estimates of the capital shortfall are subject to limitations and may overstate or understate the amount of capital that bank holding companies and depository institutions raise in response to the new Basel III standards. Because the increase in funding costs is related to the size of the capital shortfall, our estimates of the increase may be overstated or understated. In particular, some bank holding companies or depository institutions may maintain capital in excess of the minimum requirements (a capital buffer). The larger the capital buffer, the more funding costs will increase and the more our estimates will understate them. In addition, our estimates reflect the median amounts of capital required by bank holding companies and depository institutions we estimated to have insufficient capital to meet Basel III standards; they may not reflect the specific circumstances of an individual bank holding company or depository institution that may need to raise capital and may overstate or understate the change in its funding cost.
Furthermore, our estimates reflect the median return on equity and interest rate on debt that prevailed in the first quarter of 2014, as well as our assumption of a corporate income tax rate of 35 percent. However, equity returns, debt interest rates, and tax rates may change, altering the relative prices of debt and equity and thus altering the change in funding costs associated with substituting equity for debt. Finally, our estimates assume that the return on equity will not change when a bank holding company or depository institution increases its capital ratio. However, increasing reliance on equity funding reduces the risks to investors, all else being equal. If a bank holding company or depository institution increased its ratio of capital to assets, then the return on its equity could fall as investors demanded less of a risk premium. We used our estimates of the amounts of capital and risk-weighted assets to estimate the ratios of common equity tier 1 capital to risk-weighted assets, tier 1 capital to risk-weighted assets, total capital to risk-weighted assets, and tier 1 capital to average assets for each bank holding company and depository institution. We then compared the estimated capital ratios to the Basel III minimum capital ratios, with and without the capital conservation buffer, and counted the numbers of bank holding companies and depository institutions with estimated capital ratios that met and did not meet the Basel III minimum capital ratios. Our estimates are presented in table 8. For bank holding companies and depository institutions with capital ratios less than the Basel III minimums, we calculated the amount of capital required to meet the Basel III minimums (the capital shortfall), both in billions of dollars and as a percentage of total assets. For each capital ratio, we then calculated the median capital shortfall for bank holding companies and depository institutions with insufficient capital relative to the Basel III minimums. 
Our estimates are presented in table 9. For each capital ratio, we also calculated the total capital shortfall for all bank holding companies and depository institutions with insufficient capital in billions of dollars and as a percentage of the total assets of all the bank holding companies and depository institutions we analyzed. Our estimates are presented in table 10. For bank holding companies and depository institutions of different sizes and different status as advanced approaches holding companies or subsidiaries of advanced approaches holding companies, we estimated the change in funding cost per dollar of assets associated with a 1 percentage point increase in the ratio of equity capital to assets. Funding costs are determined by the prices of equity and debt financing sources and the amounts used of each. Because interest payments on debt are tax-deductible, a more leveraged capital structure reduces corporate taxes, lowering funding costs. Thus, an increase in the required amount of equity capital would increase a bank’s cost of capital. The increased funding cost associated with a 1 percentage point increase in the capital ratio of a bank holding company or depository institution is approximately equal to the difference between the return on equity and the after-tax interest rate on debt, all else being equal. We used the median return on equity (net income as a percentage of equity capital) and the median interest rate on debt (interest expense as a percentage of interest-bearing liabilities) for each group of bank holding companies and depository institutions for the first quarter of 2014, and we assumed that the marginal corporate income tax rate is equal to 35 percent. Our estimates are presented in table 11. 
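The funding-cost arithmetic described above can be illustrated with a short sketch. The 9 percent return on equity and 2 percent debt rate below are invented for illustration (the report used first-quarter 2014 group medians); only the 35 percent tax rate is taken from the methodology.

```python
# Sketch of the funding-cost estimate: the added cost per dollar of
# assets from substituting equity for debt is approximately the
# return on equity minus the after-tax interest rate on debt,
# multiplied by the increase in the equity-to-assets ratio.
# The ROE and debt rate below are hypothetical.

TAX_RATE = 0.35  # assumed marginal corporate income tax rate

def funding_cost_change(roe, debt_rate, shortfall_pct_of_assets):
    """Change in funding cost per dollar of assets.

    roe                      return on equity (0.09 = 9 percent)
    debt_rate                pre-tax interest rate on debt
    shortfall_pct_of_assets  capital shortfall as a share of assets
                             (0.01 = 1 percentage point)
    """
    per_unit = roe - (1 - TAX_RATE) * debt_rate
    return per_unit * shortfall_pct_of_assets

# With a 9 percent ROE and 2 percent debt rate, replacing debt with
# equity equal to 1 percent of assets raises funding costs by about
# 7.7 basis points of assets.
delta = funding_cost_change(roe=0.09, debt_rate=0.02,
                            shortfall_pct_of_assets=0.01)
```

This mirrors the two steps in the methodology: estimate the per-percentage-point cost from the return-on-equity and after-tax debt-rate medians, then scale it by the capital shortfall as a percentage of assets.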
For bank holding companies and depository institutions of different sizes and different status as advanced approaches holding companies or subsidiaries of advanced approaches holding companies and with capital ratios less than the Basel III minimums, we estimated the median change in funding cost associated with raising capital sufficient to meet the Basel III minimums by multiplying the median capital shortfall as a percentage of assets by the estimated change in funding cost. Our estimates are presented in table 12. In addition to the contact name above, Richard Tsuhara (Assistant Director), Nancy Eibeck (Analyst-in-Charge), Jessica Artis, Chloe F. Brown, Pamela R. Davidson, Donald P. Hirasuna, Courtney L. LaFountain, Jon D. Menaster, Marc W. Molino, Barbara M. Roesmann, and Jessica M. Sandler made significant contributions to this report. We reviewed 11 recent empirical studies (published from 2011 through 2014) that examined how higher capital standards might affect the cost and availability of credit in terms of three outcomes—the cost of capital to banks, the interest rate paid by borrowers, and the quantity of loans by banks. Agénor, Pierre-Richard, Koray Alper, and Luiz Pereira da Silva. “Capital Regulation, Monetary Policy, and Financial Stability.” International Journal of Central Banking, vol. 9, no. 3 (September 2013). Baker, Malcolm, and Jeffrey Wurgler. “Do Strict Capital Requirements Raise the Cost of Capital? Banking Regulation and the Low Risk Anomaly.” National Bureau of Economic Research working paper 19018 (May 2013). Cosimano, Thomas F., and Dalia S. Hakura. “Bank Behavior in Response to Basel III: A Cross-Country Analysis.” International Monetary Fund working paper 11-119 (May 2011). Corbae, Dean, and Pablo D’Erasmo. “Capital Requirements in a Quantitative Model of Banking Industry Dynamics.” Federal Reserve Bank of Philadelphia working paper 14-13 (April 2014). Gauthier, Céline, Alfred Lehar, and Moez Souissi.
“Macroprudential Capital Requirements and Systemic Risk.” Journal of Financial Intermediation, vol. 21, no. 4 (October 2012). Martín-Oliver, Alfredo, Sonia Ruano, and Vicente Salas-Fumás. “Banks’ Equity Capital Frictions, Capital Ratios, and Interest Rates: Evidence from Spanish Banks.” International Journal of Central Banking, vol. 9, no. 1 (March 2013). Pariès, Matthieu Darracq, Christoffer Kok Sørensen, and Diego Rodriguez-Palenzuela. “Macroeconomic Propagation under Different Regulatory Regimes: Evidence from an Estimated DSGE Model for the Euro Area.” International Journal of Central Banking, vol. 7, no. 4 (December 2011). Roger, Scott, and Francis Vitek. “The Global Macroeconomic Costs of Raising Bank Capital Adequacy Requirements.” International Monetary Fund working paper 12-44 (February 2012). Slovik, Patrick, and Boris Cournède. “Macroeconomic Impact of Basel III.” Organisation for Economic Co-operation and Development Economics Department working paper 844 (February 14, 2011). Šútorová, Barbora, and Petr Teplý. “The Impact of Basel III on Lending Rates of EU Banks.” Czech Journal of Economics and Finance, vol. 63, no. 3 (2013). Yan, Meilan, Maximilian J.B. Hall, and Paul Turner. “A Cost–Benefit Analysis of Basel III: Some Evidence from the UK.” International Review of Financial Analysis, vol. 25 (December 2012).
|
The 2007-2009 financial crisis revealed that many U.S. and international banks lacked capital of sufficient quality and quantity to absorb substantial losses. In 2010, the Basel Committee (the global standard-setter for prudential bank regulation) issued the Basel III framework—comprehensive reforms to strengthen global capital and liquidity standards with the goal of promoting a more resilient banking sector. In 2013, federal banking regulators adopted regulations to implement the Basel III-based capital standards in the United States, which generally apply to U.S. bank holding companies and banks and are being phased in through 2019. Some market participants have raised questions about the potential negative impact of the regulations on U.S. banks, including on their lending and competitiveness. This report examines how (1) the U.S. Basel III regulations may affect U.S. banks, including smaller ones, and (2) implementation of Basel III by different countries and other jurisdictions may affect U.S. banking organizations' international competitiveness. To address the objectives, GAO analyzed data from financial filings; conducted legal and economic analysis; reviewed empirical studies, federal regulations, and agency documents; and interviewed regulators, U.S. and foreign banks, and industry associations. GAO makes no recommendations in this report. GAO provided a draft of this report to the banking regulators for their review and comment and received technical comments, which were incorporated as appropriate. Although the U.S. Basel III capital requirements may increase compliance costs, they likely will have a modest impact on lending activity as most banks may not need to raise additional capital to meet the minimum requirements. 
GAO's analyses of financial data for the first quarter of 2014 indicate the vast majority of bank holding companies and banks currently meet the new minimum capital ratios and capital conservation buffer (an additional capital requirement) at the fully phased-in levels required by 2019. GAO estimated that less than 10 percent of bank holding companies fell short and that, collectively, they would need to raise less than $5 billion in additional capital to cover the shortfall. Banks with a shortfall tended to be small, with less than $1 billion in assets. The empirical research GAO reviewed suggests that higher regulatory capital requirements will have a modest effect on the cost and availability of credit. Similarly, GAO's economic analysis indicates that raising the additional capital would lead to a modest decline in lending and a modest increase in loan rates. Officials from the eight community banks GAO interviewed said they do not anticipate any difficulties meeting the capital requirements but expect to incur additional compliance costs. Officials from the 10 global systemically important banks that GAO interviewed said they have been incurring significant costs to comply with the new requirements, but three said that U.S. minimum capital ratios for Basel III tend not to be the binding capital constraint. Most of these bank officials said they expect the requirements to improve the resilience of the banking system. Jurisdictional differences in the implementation of the Basel III capital standards have arisen, but their competitive effect on internationally active banks is unclear. Basel III serves, in part, to limit competitive disparities due to differences in capital standards, but there are limitations to full harmonization. 
For example, the Basel capital standards have no legal force; rather, members of the Basel Committee on Banking Supervision (Basel Committee) developed and agreed to the standards, with the expectation that each member will implement them. Thus, jurisdictions may adopt requirements more or less stringent than the minimum standards. Almost all Basel Committee members report having adopted rules to implement the Basel III capital requirements. To help promote a level regulatory playing field, the Basel Committee began conducting reviews in 2012 to assess whether each member's implementation meets the Basel III minimum standards and whether implementation produced consistent outcomes across jurisdictions. These reviews found the rules of the seven members assessed to date to be generally compliant. However, the Basel Committee's other reviews identified some inconsistencies in how banks across different jurisdictions calculated their risk-weighted assets. As was the case with Basel II implementation, some banks and others are concerned about jurisdictional differences in the implementation of Basel III and their effect on competition. For example, some jurisdictions are subjecting certain of their banks to capital or leverage requirements above the Basel III minimums or exempting banks from certain capital requirements. Because Basel III's implementation is ongoing, the extent to which the differences collectively will affect competition among internationally active banks is unclear. In addition, other factors can affect the competitive position of internationally active banks, such as differences in accounting treatment, cost of capital, and tax rules across jurisdictions.
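The screening analysis summarized above can be illustrated with a minimal sketch: compare each institution's common equity tier 1 (CET1) ratio to the fully phased-in 4.5 percent minimum plus the 2.5 percent capital conservation buffer, then total the shortfall for those below it. The sample institutions and dollar amounts here are hypothetical, not the filing data GAO analyzed.

```python
# Sketch of a fully phased-in Basel III capital screen: which
# institutions fall below the CET1 minimum plus conservation
# buffer, and what aggregate shortfall would they need to raise.
# Sample banks are hypothetical.

MIN_CET1 = 0.045             # Basel III CET1 minimum ratio
CONSERVATION_BUFFER = 0.025  # capital conservation buffer
REQUIRED = MIN_CET1 + CONSERVATION_BUFFER  # 7% fully phased in

# (name, CET1 capital, risk-weighted assets) -- hypothetical figures
banks = [
    ("A", 120.0, 1000.0),  # 12.0% ratio: meets requirement
    ("B", 80.0, 1000.0),   # 8.0% ratio: meets requirement
    ("C", 65.0, 1000.0),   # 6.5% ratio: falls short
]

# Shortfall for each bank below the required ratio.
short = [(name, REQUIRED * rwa - cet1)
         for name, cet1, rwa in banks
         if cet1 / rwa < REQUIRED]

share_below = len(short) / len(banks)
aggregate_shortfall = sum(gap for _, gap in short)
```

In this toy sample, only bank C falls short, by 5.0 units of capital; GAO's actual screen applied the same comparison to first-quarter 2014 financial filings.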
|
More than a year after TRIA’s enactment, Treasury and insurance industry participants have made progress in implementing and complying with its provisions, although Treasury has yet to fully implement the 3-year program. Treasury has issued regulations (final rules) to guide insurance market participants, fully staffed the TRIP office, and begun collecting data and performing studies mandated by TRIA. For example, Treasury complied with a mandate to collect and assess data on the availability of group life insurance and reinsurance; based on that data, Treasury determined that group life would not be covered by TRIA. However, Treasury has yet to make the claims payment function fully operational, although it has recently hired contractors to perform claims payment functions. Moreover, even though the act does not require Treasury to make a decision about whether to extend the “make available” requirement through 2005 until September of this year, some insurers expressed concerns about whether such a late decision would allow them sufficient time to make and implement changes to policy rates and terms. Additionally, insurers have voiced concerns about the time Treasury might take to certify an act of terrorism as eligible for reimbursement under TRIA and pay claims after an act was certified. Finally, as TRIA’s midpoint nears, many insurers and other market participants are concerned about whether TRIA will be extended and about the timing of such a decision. To implement TRIA and make TRIP functional, Treasury has taken numerous regulatory and administrative actions that include rulemaking, staffing a program office, and collecting and analyzing data. To date, Treasury has issued several final and proposed rules to implement TRIA; these rules were preceded by four sets of interim guidance issued between December 2002 and March 2003 to address time-sensitive requirements. 
As of March 1, 2004, Treasury had issued three final rules that provided uniform definitions of TRIA terms, explained disclosure (that is, notification to policyholder) requirements, and determined which insurers were subject to TRIA. Currently, Treasury is soliciting public comments on additional proposed rules addressing claims processes and litigation management issues. Also, as of September 2003, Treasury had fully staffed the TRIP office. The office develops and oversees the operational aspects of TRIA, which encompass claims management—processing, review, and payment—and auditing functions. Staff will also oversee operations performed by the contractors that actually pay claims and audit insurers that have filed claims. Additionally, TRIP staff perform ongoing work such as issuing interpretive letters in response to questions submitted by the public and educating regulators, industry participants, and the public about TRIA provisions. Treasury completed a TRIA-mandated study on group life insurance and has begun other mandated studies and data collection efforts. Specifically, TRIA mandated that Treasury provide information to Congress in four areas: (1) the effects of terrorism on the availability of group life insurance, (2) the effects of terrorism on the availability of life and other lines of insurance, (3) annual data on premium rates, and (4) the effectiveness of TRIA. After Treasury completed an assessment of the availability of group life insurance and reinsurance, it decided not to make group life insurance subject to TRIA because it found that insurers had continued to provide group life coverage, although the availability of reinsurance was reduced. Treasury has not yet reported to Congress the results of a mandated study concerning the effects of terrorism on the availability of life and other lines of insurance. The study was to have been completed by August 2003, but as of March 2004 the report had not been issued. 
Also, in November 2003 and January 2004, Treasury began sending surveys to buyers and sellers of insurance, respectively, to collect data on annual premium rates as well as other information for the study that will assess the effectiveness of TRIA. Before TRIA is fully implemented, Treasury must make certain decisions and make additional TRIP functions operational. As of April 2004, Treasury had not yet decided whether to extend the “make available” requirement to policies issued or renewed in 2005. TRIA gave Treasury until September 1, 2004, to decide if the “make available” requirement should be extended for policies issued or renewed in 2005, the third and final year of the act. Treasury did clarify in a press release that the “make available” requirement for annual policies issued or renewed in 2004 extends until the policy expiration date, even though the coverage period extends into 2005. In addition, Treasury has not fully established a claims processing and payment structure. Treasury has issued a proposed rule that would establish an initial framework for the claims process, which includes procedural and recordkeeping requirements for insurers. However, the actual claims processing and payment function is not fully operational. A Treasury official said that the department has recently hired a contractor to perform payment functions in the aftermath of a terrorist attack. However, Treasury has not yet written regulations to cover the latter stages of the claims process, such as adjusting over- and underpayments, nor has it hired a separate contractor to review claims and audit insurers after an event to ensure that underlying documents adequately support the claims paid by Treasury. Treasury officials anticipate awarding this audit and review contract in the fourth quarter of fiscal year 2004. Insurers have expressed some concerns about Treasury’s implementation of TRIA. 
Insurers are concerned that Treasury has not already made a decision about extending the “make available” requirement through 2005. They are also concerned about the potential length of time it may take for the Secretary of the Treasury to certify a terrorist event, potential inefficiencies and time lags in processing and paying claims once an event is certified, and the issue of TRIA expiration. As discussed already, TRIA gives Treasury until September 2004 to make a decision about the “make available” requirement for policies issued or renewed in 2005. Insurers have stated that this deadline does not give them enough time to make underwriting decisions and evaluate and possibly revise prices and terms, actions they normally would want to undertake in mid-2004. Moreover, in most states insurers will have to obtain regulatory approval for such changes because TRIA’s preemption of the states’ authority to approve insurance policy rates and conditions expired on December 31, 2003. Thus, insurers are concerned that delay of Treasury’s announcement on the “make available” extension until the legal deadline may cost both companies and policyholders money because policy changes will not be implemented in time to issue or renew policies. Insurers are also concerned that delays in the payment of claims by Treasury, whether because of the length of time taken to certify that an act of terrorism met the requirements for federal reimbursement or to process and pay claims, might seriously impact insurer cash flows or, in certain circumstances, solvency. While TRIA does not specify the length of time available for determining whether an event meets the criteria for certification, an NAIC official told us that insurers are bound by law and regulations in most states to pay claims in a timely manner. As a result, an insurer may have to pay policyholder claims in full while still awaiting a certification decision, which could create a cash flow problem for insurers. 
Insurers identified the anthrax letter incidents as an example where law enforcement officials still have not identified the source, whether foreign or domestic, more than 2 years after the incidents. Moreover, if Treasury decided not to certify an event after insurers had already paid policyholder claims, some insurers could become insolvent. Unless the policyholder had paid for coverage of all terrorist events—including those caused by domestic terrorists, which would be excluded from reimbursement under TRIA—insurers would have paid for losses for which they had collected no premium. An NAIC official explained that insurers would have no way to recover payments already made to policyholders for losses associated with the event other than to seek remedies through the courts. Treasury officials have said that they understand the difficulties facing insurers but cannot impose a time frame on the certification process because it could involve complex fact-finding processes. To facilitate the certification process, Treasury has met with relevant individuals within the Department of Justice and the Department of State to discuss their roles in the certification process. Insurers are similarly concerned that the length of time Treasury may take to process and pay claims could impact insurers’ cash flow. In response to this concern, Treasury has decided to use electronic fund transfers to insurers’ accounts to speed reimbursement to insurers with approved claims. Treasury expects this method could speed payment of claims and reduce potential cash flow problems for insurers. Finally, insurance industry officials are worried that uncertainty about the extension of TRIA past its stated expiration date of December 2005 would impede their business and planning processes. 
Although TRIA does not contain any specific extension provisions, industry participants are concerned that a late decision on whether or not to extend TRIA would deny them the time needed to tailor business operations and plans to an insurance environment that either would or would not contain TRIA. While TRIA has improved the availability of terrorism insurance, particularly for high-risk properties in major metropolitan areas, most commercial policyholders are not buying the coverage. Limited industry data suggest that 10–30 percent of commercial policyholders are purchasing terrorism insurance, perhaps because most policyholders perceive themselves at relatively low risk for a terrorist event. Some industry experts are concerned that those most at risk from terrorism are generally the ones buying terrorism insurance. In combination with low purchase rates, these conditions could result in uninsured losses for those businesses without terrorism coverage or cause financial problems for insurers, should a terrorist event occur. Moreover, even policyholders who have purchased terrorism insurance may remain uninsured for significant risks arising from certified terrorist events—that is, those meeting statutory criteria for reimbursement under TRIA—such as those involving NBC agents or radioactive contamination. Finally, although insurers and some reinsurers have cautiously reentered the terrorism risk market, insurance industry participants have made little progress toward developing a mechanism that could permit the commercial insurance market to resume providing terrorism coverage without a government backstop. TRIA has improved the availability of terrorism insurance, especially for some high-risk policyholders. According to insurance and risk management experts, these were the policyholders who had difficulty finding coverage before TRIA. TRIA requires that insurers “make available” coverage for terrorism on terms not differing materially from other coverage. 
Largely because of this requirement, terrorism insurance has been widely available, even for development projects in high-risk areas of the country. Although industry data on policyholder characteristics are limited and cannot be generalized to all policyholders in the United States, risk management and real estate representatives generally agree that after TRIA was passed, policyholders—including borrowers obtaining mortgages for “trophy” properties, owners and developers of high-risk properties in major city centers, and those in or near “trophy” properties—were able to purchase terrorism insurance. Additionally, TRIA contributed to better credit ratings for some commercial mortgage-backed securities. For example, prior to TRIA’s passage, the credit ratings of certain mortgage-backed securities, in which the underlying collateral consisted of a single high-risk commercial property, were downgraded because the property lacked or had inadequate terrorism insurance. The credit ratings for other types of mortgage-backed securities, in which the underlying assets were pools of many types of commercial properties, were also downgraded but not to the same extent because the number and variety of properties in the pool diversified their risk of terrorism. Because TRIA made terrorism insurance available for the underlying assets, thus reducing the risk of losses from terrorist events, it improved the overall credit ratings of mortgage-backed securities, particularly single-asset mortgage-backed securities. Credit ratings affect investment decisions that revolve around factors such as interest rates because higher credit ratings result in lower costs of capital. According to an industry expert, investors use credit ratings as guidance when evaluating the risk of mortgage-backed securities for investment purposes. Higher credit ratings reflect lower credit risks. 
The typical investor response to lower credit risks is to accept lower returns, thereby reducing the cost of capital, which translates into lower interest rates for the borrower. To the extent that the widespread availability of terrorism insurance is a result of TRIA’s “make available” requirement, Treasury’s decision on whether to extend the requirement to year three of the program is vitally important. While TRIA has ensured the availability of terrorism insurance, we have little quantitative information on the prices charged for this insurance. Treasury is engaged in gathering data through surveys that should provide useful information about terrorism insurance prices. TRIA requires that Treasury make the information available to Congress upon request. In addition, TRIA requires Treasury to assess the effectiveness of the act and evaluate the capacity of the industry to offer terrorism insurance after its expiration. This report is to be delivered to Congress no later than June 30, 2005. Although TRIA improved the availability of terrorism insurance, relatively few policyholders have purchased terrorism coverage. We testified previously that prior to September 11, 2001, policyholders enjoyed “free” coverage for terrorism risks because insurers believed that this risk was so low that they provided the coverage without additional premiums as part of the policyholder’s general property insurance policy. After September 11, prices for coverage increased rapidly and, in some cases, insurance became very difficult to find at any price. Although a purpose of TRIA is to make terrorism insurance available and affordable, the act does not specify a price structure. 
However, experts in the insurance industry generally agree that after the passage of TRIA, low-risk policyholders (for example, those not in major urban centers) received relatively low-priced offers for terrorism insurance compared to high-risk policyholders, and some policyholders received terrorism coverage without additional premium charges. Yet according to insurance experts, despite low premiums, many businesses (especially those not in “target” localities or industries) did not buy terrorism insurance. Some simply may not have perceived themselves at risk from terrorist events and considered terrorism insurance, even at low premiums (relative to high-risk areas), a bad investment. According to insurance sources, other policyholders may have deferred their decision to buy terrorism insurance until their policy renewal date. Some industry experts have voiced concerns that low purchase rates may indicate adverse selection—where those at the most risk from terrorism are generally the only ones buying terrorism insurance. Although industry surveys are limited in their scope and not appropriate for marketwide projections, the surveys are consistent with each other in finding low “take-up” rates, the percentage of policyholders buying terrorism insurance, ranging from 10 to 30 percent. According to one industry survey, the highest take-up rates have occurred in the Northeast, where premiums were generally higher than the rest of the country. The combination of low take-up rates and high concentration of purchases in an area thought to be most at risk raises concerns that, depending on its location, a terrorist event could have additional negative effects. 
If a terrorist event took place in a location not thought to be a terrorist “target,” where most businesses had chosen not to purchase terrorism insurance, then businesses would receive little funding from insurance claims for business recovery efforts, with consequent negative effects on owners, employees, suppliers, and customers. Alternatively, if the terrorist event took place in a location deemed to be a “target,” where most businesses had purchased terrorism insurance, then adverse selection could result in significant financial problems for insurers. A small customer base of geographically concentrated, high-risk policyholders could leave insurers unable to cover potential losses, facing possible insolvency. If, however, a higher percentage of business owners had chosen to buy the coverage, the increased number of policyholders would have reduced the chance that losses in any one geographic location would create a significant financial problem for an insurer. Since September 11, 2001, the insurance industry has moved to tighten long-standing exclusions from coverage for losses resulting from NBC attacks and radiation contamination. As a result of these exclusions and the actions of a growing number of state legislatures to exclude losses from fire following a terrorist attack, even those policyholders who choose to buy terrorism insurance may be exposed to potentially significant losses. Although NBC coverage was generally not available before September 11, after that event insurers and reinsurers recognized the enormity of potential losses from terrorist events and introduced new practices and tightened policy language to further limit as much of their loss exposures as possible. (We discuss some of these practices and exclusions in more detail in the next section.) State regulators and legislatures have approved these exclusions, allowing insurers to restrict the terms and conditions of coverage for these perils. 
Moreover, because TRIA’s “make available” requirements state that terms for terrorism coverage be similar to those offered for other types of policies, insurers may choose to exclude the perils from terrorism coverage just as they have in other types of coverage. According to Treasury officials, TRIA does not preclude Treasury from providing reimbursement for NBC events, if insurers offered this coverage. However, policyholder losses from perils excluded from coverage, such as NBCs, would not be “insured losses” as defined by TRIA and would not be covered even in the event of a certified terrorist attack. In an increasing number of states, policyholders may not be able to recover losses from fire following a terrorist event if the coverage in those states is not purchased as part of the offered terrorism coverage. We have previously reported that approximately 30 states had laws requiring coverage for fire following an event—known as the standard fire policy (SFP)—irrespective of the fire’s cause. Therefore, in SFP states fire following a terrorist event is covered whether there is insurance coverage for terrorism or not. After September 11, some legislatures in SFP states amended their laws to allow the exclusion of fire following a terrorist event from coverage. As of March 1, 2004, 7 of the 30 SFP states had amended their laws to allow for the exclusion of acts of terrorism from statutory coverage requirements. However, as discussed previously, the “make available” provision requires coverage terms offered for terrorist events to be similar to coverage for other events. Treasury officials explained that in all non-SFP states, and the seven states with modified SFPs, insurers must include in their offer of terrorism insurance coverage for fire following a certified terrorist event because coverage for fire is part of the property coverage for all other risks. 
Thus, policyholders who have accepted the offer would be covered for fire following a terrorist event, even though their state allows exclusion of the coverage. However, policyholders who have rejected their offer of coverage for terrorism insurance would not be covered for fire following a terrorist event. According to insurance experts, losses from fire damage can be a relatively large proportion of the total property loss. As a result, excluding terrorist events from SFP requirements could result in potentially large losses that cannot be recovered if the policyholder did not purchase terrorism coverage. For example, following the 1994 Northridge earthquake in California, total insured losses for the earthquake were $15 billion—$12.5 billion of which were for fire damage. According to an insurance expert, policyholders were able to recover losses from fire damage because California is an SFP state, even though most policies had excluded coverage for earthquakes. Under TRIA, reinsurers are offering a limited amount of coverage for terrorist events for insurers’ remaining exposures, but insurers have not been buying much of this reinsurance. According to insurance industry sources, TRIA’s ceiling on potential losses has enabled reinsurers to return cautiously to the market. That is, reinsurers generally are not offering coverage for terrorism risk beyond the limits of the insurer deductibles and the 10 percent share that insurers would pay under TRIA (see app. I). In spite of reinsurers’ willingness to offer this coverage, company representatives have said that many insurers have not purchased reinsurance. Insurance experts suggested that the low demand for the reinsurance might reflect, in part, commercial policyholders’ generally low take-up rates for terrorism insurance. 
Moreover, insurance experts also have suggested that insurers may believe that the price of reinsurance is too high relative to the premiums they are earning from policyholders for terrorism insurance. The relatively high prices charged for the limited amounts of terrorism reinsurance available are probably the result of interrelated factors. First, even before September 11 both insurance and reinsurance markets were beginning to harden; that is, prices were beginning to increase after several years of lower prices. Reinsurance losses resulting from September 11 also depressed reinsurance capacity and accelerated the rise in prices. The resulting hard market for property-casualty insurance affected the price of most lines of insurance and reinsurance. A notable example has been the market for medical malpractice insurance. The hard market is only now showing signs of coming to an end, with a resulting stabilization of prices for most lines of insurance. In addition to the effects of the hard market, reinsurer awareness of the adverse selection that may be occurring in the commercial insurance market could be another factor contributing to higher reinsurance prices. Adverse selection usually represents a larger-than-expected exposure to loss. Reinsurers are likely to react by increasing prices for the terrorism coverage that they do sell. In spite of the reentry of reinsurers into the terrorism market, insurance experts said that without TRIA caps on potential losses, both insurers and reinsurers likely still would be unwilling to sell terrorism coverage because they have not found a reliable way to price their exposure to terrorist losses. According to industry representatives, neither insurers nor reinsurers can estimate potential losses from terrorism or determine prices for terrorism insurance without a pricing model that can estimate both the frequency and the severity of terrorist events. 
Reinsurance experts said that current models of risks for terrorist events do not have enough historical data to dependably estimate the frequency or severity of terrorist events, and therefore cannot be relied upon for pricing terrorism insurance. According to the experts, the models can predict a likely range of insured losses resulting from the damage if specific event parameters such as type and size of weapon and location are specified. However, the models are unable to predict the probability of such an attack. Even as they are charging high prices, reinsurers are covering less. In response to the losses of September 11, industry sources have said that reinsurers have changed some practices to limit their exposures to acts of terrorism. For example, reinsurers have begun monitoring their exposures by geographic area, requiring more detailed information from insurers, introducing annual aggregate and event limits, excluding large insurable values, and requiring stricter measures to safeguard assets and lives where risks are high. And as discussed previously, almost immediately after September 11 reinsurers began broadening NBC exclusions beyond scenarios involving industrial accidents, to include events such as nuclear plant accidents and chemical spills and to encompass intentional destruction by terrorists. For example, post-September 11 exclusions for nuclear risks include losses from radioactive contamination to property and radiation sickness from dirty bombs. As of March 1, 2004, industry sources indicated that there has been little movement among insurers or reinsurers toward developing a private-sector mechanism that could provide capacity, without government involvement, to absorb losses from terrorist events. Industry officials have said that their level of willingness to participate more fully in the terrorism insurance market in the future will be determined, in part, by whether any more events occur. 
Industry sources could not predict if reinsurers would return to the terrorism insurance market after TRIA expires, even after several years and in the absence of further major terrorist attacks in the United States. They explained that reinsurers are still recovering from the enormous losses of September 11 and still cannot price terrorism coverage. In the long term and without another major terrorist attack, insurance and reinsurance companies might eventually return. However, should another major terrorist attack take place, reinsurers told us that they would not return to this market—with or without TRIA. Congress had two major objectives in establishing TRIA. The first was to ensure that business activity did not suffer from the lack of insurance by requiring insurers to continue to provide protection from the financial consequences of another terrorist attack. Since TRIA was enacted in November 2002, terrorism insurance generally has been widely available even for development projects in high-risk areas of the country, in large part because of TRIA’s “make available” requirement. Although most businesses are not buying coverage, there is little evidence that commercial development has suffered to a great extent—even in lower-risk areas of the country, where purchases of coverage may be lowest. Further, although quantifiable evidence is lacking on whether the availability of terrorism coverage under TRIA has contributed to the economy, the current revival of economic activity suggests that the decision of most commercial policyholders to decline terrorism coverage has not resulted in widespread, negative economic effects. As a result, the first objective of TRIA appears largely to have been achieved. Congress’s second objective was to give the insurance industry a transitional period during which it could begin pricing terrorism risks and developing ways to provide such insurance after TRIA expires. The insurance industry has not yet achieved this goal. 
We observed after September 11 the crucial importance of reinsurers for the survival of the terrorism insurance market and reported that reinsurers’ inability to price terrorism risks was a major factor in their departure from the market. Additionally, most industry experts are tentative about predictions of the level of reinsurer and insurer participation in the terrorism insurance market after TRIA expires. Unfortunately, insurers and reinsurers still have not found a reliable method for pricing terrorism insurance, and although TRIA has provided reinsurers the opportunity to reenter the market to a limited extent, industry participants have not developed a mechanism to replace TRIA. As a result, reinsurer and consequently, insurer, participation in the terrorism insurance market likely will decline significantly after TRIA expires. Not only has no private-sector mechanism emerged for supplying terrorism insurance after TRIA expires, but to date there also has been little discussion of possible alternatives for ensuring the availability and affordability of terrorism coverage after TRIA expires. Congress may benefit from an informed assessment of possible alternatives—including both wholly private alternatives and alternatives that could involve some government participation or action. 
Such an assessment could be a part of Treasury’s TRIA-mandated study to “assess…the likely capacity of the property and casualty insurance industry to offer insurance for terrorism risk after termination of the Program.” As part of the response to the TRIA-mandated study that requires Treasury to assess the effectiveness of TRIA and evaluate the capacity of the industry to offer terrorism insurance after TRIA expires, we recommend that the Secretary of the Treasury, after consulting with the insurance industry and other interested parties, identify for Congress an array of alternatives that may exist for expanding the availability and affordability of terrorism insurance after TRIA expires. These alternatives could assist Congress during its deliberations on how best to ensure the availability and affordability of terrorism insurance after December 2005. Mr. Chairman, this concludes my prepared statement, and I would be pleased to respond to any questions that you or other members of the Committee may have. For further information regarding this testimony please contact Richard J. Hillman, Director, or Lawrence D. Cluff, Assistant Director, Financial Markets and Community Investment, (202) 512-8678. Individuals making key contributions to this testimony include Rachel DeMarcus, Barry Kirby, Tarek Mahmassani, Angela Pun, and Barbara Roesmann. Under TRIA, Treasury is responsible for reimbursing insurers for a portion of terrorism losses under certain conditions. Payments are triggered when (1) the Secretary of the Treasury certifies that terrorists acting on behalf of foreign interests have carried out an act of terrorism and (2) aggregate insured losses for commercial property and casualty damages exceed $5,000,000 for a single event. TRIA specifies that an insurer is responsible (that is, will not be reimbursed) for the first dollars of its insured losses— its deductible amount. 
TRIA sets the deductible amount for each insurer equal to a percentage of its direct earned premiums for the previous year. Beyond the deductible, insurers also are responsible for paying a percentage of insured losses. Specifically, TRIA structures pay-out provisions so that the federal government shares the payment of insured losses with insurers at a 9:1 ratio—the federal government pays 90 percent of insured losses and insurers pay 10 percent—until aggregate insured losses from all insurers reach $100 billion in a calendar year (see fig. 1). Thus, under TRIA’s formula for sharing losses, insurers are reimbursed for portions of the claims they have paid to policyholders. Furthermore, TRIA then releases insurers who have paid their deductibles from any further liability for losses that exceed aggregate insured losses of $100 billion in any one year. Congress is charged with determining how losses in excess of $100 billion will be paid. TRIA also contains provisions and a formula requiring Treasury to recoup part of the federal share if the aggregate sum of all insurers’ deductibles and 10 percent share is less than the amount prescribed in the act—the “insurance marketplace aggregate retention amount.” TRIA also gives the Secretary of the Treasury discretion to recoup more of the federal payment if deemed appropriate. Commercial property-casualty policyholders would pay for the recoupment through a surcharge on premiums for all the property-casualty policies in force after Treasury established the surcharge amount; the insurers would collect the surcharge. TRIA limits the surcharge to a maximum of 3 percent of annual premiums, to be assessed for as many years as necessary to recoup the mandatory amount. TRIA also gives the Secretary of the Treasury discretion to reduce the annual surcharge in consideration of various factors such as the economic impact on urban centers. 
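The deductible and loss-sharing mechanics described above can be sketched in a short calculation. This is a simplified illustration only: the insurer's premium volume, deductible percentage, and loss figures below are hypothetical, and the sketch omits the aggregate $100 billion cap across insurers and the recoupment surcharge (itself capped at 3 percent of annual premiums).

```python
# Simplified sketch of TRIA's loss-sharing formula, as described above.
# The insurer figures used in the example are hypothetical.

PROGRAM_TRIGGER = 5_000_000        # aggregate insured losses that trigger payments
ANNUAL_CAP = 100_000_000_000       # $100 billion aggregate annual limit, all insurers
FEDERAL_SHARE = 0.90               # federal share of losses above the deductible

def federal_reimbursement(insured_losses, direct_earned_premiums, deductible_pct):
    """Estimate the federal share of one insurer's losses from a certified event.

    The insurer absorbs its deductible (a percentage of prior-year direct
    earned premiums) plus 10 percent of losses above the deductible; the
    federal government reimburses the remaining 90 percent.
    """
    deductible = direct_earned_premiums * deductible_pct
    losses_above_deductible = max(insured_losses - deductible, 0.0)
    return FEDERAL_SHARE * losses_above_deductible

# Hypothetical insurer: $2 billion in direct earned premiums, a 10 percent
# deductible percentage, and $500 million in insured losses. The deductible
# is $200 million, so the federal share is 90% of $300 million = $270 million.
share = federal_reimbursement(500e6, 2e9, 0.10)
```

The sharing applies only once losses exceed the deductible; an insurer whose losses fall entirely within its deductible receives no reimbursement.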
However, if Treasury makes such adjustments, it has to extend the surcharges for additional years to collect the remainder of the recoupment. Treasury is funding the Terrorism Risk Insurance Program (TRIP) office, through which it administers TRIA provisions and would pay claims, with "no-year money" under a TRIA provision that gives Treasury authority to utilize funds necessary to set up and run the program. The TRIP office had a budget of $8.97 million for fiscal year 2003 (of which TRIP spent $4 million), $9 million for fiscal year 2004, and a projected budget of $10.56 million for fiscal year 2005, a total of $28.53 million over 3 years. The funding levels incorporate the estimated costs of running a claims-processing operation in the aftermath of a terrorist event: $5 million in fiscal years 2003 and 2004 and $6.5 million in fiscal year 2005, representing about 55–60 percent of the budget for each fiscal year. If no certified terrorist event occurs, the claims-processing function would be maintained at a standby level, reducing the projected costs to $1.2 million annually, or about 23 percent of the office's budget in each fiscal year. Any funds ultimately used to pay the federal share after a certified terrorist event would be in addition to these budgeted amounts. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
After the terrorist attacks of September 11, 2001, insurance coverage for terrorism largely disappeared. Congress passed the Terrorism Risk Insurance Act (TRIA) in 2002 to help commercial property-casualty policyholders obtain terrorism insurance and give the insurance industry time to develop mechanisms to provide such insurance after the act expires on December 31, 2005. Under TRIA, the Department of the Treasury (Treasury) caps insurer liability and would process claims and reimburse insurers for a large share of losses from terrorist acts that Treasury certified as meeting certain criteria. As Treasury and industry participants have operated under TRIA for more than a year, GAO was asked to assess Treasury's progress in implementing TRIA and describe how TRIA affected the terrorism insurance market. Treasury and industry participants have made significant progress in implementing TRIA to date, although Treasury has important actions to complete in order to comply with its responsibilities under TRIA. Treasury has issued regulations on TRIA, created and staffed the Terrorism Risk Insurance Program office, and begun mandated studies and data collection efforts. However, Treasury has not yet made a decision on whether to extend the mandate that insurers "make available" terrorism coverage, using terms not differing materially from other coverage, for policies issued or renewed in 2005. Treasury's ongoing studies and data collection efforts will provide further insight into TRIA's effectiveness. TRIA has enhanced the availability of terrorism insurance for commercial policyholders, largely fulfilling a principal objective of the legislation. In particular, TRIA has benefited commercial policyholders in major metropolitan areas perceived to be at greater risk for a terrorist attack, largely because of the requirement in TRIA that insurers offer coverage for terrorism.
Prior to TRIA, GAO reported concern that some development projects had already been delayed or cancelled because of the unavailability of insurance and continued fears that other projects also would be adversely impacted. GAO also conveyed the widespread concern that general economic growth and development could be slowed by a lack of available terrorism insurance. Largely because of TRIA, these problems no longer appear to be major concerns. Despite increased availability of coverage, limited industry data suggest that most commercial policyholders are not buying terrorism insurance, perhaps because they perceive their risk of losses from a terrorist act as being relatively low. The potential negative effects of low purchase rates, in combination with the possibility that those most likely to be the targets of terrorist attacks may also be the ones most likely to have purchased coverage, would become evident only in the aftermath of a terrorist attack. Such negative effects could include more difficult economic recovery for businesses without terrorism coverage or potentially significant financial problems for insurers. Moreover, those that have purchased terrorism insurance may still be exposed to significant risks that have been excluded by insurance companies, such as nuclear, biological, or chemical events. Finally, although insurers and some reinsurers have cautiously reentered the terrorism risk market to cover insurers' remaining exposures, industry sources indicated no progress to date toward finding a reliable method for pricing terrorism insurance and little movement toward any mechanism that would enable insurers to provide terrorism insurance to businesses without government involvement.
|
The LCS consists of two separate acquisition programs: one for the seaframe and one for the mission packages, which, when integrated with the seaframe and supplemented with aviation support, provide the ship's mission capability. In order to demonstrate LCS mission capability, both seaframe variants will be evaluated through developmental and operational testing. Developmental testing is intended to assist in identifying system performance, capabilities, limitations, and safety issues to help reduce design and programmatic risks. Operational testing is intended to assess a weapon system's capability in a realistic environment when maintained and operated by warfighters, subjected to routine wear and tear, and employed in combat conditions. The Navy is procuring two different seaframe designs from shipbuilding teams led by Lockheed Martin, which builds its ships at Marinette Marine in Marinette, Wisconsin, and Austal USA in Mobile, Alabama. This report refers to the Lockheed Martin ships as the Freedom variant and the Austal USA ships as the Independence variant. The two designs reflect different contractor solutions to meet the same set of performance requirements. The most notable difference is that the Lockheed Martin Freedom variant (LCS 1 and other odd-numbered seaframes, 3 through 23) is a monohull design with a steel hull and aluminum superstructure, while the Austal USA Independence variant (LCS 2 and other even-numbered seaframes, 4 through 24) is an aluminum trimaran. Figure 1 shows the first two LCS seaframes. See table 1 for the current status of seaframe construction. As we previously reported, the Navy is investigating potentially significant design changes to the ships while production is under way. Some of these initiatives include the following:

Changes to improve habitability: Part of the LCS concept is to reduce the number of crew on the ship by relying extensively on shore-based support for the ship's administrative personnel and maintenance needs.
Prior to the deployment of LCS 1 to Singapore, the Navy added 20 extra beds, called berths, to the ship to accommodate extra people, and has also made a similar change to LCS 2 and subsequent ships. However, the Navy did not add equivalent amounts of crew storage space, additional water and sanitation systems, and food storage, and while Navy officials stated that the ships still meet requirements, the Navy is investigating changes to better meet Navy standards. The Navy and the shipyards are now evaluating how to make these changes to both variant designs, but the effects have been described by the program office as pervasive throughout much of the ship.

Changes to increase commonality: Many of the systems on the two seaframe variants are not common; commonality can enhance efficient maintenance, training, manning, and logistics. The Navy is investigating changes to improve commonality between the two variants, including selecting a common combat management system, an architecture that uses computers to integrate sensors (such as a radar) with shipboard weapon systems, for both seaframes.

Changes to improve safety: The Independence variant was designed without bridge wings, which are enclosed areas that extend out to the sides of the ship from the bridge to provide enhanced visibility and safety for the crew during maneuvers like docking the ship. The Navy has added bridge wings to LCS 2 and now plans to add bridge wings to all the ships of this variant.

As part of the decision to make any design changes to a ship, the designer needs to consider the effect that the changes might have on ship weight. Weight is a critical aspect of a ship design and is measured in several ways:

Light ship condition: The ship is complete and ready for service, repair parts are held onboard, and liquids in machinery are at operating levels. Light ship condition does not include items of variable load, such as officers and crew; ammunition; aircraft and vehicles that are fully fueled with repair parts available; a full supply of provisions and stores; and full tanks for potable water, lube oil, and fuel. For LCS, the light ship condition does not include an installed mission package.

Full load condition: Light ship condition plus variable loads. For LCS, the full load condition includes an installed mission package and is the condition against which performance requirements are assessed.

Naval architectural limit: The maximum weight that a ship can displace while still meeting its stability and survivability requirements. For LCS, naval architectural limits are unique to each seaframe variant.

To ensure that ships meet required capabilities, the Navy and its shipbuilders typically engage in intensive estimating, weighing, and reporting processes throughout construction to identify and monitor a ship's weight and stability. As part of these processes, shipbuilders actively estimate and track certain information, including the following:

Builder's margin: Weight and vertical center of gravity allowances included in a weight estimate to cover slight variations in component weights and centers of gravity that take place throughout the design and construction of a ship.

Service life allowances: Weight and vertical center of gravity budgets included in a ship's design to accommodate changes due to ship alterations and unplanned growth during the ship's operational lifetime, which tend to increase displacement and affect stability.

Weights are definitively determined as part of a ship's inclining experiment, which involves moving known weights around the ship and measuring how they change the ship's equilibrium. This allows the Navy to determine a ship's displacement and the height and longitudinal position of its center of gravity.
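One way to see how an inclining experiment yields these quantities is through the standard naval-architecture relation between heel angle and metacentric height. This relation and the example numbers are a textbook illustration, not figures from the Navy's LCS weight reports.

```python
import math

# Sketch of the inclining-experiment calculation: moving a known weight w a
# transverse distance d heels the ship by a small angle theta, and the
# metacentric height GM follows from the standard relation
#     GM = (w * d) / (W * tan(theta))
# where W is the ship's displacement. All numbers below are hypothetical.

def metacentric_height(w_tons, d_meters, displacement_tons, heel_degrees):
    """Metacentric height GM (meters) implied by one inclining-weight movement."""
    return (w_tons * d_meters) / (
        displacement_tons * math.tan(math.radians(heel_degrees)))

# Hypothetical example: a 10-metric-ton weight moved 8 meters across the deck
# of a 3,000-metric-ton ship, producing a 1.5-degree heel, implies a GM of
# roughly 1 meter. The vertical center of gravity then follows as KG = KM - GM,
# with KM taken from the ship's hydrostatic tables.
gm = metacentric_height(10, 8, 3000, 1.5)
```

In practice several weight movements are averaged and corrections applied, but the principle is as shown: a heavier ship or a higher center of gravity changes how far a known weight shift tips the hull.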
For most ships, inclining experiments take place immediately prior to delivery and after significant post-delivery maintenance periods, when necessary. The LCS mission package designs are based on standard shipping containers that are outfitted with a variety of unmanned systems, sensors, and weapons that can be loaded onto and off of the seaframe. Mission packages are also accompanied by an aviation detachment, consisting of an MH-60 helicopter and its flight and support crew, as well as vertical take-off unmanned aerial vehicles. These packages are intended to give the Navy flexibility to change equipment in the field to meet different mission needs and incorporate new technology to address emerging threats. The Navy plans to field one anti-submarine warfare (ASW) increment and four mine countermeasures (MCM) and surface warfare (SUW) increments. The Navy will upgrade all mission packages to the same configuration as additional increments are fielded. The Navy plans to buy a total of 64 mission packages, though this quantity could change if the number of seaframes acquired is reduced. See table 2 for a brief discussion of the mission packages. To obtain operational experience with LCS, the Navy last year deployed USS Freedom carrying an increment 2 SUW mission package. The ship departed San Diego, California, for operations in the Western Pacific under the command of the Navy's 7th Fleet in early March 2013. The ship was deployed for approximately 10 months, returning to the United States in late December 2013. The 7th Fleet area of responsibility poses unique challenges to the Navy given the vast distances it covers. While on this deployment, USS Freedom participated in an international exhibition as well as several multilateral naval exercises with regional navies, including those of Singapore, Malaysia, Brunei, and Indonesia.
The ship also conducted some real-world operations as directed by the 7th Fleet, such as participating in humanitarian assistance disaster relief to the Philippines following a major typhoon. See figure 2 for a map showing the location of the Singapore deployment and the Navy fleet areas of responsibility in the Indo-Pacific region. Since July 2013, the Navy has made progress demonstrating and testing various facets of LCS systems and capability but significant gaps remain in the Navy’s knowledge of how the LCS will operate and what capabilities it will provide the Navy. The deployment to Singapore provided the Navy with an opportunity to examine key LCS concepts operationally, including: the ship’s smaller manning profile, rotational crewing, and use of off-ship maintenance and support. The deployment was limited because only one of the two variants carrying one mission package was deployed, and mechanical problems prevented USS Freedom from spending as much time as planned underway—that is, at sea unanchored and not at port. As a result, some key concepts could not be demonstrated. While the deployment provided useful insight for the Navy, it was never intended to be a substitute for formal testing and evaluation activities. The Navy has also completed additional developmental testing on the seaframes and mission packages, which has enabled the Navy to characterize performance of some systems, but many capabilities have not been demonstrated in an operational environment. Navy officials have stated that they were able to learn some valuable lessons from the deployment of USS Freedom to Singapore, and the Navy has taken several steps to analyze post-deployment lessons learned. According to the Navy, the deployment demonstrated the LCS’s ability to participate in cooperative exercises and helped carry out the Navy’s forward presence mission in Asia, thereby freeing up more costly multi-mission warships to carry out other high-priority Navy duties. 
Navy officials also said USS Freedom demonstrated how a LCS can fill the need for a smaller U.S. ship that can dock in more foreign ports than larger U.S. vessels, which they believe will be a valuable tool for engaging with certain countries that might otherwise be hard to access. Further, USS Freedom’s SUW mission package crew was able to conduct some launch and recovery operations with Rigid Hull Inflatable Boats and to participate in boarding exercises, which provided lessons about the operations of these boats as well as the systems on USS Freedom needed to launch and recover them. These operations, although undoubtedly useful for the Navy, were never intended as formal testing and evaluation activities—or to replace them. Therefore, key unknowns remain regarding how the Navy will eventually be able to use the LCS and how well the ship meets its performance requirements. USS Freedom’s deployment to Singapore represented an opportunity for the Navy to gain insight into the feasibility of the LCS’s unique operational concepts. However, some of the fundamental concepts on which the program is premised—such as the maintenance and manning concepts— were demonstrated in a limited manner because the deployment involved only one ship of the Freedom variant and this ship is not representative of other ships of that variant. In other cases, certain concepts have not yet been demonstrated. Therefore, some of the lessons learned from the deployment cannot be extrapolated to the entire LCS class or all of the ships planned for the Freedom variant. Until additional deployments have been conducted—in conjunction with operational test events—the Navy will have insufficient data to evaluate the feasibility of these concepts on both variants. 
Table 3 identifies the key concepts that underpin the program and the degree to which the Navy was able to evaluate them on this deployment. Questions remaining regarding LCS's underlying concepts will, in turn, have implications for the practicality of certain requirements and key differences between the two variants, issues that have the potential to affect future acquisition decisions. In addition, mechanical problems hampered the Navy's ability to operate USS Freedom as planned during the Singapore deployment. Based on information provided by LCS program officials responsible for LCS fleet introduction, USS Freedom's mechanical failures resulted in 55 lost mission days, a significant portion of its 10-month deployment. Navy officials stated many of these days were planned in-port periods. The ship could not fully participate in at least two planned or requested exercises and some operational 7th Fleet presence missions while repairs were conducted. According to the LCS program office, several of the more problematic pieces of equipment that resulted in significant lost underway days are either slated to be replaced on follow-on Freedom class ships or have already been replaced on LCS 3 or LCS 5. Table 4 depicts some of the most significant equipment failures and how the Navy believes they have been corrected on subsequent seaframes. Even with the reduced number of operational mission days, the Singapore deployment raised questions about the practicality of the small crew size. LCS is intended to operate with a crew that is smaller than those of comparable surface combatants. This reduced manning has long been a focus of Navy analysis, and the increase to a 50-person crew for the deployment was the maximum size allowed by current program requirements. This number was augmented by civilian contractor technical experts who could assist with troubleshooting and some maintenance.
However, as shown in table 3, even with these additional people the deployment provided indications that the crew strained to keep up with duties. Further, the ship is currently at capacity in terms of the number of crew members it can accommodate. Therefore, any increase in crew size would require a significant redesign of the seaframes and would necessitate a revision to the maximum 50-person manning requirement, which has been validated by the Joint Requirements Oversight Council. Officials from the office of the Chief of Naval Operations told us after the deployment that they are not considering any further revision to the manning requirements for the ship, although some manning studies are under way to assess the rank and billet structure for the mission packages and aviation detachment. Additionally, some of USS Freedom's equipment is unique to that ship, unique to the Freedom variant, or unique to the LCS class, which we were told made some repair efforts cumbersome and slow. For example, crew members told us that in some cases they had to track down spare parts that were sometimes available only in foreign countries rather than being able to find them in Navy inventories. While some of this may be a first-of-class issue, replacement of some ship systems with more reliable ones or with systems that are more commonly found in the Navy inventory may also reduce these burdens on the crew in the future. According to the LCS program office, replacing these less reliable systems should mean that future deployments of other ships from the Freedom class should not incur the same number of failures as USS Freedom. However, as DOT&E has observed, no formal operational testing has been conducted to verify and quantify these improvements, although Navy and DOT&E officials said that they were working together on operational test opportunities.
Until the Navy completes further underway periods and/or testing, it will not be able to determine the significance of improvements on ship availability. This first deployment provided limited insight into how the LCS might be utilized in the different theaters in which the Navy operates. USS Freedom was deployed to Singapore and conducted operations for 7th Fleet largely consisting of participation in planned multilateral exercises. While 7th Fleet officials noted that a benefit of having LCS in theater was that the ship could participate in international exercises, thereby freeing up other surface combatants for other missions, they were still not certain about the ship's potential capabilities and attributes, or how they would best utilize an LCS in their theater. Until the Navy completes additional testing and deployments, it will not have adequate operational data and operational experience on which to base assumptions regarding LCS utilization. Table 5 discusses some of the observations related to the LCS deployment that we discussed with 7th Fleet officials. Since July 2013, the Navy has completed additional testing on the seaframes and mission modules and is seeing results but has not yet proven performance in an operational environment. Table 6 describes these recently completed events and some important considerations about the testing. Although the Navy has gained knowledge related to LCS capabilities and concepts since our July 2013 report, there continue to be significant acquisition risks to the program. Key among these is managing the weight of the ships. Initial LCS seaframes face limitations resulting from weight growth during construction of the first several ships. This weight growth has required the Navy to make compromises on performance of LCS 1 and LCS 2 and may complicate existing plans to make additional changes to each seaframe design.
While weight growth is not uncommon over the life of a ship, and the Navy builds a weight allowance into its ships to account for this growth, LCS has significantly lower available margin compared to other ship classes. Compounding these issues, the Navy has not received complete or accurate weight reports from the LCS seaframe prime contractors, and the Navy's lengthy review process has hindered a timely resolution. Additionally, as we have previously reported, the Navy has considerable testing to complete before the program demonstrates operational capability. Further, the Navy has continued with the acquisition without the knowledge that would be gained from additional testing. For example, since our last report, the Navy has granted the mission modules program approval to accelerate the mission package production rate before completing key test activities to demonstrate their performance. Weight growth occurred on the first four LCS seaframes, which affected the capabilities of both Freedom and Independence variant seaframes. This situation has led the Navy to accept performance below minimum requirements on two delivered seaframes (LCS 1 and 2) for endurance and sprint speed, respectively. Further, weight growth has caused three delivered LCS seaframes (LCS 1, 2, and 4) to fall short of the required service life allowance for weight. Weight management in shipbuilding programs, including the LCS, is critical to ensuring that performance requirements associated with survivability, seakeeping (the ability to withstand rough sea conditions), and the ability to accommodate upgrades during ship service lives are met.
For LCS seaframes, specific performance requirements that are sensitive to weight include the following: a 3,500-nautical-mile range (endurance) when operated at a speed of 14 knots, a 40-knot sprint speed, a 20-foot navigational draft (the greatest depth, in feet, of the keel), a 50-metric-ton service life allowance for weight, and a 0.15-meter service life allowance for stability. In the LCS program, weight management and reporting processes also rely on accurate mission package weight data. The Navy provides mission package weight estimates to the shipbuilders to include in their full load condition estimates. Table 7 identifies the Navy's current weight estimates for each of the first six LCS seaframes under full load conditions, along with current service life allowance projections for each ship as compared to the required 50 metric tons. As is depicted, several seaframes do not have the required amount of service life allowance remaining due to weight growth. LCS 2 faces the most significant weight challenges of any of the first six seaframes, but Navy officials stated they have a strategy to mitigate the issue while still meeting requirements. According to Navy estimates, LCS 2 is so heavy in the full load condition that it exceeds its naval architectural limit, an outcome that provides no service life allowance for weight and restricts the ship's ability to execute its required missions. Subject matter experts in naval architecture whom we interviewed stated that operating a ship in excess of its naval architectural limit can make it prone to failure in certain weather or damage conditions, and the ship can also see a decreased service life due to structural fatigue. Navy officials stated that they will limit fuel loads on LCS 2, as necessary, to ensure the naval architectural limit is not exceeded.
In addition, the Navy is developing design modifications for Independence variant seaframes to reduce fuel capacity, estimated to total over 100 metric tons, in order to restore service life allowances. Although this reduction will reduce the endurance of these ships, the Navy reports that this variant has excess fuel capacity, as demonstrated during LCS 2 calm water trials in June 2013, and will still meet LCS range-at-transit-speed requirements after the fuel reduction. Further, as table 7 shows, three of the other LCS seaframes (LCS 1, LCS 4, and LCS 6) also do not currently meet their service life allowance requirements for weight when configured in normal, full load conditions. For example, LCS 1 and LCS 4 have about half or less of the required 50-metric-ton service life allowance for weight (25.8 and 16.5 metric tons, respectively), and the Navy projects LCS 6 will enter service with less than 63 percent (31.3 metric tons) of its required 50-metric-ton service life allowance for weight. At present, LCS 6 has over 29 metric tons of builder's margin available, which, if still available at delivery, could offset that ship's estimated service life allowance deficit for weight. Weight growth contributed to LCS 1 and LCS 2 not achieving some requirements related to endurance and sprint speed, respectively, when operated in normal, full load conditions. For instance, although LCS 1 meets its sprint speed requirement of 40 knots, excess weight growth to date in part prevents that ship from achieving the 3,500-nautical-mile at 14 knots endurance requirement. Alternatively, LCS 2 can only sprint at 39.5 knots under full loads, but is predicted to exceed the endurance requirement by over 800 nautical miles, albeit at potential risk to its naval architectural limit, as discussed above.
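The service life allowance arithmetic above can be restated in a few lines. The helper below simply recomputes the shortfalls from the remaining-allowance figures reported for LCS 1, LCS 4, and LCS 6 against the 50-metric-ton requirement; the function itself is an illustration, not a Navy tool.

```python
# Recompute the service-life-allowance shortfalls discussed above from the
# reported remaining allowances and the 50-metric-ton requirement.

REQUIRED_SLA_TONS = 50.0  # required service life allowance for weight (metric tons)

def sla_shortfall(remaining_allowance_tons):
    """Return (shortfall in metric tons, percent of the requirement remaining)."""
    shortfall = REQUIRED_SLA_TONS - remaining_allowance_tons
    pct_remaining = 100.0 * remaining_allowance_tons / REQUIRED_SLA_TONS
    return shortfall, pct_remaining

# Remaining allowances reported for three seaframes, in metric tons.
for ship, remaining in [("LCS 1", 25.8), ("LCS 4", 16.5), ("LCS 6", 31.3)]:
    short, pct = sla_shortfall(remaining)
    print(f"{ship}: {short:.1f} metric tons short; {pct:.1f}% of requirement remains")
```

Run against the reported figures, this confirms that LCS 4 retains only a third of its required allowance, while LCS 6's projected deficit is close to the roughly 29 metric tons of builder's margin that could offset it.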
Complicating the weight growth on early LCS seaframes is the fact that LCS requirements for service life allowances already fall short of the growth margins called for under Navy and industry recommended practice. Table 8 outlines recommended service life allowances for different ship types as compared to LCS requirements. Because of the LCS's comparatively low service life allowance requirements, the Navy's ability to accommodate alterations and growth on these ships over their expected 20-year minimum service lives will be significantly more constrained than is typical for other surface ships. In 2012, the Office of the Chief of Naval Operations highlighted the importance of this issue across ship classes, noting in an instruction that inadequate service life allowances for weight and vertical center of gravity have resulted in expensive corrective ship changes or in the inability to modernize ships through installation of new weapons systems. Navy program officials told us they expect that most future weight—and capability—growth on LCS would occur within mission packages, not seaframes. However, as we previously reported, the Navy is considering changes to the seaframe designs that could further increase weight estimates. As mentioned above, these changes could include (1) habitability improvements to support larger crews than initially anticipated; (2) increased commonality between the seaframe variants and with other Navy ships; and (3) changes to improve safety. We reported in July 2013 that the Navy had undertaken several technical studies on these initiatives; these studies have not yet been completed. Navy officials stated that the possible changes are low risk and would not affect LCS performance requirements. However, some changes, including those related to habitability, are likely to add weight. According to the program office, every proposed change is evaluated for its weight effects, and corresponding trades must be made in order to approve the change.
For example:

Early estimates indicate that roughly 10 to 20 metric tons could be added to accommodate a larger crew, which would require pervasive modifications to each seaframe design. If so, these changes would heighten weight challenges and the resulting service life allowance shortfalls.

A change was made to LCS 2 in a recent maintenance period to increase the size of the rescue boat from 5 meters to 7 meters, which will make the boat more stable in heavier seas. This change resulted in an approximately 15 metric ton weight increase to LCS 2; it is planned for all Independence variant seaframes. According to the contractor, by removing weight from other areas of the LCS 6 and follow-on ship designs, the larger rescue boat will add only 1.3 metric tons to those ships.

Another proposed change would increase commonality and combat capability by replacing the Freedom variant's rolling airframe missile system with the heavier missile system found on the Independence variant. While the specifics of this potential change have not yet been determined or approved, Navy technical experts told us that such a modification would increase the Freedom variant's weight and could also shift its center of gravity.

Weight constraints could make future modifications more costly than anticipated. For instance, subject matter experts in naval architecture told us that the Navy may find it has to seek lighter alternatives to the systems or equipment it wants, which could complicate the redesign and construction modification efforts, make them more costly, or both. Because the Navy has not yet completed technical studies evaluating its possible changes, the weight effects remain unknown. According to Navy officials, preliminary studies on the habitability changes should be completed this year, and more detailed design work will not occur until fiscal year 2015.
The Navy has established a weight working group with Navy and shipyard representatives that program officials said is intended to identify ways to offset weight growth from some of these design changes. Additionally, once a ship is delivered and handed over to the fleet, fleet operators and maintainers assume responsibility for these weight management processes, which continue throughout the ship's service life. For ships that are weight constrained—meaning at or nearing their naval architectural limits for displacement—these weight management processes are typically more robust and costly. For instance, a Navy instruction states that weight must be kept within naval architectural limits and provides that, for ships that are weight constrained, any additional weight must be compensated for by removing weight from the ship. Inclining experiments must usually be completed following maintenance periods to ensure the ship's naval architectural limits are not breached. As operational assets, LCS 1 and LCS 2 are, according to Navy reporting, both in a weight constrained status. According to Navy officials, the seaframes have low growth margins because the mission packages are supposed to be flexible enough to accommodate any future upgrades and growth.

However, weight challenges exist on the mission packages as well, and weight and space constraints are limiting the extent to which the Navy can accommodate new mission package systems. Similar to the seaframes, the Navy also tracks and manages mission package weights. Mission packages, regardless of type, are required to weigh no more than 180 metric tons when installed aboard a seaframe. Of this 180-metric-ton allocation, 105 metric tons are allotted to the actual mission package equipment, whereas 75 metric tons are reserved for fuel to power that equipment.
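The 180-metric-ton allocation just described splits into a 105-metric-ton equipment limit and a 75-metric-ton fuel reserve. The sketch below illustrates checking an increment's equipment estimate against that limit; the function name is hypothetical, and the example figures are the ASW estimates and reduction options cited later in this report.

```python
# Hypothetical sketch of the mission package weight budget described
# above: 180 t total per package, split 105 t equipment / 75 t fuel.
EQUIPMENT_LIMIT_T = 105.0
FUEL_ALLOTMENT_T = 75.0
TOTAL_LIMIT_T = EQUIPMENT_LIMIT_T + FUEL_ALLOTMENT_T  # 180 t

def equipment_margin(estimated_equipment_t: float,
                     planned_reductions_t: float = 0.0) -> float:
    """Margin (positive) or overage (negative) against the 105 t
    equipment limit, after planned weight-reduction options apply."""
    return round(EQUIPMENT_LIMIT_T
                 - (estimated_equipment_t - planned_reductions_t), 1)

# ASW package: estimated ~4 t over the limit, with ~10 t of identified
# reduction options (figures from this report).
print(equipment_margin(109.0))        # -4.0 -> over the limit
print(equipment_margin(109.0, 10.0))  #  6.0 -> within limit after reductions
```

Note that a positive margin on equipment weight alone does not guarantee feasibility: as the MCM discussion below shows, physical stowage space can still rule out carrying a combination of systems that fits within the weight budget.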
However, LCS requirements documents do not include a service life allowance requirement for mission packages, and based on current weight estimates, room for future growth on the final increments ranges from approximately 14 metric tons for some configurations of the MCM mission package to none for the ASW mission package. According to Navy officials, future additions to mission packages—beyond the systems currently planned for increment 4 configurations—will be offset by removing existing systems, described below, to the extent required to meet the 105 metric ton weight limitation. For MCM, Navy officials stated that they cannot include all the current increment 4 systems that they are buying in a package at one time, so they have recently developed two options with different system configurations. Figure 3 highlights current mission package weights that are estimated for each increment of mission package capability, including the two MCM options but excluding potential weight reduction efforts. At present, the Navy anticipates that the equipment associated with an increment 4 SUW package will require slightly less than the 105-metric-ton allotment. This estimate is contingent on surface-to-surface missile systems not yet selected delivering within the assigned weight margins. The Navy has identified a weight reduction plan to provide an additional 5 metric tons. Navy weight estimates for increment 4 of the MCM mission package, however, do not reflect all the systems being acquired for that package. Space and weight constraints have required the Navy to modify how it intends to outfit increment 4 of the MCM mission package. Although the Navy plans to acquire all the systems planned for that increment, space and weight limitations will not allow LCS seaframes to carry all of these systems at one time.
According to LCS program officials, MCM mission commanders will have available either (1) the Unmanned Influence Sweep System and the unmanned surface vehicle that tows it, or (2) the minehunting Surface Mine Countermeasures Unmanned Undersea Vehicle, called Knifefish, but not both systems. As a result, LCS seaframes outfitted with the increment 4 MCM package may have decreased minesweeping or mine detection capability. These scenarios would preclude LCS from meeting its MCM minesweeping performance requirements. Knifefish is a new capability that was recently added to the program, and officials from the mission module program office stated that it is not a capability currently defined in LCS requirements documentation. The Navy has identified some options for weight reduction for this package that could bring the combined weight with these two systems included to just under 105 metric tons, but physical space constraints would still prohibit both being carried together. Further, ASW mission package equipment is also estimated to exceed its 105-metric-ton allotment by approximately 4 metric tons. In response, the Navy has identified weight reduction options within that package that it estimates will shed a combined 10 metric tons. Several of these options require redesign of existing systems, which could introduce risk.

The Navy has faced challenges obtaining accurate and complete weight estimates from the contractors. The Navy's primary mechanism for tracking seaframe weights is quarterly weight reports, which are produced and delivered by each of the LCS prime contractors per contract requirements. These reports provide data on the physical characteristics of seaframes under construction, including the magnitude, location, and distribution of weight within each ship.
These data are based on estimated and calculated weights derived from design drawings, historical data, and vendor-furnished information, and are updated with actual component weight information during construction by the shipbuilder. Under the terms of their contracts, LCS prime contractors are required to prepare and report data within weight reports in accordance with Navy and industry recommended practices, which include using the Navy's Expanded Ship Work Breakdown Structure (ESWBS) classification system to structure and summarize data. ESWBS facilitates the grouping of materials, equipment, and ship components in a consistent reporting format, which in turn positions Navy reviewers to audit the contractors' work. At their highest level, ESWBS groupings are organized around major systems of the ship, such as the hull structure and propulsion plant, but the groupings also break down to the individual components of these systems, such as diesel engines.

An inclining experiment is important because it represents the point at which weight data transition from estimated or calculated values into actual measurements. According to Navy officials, when the weight of a ship is determined at an inclining experiment, the weight totals should be very close to those identified in the preceding weight reports. Acceptable deviation is considered to be only 0.5 percent or less, according to Navy technical experts. In the LCS program, however, inclining experiments for the first two seaframes revealed weight growth that the prime contractors had not fully accounted for within their weight reports. For example, the LCS 1 inclining experiment that followed that ship's initial post-delivery work periods revealed that the ship weighed approximately 90 metric tons more than expected.
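The 0.5 percent acceptance rule above is a straightforward relative-deviation check. The following is an illustrative sketch only: the function name is hypothetical, and because the report does not give LCS 1's baseline displacement, the 3,000-metric-ton figure in the example is an assumed value chosen to show how a 90-metric-ton surprise compares to the 0.5 percent band.

```python
# Hypothetical sketch of the inclining-experiment check described above:
# measured weight should deviate from the reported estimate by no more
# than 0.5 percent, per Navy technical experts.
ACCEPTABLE_DEVIATION_PCT = 0.5

def inclining_deviation(estimated_t: float, measured_t: float) -> tuple:
    """Percent deviation of measured vs. estimated weight, and whether
    it falls within the 0.5 percent acceptance band."""
    pct = abs(measured_t - estimated_t) / estimated_t * 100
    return round(pct, 2), pct <= ACCEPTABLE_DEVIATION_PCT

# Illustrative only: LCS 1 inclined ~90 t heavier than expected. The
# 3,000 t baseline displacement is an assumption, not a report figure.
print(inclining_deviation(3000.0, 3090.0))  # (3.0, False)
```

Against any plausible displacement for a ship of this size, a 90-metric-ton discrepancy falls far outside the 0.5 percent band, which is why it triggered the estimate revisions described below.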
However, it was unclear to the prime contractor where this excess weight was located or how it was distributed within the ship, though Navy program officials told us that they now believe it was due largely to additional insulation and paint. In response, Lockheed Martin increased its weight estimates for LCS 3 and worked with the Navy to evaluate and resolve the 90 metric ton discrepancy. As part of these analyses, Lockheed Martin was able to assign much of the weight growth to individual ESWBS accounts, and subsequently inclined LCS 3 to within 1 percent of that ship's revised weight estimate. However, full resolution of the 90 metric ton discrepancy remains incomplete: weight reports for LCS 5 and follow-on ships identify over 23 metric tons of weight that Lockheed Martin and the Navy have not yet assigned to specific ESWBS accounts. Similarly, LCS 2's inclining revealed an approximately 5 percent deviation from expected weight, and General Dynamics' Bath Iron Works and Austal USA carry over 13 metric tons of weight outside of the ESWBS accounts on later Independence variant seaframes. Carrying forward excess weight can affect the ships over the course of their service lives and may also affect construction of follow-on seaframes. To remedy persistent deficiencies in the weight reports, Navy officials stated that the administrative contracting officer—responsible for ensuring that the contractor is fulfilling the contract under the specified terms, including price, schedule, and quality—could withhold a percentage of progress payments. To date, however, Navy officials report that they have not pursued such withholds in the LCS program. As part of the terms of the block buy contracts for LCS seaframes, the Navy is required to review and comment on weight reports within 60 days after they are submitted by a prime contractor.
During the past 2 years, the Navy has, in several cases, provided detailed comments back to the contractors on weight reports that it identified as deficient. These comments identified fundamental classification and estimating errors and the use of outdated weight information, among other reporting deficiencies, which the Navy judged as time sensitive and critically important to address (see appendix II, which contains excerpts from Navy comments on contractor weight reports). The Navy often requested that the LCS prime contractors modify and resubmit the reports within 30 days, consistent with the terms of the contracts. However, the prime contractors have not addressed the Navy's comments and resubmitted weight reports, largely because the Navy's review itself typically takes longer than 60 days. One contractor stated that the Navy's review took 6 to 12 months; according to Navy officials, reviews of weight reports now take less than 6 months. Nevertheless, by the time the Navy's comments on a particular report reach the contractor, the contractor has often already submitted the next quarter's weight report. As a result, LCS contractors told us that they generally do not make revisions to previously submitted reports, and issues raised in the Navy's comments are not immediately addressed. These revisions can affect weight estimates, but their effects cannot be identified until a corrected weight report has been submitted. Consequently, serious issues within weight reports persist that could obscure timely identification of negative weight trends within one or both seaframe variants.

As we previously reported, the Navy's acquisition approach to the LCS program involves a significant degree of concurrency; that is, the Navy is buying the ships while key concepts and performance are still being tested.
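Returning to the weight-report cadence described above: reports arrive quarterly, the contract allows the Navy 60 days to comment, yet reviews have in practice taken 6 months or longer. A rough sketch (hypothetical helper; the 90-day quarterly interval is an assumption) shows why comments routinely arrive only after newer reports have superseded the one under review.

```python
# Hypothetical sketch of the reporting cadence issue described above:
# weight reports are quarterly (~90 days apart), the contract gives the
# Navy 60 days to comment, but reviews have taken 6 months or more.
REPORT_INTERVAL_DAYS = 90  # assumed quarterly cadence

def reports_superseded(review_days: int) -> int:
    """Number of newer quarterly reports already submitted by the time
    the Navy's comments on a given report reach the contractor."""
    return review_days // REPORT_INTERVAL_DAYS

print(reports_superseded(60))   # 0 -> comments arrive before the next report
print(reports_superseded(180))  # 2 -> two newer reports already filed
```

When the review window stays within the contractual 60 days, comments precede the next report; at the 6-month pace Navy officials now describe, two newer reports have already been filed, which is consistent with the contractors' stated reluctance to revise superseded reports.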
Since we issued our report in July 2013, the Navy received approval from DOT&E for the LCS test and evaluation master plan (TEMP), which sets forth the testing that must be completed to ensure that the program meets requirements. In order to determine the degree to which the information from test events would be available to inform the Navy's decision to purchase additional seaframes, we compared the Navy's current acquisition strategy—which calls for releasing a request for proposals in 2015 in support of a planned 2016 award of future seaframe contracts—with the program's test schedule as outlined in the most recent TEMP. Figure 4 illustrates the test events that will be completed before and after these acquisition events. As shown in figure 4, we found that a number of significant test events outlined in the TEMP will not be completed in time to inform the development or release of a request for proposals or the award of follow-on contract(s), or they will be completed on one variant but not both. Many of these test events are part of operational testing. Operational testing includes live-fire testing, which provides timely assessment of the survivability and lethality of a weapon system. The significance of conducting operational testing is reflected by the fact that statute requires a program to complete realistic survivability tests and initial operational testing before starting full rate production. The Navy plans to continue buying seaframes before completing operational test events that demonstrate the capability of the seaframes, equipped with mission packages, to meet initial requirements. Other tests highlighted in the figure include shock and survivability tests, which demonstrate that the ship designs can safely absorb and control damage. Realistic survivability tests are required by statute before a program proceeds beyond low-rate initial production.
Moreover, based on current test plans, DOT&E has concerns about the adequacy and nature of some tests, which led to lengthy revisions of earlier versions of the TEMP. Due to these concerns, DOT&E issued a conditional approval letter stating that the test plan was not adequate to support later phases of operational testing, and that the out-years of the program are still not well defined. Final performance requirements are defined in the program’s capabilities development document, and last year the Navy developed requirements for increment 2 SUW and increment 1 MCM to support testing. However, no requirements currently exist for the other increments. DOT&E granted the Navy approval to move to the operational testing of increment 2 SUW and increment 1 MCM as described in the TEMP, which the Navy plans to begin in 2014 and 2015, respectively, but DOT&E required the Navy to update and resubmit the TEMP to support testing for later increments. As such, the above schedule may change with subsequent TEMP submissions. To help mitigate the concurrency in the LCS program—in particular to better align planned contractual actions with obtaining knowledge through some of these test events—we recommended in July 2013 that the Navy reassess its acquisition strategy. Specifically, we recommended that the Department of Defense (DOD) limit future seaframe acquisitions until it completed a full-rate production review. We also recommended that DOD report to Congress on the relative advantages of each seaframe variant for each key LCS mission prior to awarding any additional seaframe contracts. In its written response, DOD did not agree with our recommendations aimed at slowing the pace of seaframe procurements. DOD cited the need to buy ships at the planned pace to keep pricing low and saw no value in reducing production pending the full-rate production decision. 
DOD agreed that the Navy could, if requested by Congress, report on the performance of each seaframe variant against current LCS requirements, but did not address the need to provide an assessment of the relative costs and advantages and disadvantages of the variants against operational and mission needs. Such steps remain important to help ensure that the level of capability provided by LCS is militarily useful given the warfighter's current capability needs and that continued investment in the program is warranted. The Navy continues to move forward with a strategy that buys mission packages before their performance is demonstrated. The Navy held a Milestone B review for the mission packages on January 7, 2014, which would typically authorize a program to begin system design and demonstration efforts and determine the low-rate production quantity, which is necessary to—among other things—provide production configured or representative articles for operational tests. Based on this review, the Assistant Secretary of the Navy for Research, Development, and Acquisitions authorized the program to effectively accelerate mission package production, granting the program approval to procure 5 test units and up to 27 production mission packages. According to DOD guidance, low-rate production usually begins at Milestone C, when programs are authorized to begin initial production. As we highlighted in our last report, continuing into what is essentially full-rate production—as these 32 mission packages represent half of the total planned quantity of mission packages for the program—increases the risk that the Navy will be purchasing systems that have not been validated to meet requirements through testing. We also recommended in July 2013 that the Navy only buy the minimum quantities of mission module systems required to support operational testing.
DOD did not agree with this recommendation, stating that mission package procurements were at the rate necessary to support (1) developmental and operational testing of the two seaframe variants with each mission module increment; (2) fleet training needs; and (3) operational LCS ships. In its memorandum on the Navy's Remote Minehunting System (RMS) operational assessment and Milestone C decision, DOT&E raised similar concerns, stating for example that the Navy should strictly limit any production of RMS, including the Remote Multi-mission Vehicle (RMMV), until greater system maturity and reliability are demonstrated on the version of RMS that will be initially fielded. We also recommended in July 2013 that the Navy ensure that the program baseline submitted for the mission modules' Milestone B establish program goals for cost, schedule, and performance for each mission module increment. DOD partially concurred with this recommendation, but our review of the program baseline found that it does not define the thresholds and objectives for performance for each increment of the mission modules. Officials from the office of the Chief of Naval Operations stated that there is no plan to update the baseline to include this performance information since they believe that the LCS mission package increments actually represent only one increment of capability. Without defined performance thresholds and objectives for each mission package increment, decision makers will continue to lack information needed to effectively monitor the development of the increments, and a baseline against which to measure performance. We raised two matters for Congressional consideration in our July 2013 report.
First, to ensure that continued LCS investments are informed by adequate knowledge, we suggested that Congress consider restricting funding for additional seaframes until the Navy completes ongoing technical and design studies related to potential changes in LCS requirements, capabilities, and the commonality of systems on the two seaframe variants. Second, to ensure timely and complete information on the capabilities of each seaframe variant prior to making decisions about future LCS procurements, we suggested that Congress consider requiring DOD to report on the relative advantages of each variant in carrying out the three primary LCS missions. In the National Defense Authorization Act (NDAA) for Fiscal Year 2014, Congress directed the Navy to complete a number of studies that are in line with our recommendations to provide additional information on some of the risk areas that we identified. The legislation restricts the obligation or expenditure of fiscal year 2014 funding for construction or advanced procurement for LCS seaframes 25 and 26 until the Navy submits the required reports and certifications. However, as LCS 25 and LCS 26 are not yet under contract, the Navy cannot use fiscal year 2014 money to fund these seaframes. As of the end of January 2014, Navy officials told us that they had just begun coordinating efforts and collecting data to write reports as required in the NDAA for Fiscal Year 2014. A copy of this NDAA requirement can be found in appendix III. The Navy has made progress since our last report in demonstrating LCS capabilities. In particular, completing the initial deployment of an LCS with a mission package to an overseas location provided the Navy with important real-world lessons learned that are being used to refine plans for subsequent deployments. However, these deployments are not a substitute for operational testing. 
Completing further developmental and operational test events will continue to provide the Navy with valuable data with which it can evaluate the performance of systems and make adjustments, as needed. The Navy still has a great deal of learning to do about the ships, the integrated capability that they are intended to provide when equipped with the mission packages, and how the overall LCS concept will be implemented. Not having adequate knowledge—such as the results of additional deployments and key operational test events—may result in the Navy buying ships that are more costly or burdensome to manage over the course of their service lives. Events such as rough water, shock, and total ship survivability trials are intended to provide confidence that the ships will last their intended lifespan and are survivable, while deployments and operational testing of initial mission packages help provide confidence that the LCS will meet its performance requirements. Moving forward without this information complicates potential design changes to seaframes or mission packages. As we have concluded in past work, the Navy's continued approach of procuring the ships before proving their capabilities through testing increases the risks of costly retrofits or reduced performance. In addition, the Navy's recent decision to accelerate the acquisition of mission packages further limits the flexibility that the program will have to adjust to any problems that may arise during operational testing. With the Navy's planned fiscal year 2016 contract awards for seaframes fast approaching, we believe the recommendations that we made in July 2013 are still important steps that the Navy can take to reduce risks to the program, but additional steps are also warranted. Further, the Navy's ability to manage the ships' weight has been constrained, as the contractors' reporting has not been accurate or in a format that would be most useful to naval engineers.
The Navy could improve the timeliness with which it reviews and comments on contractor weight reports. Tools are also available to improve the contractors' weight reporting, such as pursuing financial withholds and modifying the LCS contracts to include additional mechanisms to ensure better reporting. More accurate and timely reporting will help the Navy target the drivers of weight growth and assess the feasibility of the additional design changes being considered for both seaframe variants.

1. We recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics require, before approving the release of the request for proposals for future contracts for either seaframe variant, that both variants:

a. have deployed to a forward overseas location;

b. have completed rough water, ship shock, and total ship survivability testing; and

c. have completed initial operational test and evaluation of the SUW mission package on the Freedom variant and the MCM mission package on the Independence variant.

2. To improve the Navy's ability to effectively oversee weight management of the LCS seaframes, we recommend that the Secretary of the Navy direct the LCS Seaframe Program Manager to:

a. take steps to ensure that the Navy completes its reviews and submits comments, if any, on the weight reports to the contractors within the timeframes dictated by the contract; and

b. consider actions to make the contractors more responsive to the Navy's identified accuracy and content problems in the weight reports, including pursuing financial withholds or modifying the contract language.

We provided a draft of this report to DOD for review and comment. In its written comments, which are included in appendix IV, DOD partially agreed with our recommendations to complete certain testing and deployment activities before approving the release of the request for proposals for future seaframes. DOD agreed with our recommendations related to seaframe weight management.
DOD officials stated that they have every intention of completing as many as possible of the test and demonstration items that we identified in our recommendation before releasing the request for proposals (RFP) for future seaframe contracts, but disagreed that the release of the RFP should hinge on completion of these events. DOD officials stated that creating a break in the production of the seaframes would increase program costs and have significant industrial base considerations. We are not advocating a production break, but we do believe it is conceivable that subsequent seaframe unit cost increases could be lower than the potential increases in overall program costs if testing uncovered the need for costly retrofits, redesign, and/or requirements changes that would then have to be made to ships in production. We chose to use the release of the RFP as a decision point because we believe that drafting an RFP that is based on key knowledge of LCS performance serves as an important risk mitigation tool for the government. Specifically, if the government goes forward with an RFP that is not fully informed by the results of the important test activities we identified in our recommendation, any changes that might be later identified as necessary would have to be reflected in an amended RFP, which could delay the award of contracts and potentially cause a production break. The department noted that a Defense Acquisition Board (DAB) review is planned in the fiscal year 2016 time frame and that the Board will approve the Navy's acquisition strategy for LCS before additional seaframe contracts are awarded. DOD stated that this review will take into account the progress of testing for both seaframes, and that every item we identified in our recommendation will be completed prior to the DAB except for the completion of the full-scale ship shock trials.
We believe it will be important that the department make certain the DAB review occurs at a point when the Navy can be directed to pause and revise its acquisition strategy and the RFP for LCS if necessary to ensure it reflects the most current knowledge gained from testing and deployments. It is possible that continued testing could inform changes to the numbers of each variant procured, changes that would need to be incorporated into the acquisition strategy before the DAB authorizes the Navy to continue to buy more seaframes. Further, we continue to believe that the Navy needs to identify a means to conduct a full-scale ship shock trial before committing to contracts for further seaframes. Because the LCS seaframes are based on commercial designs—though heavily modified—we believe these trials are important to ensure that the Navy is buying ships that will meet its survivability needs. This is especially true with the Independence variant, which is based on a novel hullform for the Navy and represents the Navy's first-time use of aluminum for a ship of this size. The Navy has itself identified that it lacks sufficient data on which to confidently base assumptions of this variant's performance in an underwater shock environment, which makes completing this test event before DAB review and award of contracts important. DOD agreed to take steps as we recommended to improve the weight management of the LCS seaframes, and plans to review within 180 days the process by which it reviews the contractor weight submissions and the methods by which it can ensure that the contractors are responsive to Navy accuracy and content concerns. We also provided relevant portions of the draft report (in particular, the sections on weight management) to the contractors and incorporated their technical comments as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretaries of Defense and the Navy.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To assess the Navy’s lessons learned from the deployment of the first Littoral Combat Ship (LCS) to Singapore, we analyzed reports from various LCS stakeholders, including Navy 7th Fleet Destroyer Squadron 7 (responsible for LCS during the deployment), and the Office of the Chief of Naval Operations (OPNAV). We also traveled to the forward-deployed location in Singapore, and interviewed USS Freedom’s commanding officer and some of the crew; as well as officials from the LCS fleet introduction program office (PMS 505); Destroyer Squadron 7; and Commander, Logistics Force Western Pacific. We also traveled to Japan to interview 7th Fleet officials involved with LCS logistics; policy and planning; warfare requirements; strategy; and operations. Furthermore, we conducted interviews with relevant Navy officials, such as the OPNAV office that is the resource sponsor for the LCS program (N96); LCS seaframe program office (PMS 501); and the LCS and Joint High Speed Vessel Council. To assess what knowledge the Navy has obtained about LCS since our previous report, we analyzed DOD, Navy and contractor documents, including test and evaluation letters of observation from the Commander, Operational Testing and Evaluation Force (COTF); testing reports from the Director, Operational Testing and Evaluation (DOT&E); as well as the Board of Inspection and Survey (INSURV) reports. We analyzed documentation from the LCS mission module program office (PMS 420), including an LCS contractor test report. 
Furthermore, we interviewed officials from OPNAV; the LCS and Joint High Speed Vessel Council; Naval Sea Systems Command (NAVSEA); DOT&E; COTF; INSURV; the Naval Surface Warfare Center; the LCS seaframe program office; the LCS mission module program office; and the Navy Modeling and Simulation Office. Finally, we leveraged previous GAO reports on the LCS dating back to 2005. To assess additional risks for the LCS program related to weight management, we analyzed Navy and contractor documentation including weight reports; inclining experiment reports; LCS seaframe contracts; the LCS Capabilities Development Document; and seaframe building specifications. To understand weight management and reporting practices, we analyzed the Society of Allied Weight Engineers Recommended Practices and NAVSEA policies on weight management. Furthermore, we conducted interviews with Lockheed Martin; Bath Iron Works; Marinette Marine; Austal USA; the American Bureau of Shipping; and the Navy's Supervisor of Shipbuilding. To evaluate the naval architecture limits of the LCS seaframes, we interviewed retired naval architects with significant Navy ship design experience, as well as American Bureau of Shipping representatives. We also met with technical experts from the Naval Systems Engineering Directorate (SEA05). We analyzed Navy and contractor documents, including the LCS test and evaluation master plan; DOT&E test and evaluation master plan approval memo; the LCS mission modules Milestone B documentation; and the Navy's acquisition decision memorandum. Furthermore, we conducted interviews with officials from DOT&E; COTF; and INSURV. We conducted this performance audit from September 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Selected Navy Comments on Recent LCS Weight Reports

Navy comment: "Main structural elements such as shell, framing and decks are calculated by frame… This results in nearly impossible review and audit capability. This method does not adhere to Extended Ship's Work Breakdown Structure (ESWBS)…" (Associated prime contractor: Austal USA)

Navy comment: "Weight report structure and format are problematic. The subject quarterly weight report consisted of various disjointed, mislabeled, and conflicting files which are difficult to assemble and correlate for reporting purposes." (LCS 6 and follow-on Independence variant seaframes)

Navy comment: "It appears the computer aided design effort to incorporate detail data for Group 1 weights is fraught with ESWBS classification errors… It appears Group 1 has been arbitrarily classified to the point of a loss of control of the details." (LCS 6 and LCS 8)

Navy comment: "The LCS 3 quarterly weight reports overestimated the ship by approximately 90 tons. The details of that same database have essentially been carried over to the LCS 5 accepted weight estimate and subsequent quarterly weight reports… The last two quarterly weight reports have not shown any progress toward improving the details (via recalculations) and reducing risk…" (Associated prime contractor: Lockheed Martin; LCS 5 and)

Navy comment: "The LCS 6 and LCS 8 calculated light ship condition with remaining margin is projected to be 56 tons lighter than LCS 4 inclined light ship condition. When the… correction is incorporated into the projection, the full load condition for weight exceeds the (naval architectural limit) for displacement by… 30 tons. Specifically, the service life allowance for weight is deficient by 30 tons. This is a serious and unprecedented situation that has to be addressed quickly."

Navy comment (non-submission): "Submittal did not include an updated quarterly weight report… a no-submittal indicates the projected full load of the ship at delivery is currently unknown." (LCS 6 and follow-on Independence variant seaframes)

Navy comment: "Mission package descriptions, weights and centers have not been updated in years. The Contractor must update the mine countermeasures and surface warfare mission packages with the latest values."

Section 124 of the National Defense Authorization Act for Fiscal Year 2014 restricts the obligation or expenditure of fiscal year 2014 funding for construction or advanced procurement for LCS seaframes 25 and 26 until the Navy submits the required reports and certifications. This section reads:

SEC. 124. LIMITATION ON AVAILABILITY OF FUNDS FOR LITTORAL COMBAT SHIP

(a) LIMITATION—None of the funds authorized to be appropriated by this Act or otherwise made available for fiscal year 2014 for construction or advanced procurement of materials for the Littoral Combat Ships designated as LCS 25 or LCS 26 may be obligated or expended until the Secretary of the Navy submits to the congressional defense committees each of the following:

1) The report required by subsection (b)(1).

2) A coordinated determination by the Director of Operational Test and Evaluation and the Under Secretary of Defense for Acquisition, Technology, and Logistics that successful completion of the test evaluation master plan for both seaframes and each mission module will demonstrate operational effectiveness and operational suitability. 
3) A certification that the Joint Requirements Oversight Council—

a) has reviewed the capabilities of the legacy systems that the Littoral Combat Ship is planned to replace and has compared such capabilities to the capabilities to be provided by the Littoral Combat Ship;

b) has assessed the adequacy of the current capabilities development document for the Littoral Combat Ship to meet the requirements of the combatant commands and to address future threats as reflected in the latest assessment by the defense intelligence community; and

c) has either validated the current capabilities development document or directed the Secretary to update the current capabilities development document based on the performance of the Littoral Combat Ship and mission modules to date.

4) A report on the expected performance of each seaframe variant and mission module against the current or updated capabilities development document.

5) Certification that a capability production document will be completed for each mission module before operational testing.

(b) REPORT—

(1) IN GENERAL—Not later than 60 days after the date of the enactment of this Act, the Chief of Naval Operations, in coordination with the Director of Operational Test and Evaluation, shall submit to the congressional defense committees a report on the current concept of operations and expected survivability attributes of each of the Littoral Combat Ship seaframes.

(2) ELEMENTS—The report required by paragraph (1) shall set forth the following:

a) A review of the current concept of operations of the Littoral Combat Ship and a comparison of such concept of operations with the original concept of operations of the Littoral Combat Ship.

b) An assessment of the ability of the Littoral Combat Ship to carry out the core missions of the Cooperative Strategy for 21st Century Seapower of the Navy. 
c) A comparison of the combat capabilities for the three missions assigned to the Littoral Combat Ship seaframes (anti-surface warfare, mine countermeasures, and anti-submarine warfare) with the combat capabilities for each of such missions of the systems the Littoral Combat Ship is replacing.

d) An assessment of expected survivability of the Littoral Combat Ship seaframes in the context of the planned employment of the Littoral Combat Ship as described in the concept of operations.

e) The current status of operational testing for the seaframes and the mission modules of the Littoral Combat Ship.

f) An updated test and evaluation master plan for the Littoral Combat Ship.

g) A review of survivability testing, modeling, and simulation conducted to date on the two seaframes of the Littoral Combat Ship.

h) An updated assessment of the endurance of the Littoral Combat Ship at sea with respect to maintenance, fuel use, and sustainment of crew and mission modules.

i) An assessment of the adequacy of current ship manning plans for the Littoral Combat Ship and an assessment of the impact that increased manning has on design changes and the endurance of the Littoral Combat Ship.

j) A list of the casualty reports to date on each Littoral Combat Ship, including a description of the impact of such casualties on the design or ability of that Littoral Combat Ship to perform assigned missions.

(3) FORM—The report required by paragraph (1) shall be submitted in classified form and unclassified form.

In addition to the contact named above, the following staff members made key contributions to this report: Diana Moldafsky (Assistant Director); Greg Campbell; Jenny Chow; Christopher R. Durbin; G. Oliver Elliott; Laura Greifner; Kristine Hassinger; Kenneth Patton; C. James Madar; Sabrina Streagle; Roxanna Sun; and Hai Tran.
LCS represents an innovative approach to Navy acquisitions and operations, consisting of a ship—called a seaframe—and reconfigurable mission packages. These packages provide combat capability to perform three primary missions: surface warfare; mine countermeasures; and anti-submarine warfare. The Navy plans to buy no more than 32 seaframes in two variants from two shipyards, and 64 mission packages, with an estimated acquisition cost of over $25 billion in 2010 dollars. GAO was mandated to examine elements related to the LCS program. This report examines (1) knowledge that the Navy has gained since GAO issued a report on the LCS program in July 2013 and (2) outstanding acquisition risks with the LCS program. GAO analyzed key documents, including test and weight reports, and interviewed Navy officials responsible for the LCS deployment and program officials. This report is a public version of a sensitive but unclassified report issued in April 2014. Since July 2013, the Navy has continued to demonstrate and test various facets of Littoral Combat Ship (LCS) systems and capability, but important questions remain about how LCS will operate and what capabilities it will provide the Navy. The first operational deployment of an LCS to Singapore gave the Navy an opportunity to examine key LCS concepts operationally. The deployment was limited to only one of the two variants carrying one of three mission packages. In addition, mechanical problems prevented the ship from spending as much time operationally as planned. As a result, some key concepts could not be tested. The Navy has completed some additional testing on the seaframes and mission packages, which has enabled the Navy to characterize performance of some systems, but performance has not yet been demonstrated in an operational environment. Outstanding weight management and concurrency risks related to buying ships while key concepts and performance are still being tested continue to complicate LCS acquisitions. 
Initial LCS seaframes face capability limitations resulting from weight growth during construction. This weight growth has resulted in the first two ships not meeting performance requirements for sprint speed and/or endurance, as well as potentially complicating existing plans to make additional changes to each seaframe design. Several seaframes now do not have the required amount of service life allowance—margin to accommodate future changes without removing weight over the ship's lifetime—but Navy officials said they have a plan to recover the service life allowance on the Independence class variant. The Navy has not received accurate or complete weight reports from the seaframe prime contractors, and the Navy's lengthy review process has hindered a timely resolution of the Navy's concerns. Additionally, a number of significant test events, including rough water, shock, and total ship survivability trials, will not be completed in time to inform upcoming acquisition decisions—including future contract decisions. Finally, the Navy's recent decision to accelerate low rate initial production of mission packages above the quantity necessary for operational testing limits the flexibility that the program will have to adjust to any problems that may arise during operational testing. GAO recommends that the Navy (1) demonstrate certain capabilities for both LCS seaframe variants before the Navy is approved for future contract awards and (2) ensure a timely review of contractor seaframe weight reports and take actions to make contractors more responsive to comments on the reports' content. DOD agreed with the weight report recommendation and partially agreed with the other, noting that it intends to complete as much testing as possible—but not all—before releasing the request for proposals for future contracts.
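The service life allowance shortfall described above is simple margin arithmetic: projected full load weight measured against the naval architectural limit for displacement. A minimal sketch with hypothetical tonnages (only the 30-ton deficiency mirrors the figure quoted in the Navy's comments; the limit and full load values below are illustrative assumptions):

```python
# Illustrative sketch of a service life allowance (SLA) check.
# Displacement figures are hypothetical placeholders; only the
# 30-ton deficiency mirrors the shortfall reported for the seaframes.

def service_life_allowance(naval_arch_limit_tons: float,
                           projected_full_load_tons: float) -> float:
    """Margin available for future weight growth over the ship's life:
    positive means room remains, negative means the limit is exceeded."""
    return naval_arch_limit_tons - projected_full_load_tons

limit = 3_100.0       # hypothetical naval architectural limit (tons)
full_load = 3_130.0   # hypothetical projected full load condition (tons)

sla = service_life_allowance(limit, full_load)
print(f"Service life allowance: {sla:+.0f} tons")
if sla < 0:
    print(f"Deficient by {-sla:.0f} tons; weight must be removed before "
          "any future changes can be accommodated.")
```

Because this margin is computed directly from contractor weight estimates, inaccurate or late quarterly weight reports translate straight into uncertainty about whether a seaframe still has any allowance left.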
The ICR gas turbine engine program was established in the mid-1980s to develop an improved, fuel-efficient surface ship propulsion system. In December 1991, the Navy awarded a contract to the Westinghouse Electric Corporation for the advanced design and an option for full-scale development of the engine. Westinghouse's engine development team includes Rolls-Royce Public Limited Company (United Kingdom), AlliedSignal Aerospace Incorporated, and CAE Electronics. The engine is essentially an advanced gas turbine engine, similar to one used on a large commercial aircraft. It is being adapted for marine use by adding a recuperator, an intercooler, and other major components. Housed in a special enclosure, the engine also has a lube oil module, an off-engine intercooling module, and a digital control system specifically built for shipboard application. A critical component of the engine is the recuperator, which uses engine exhaust to preheat compressed air before fuel combustion, allowing the engine to use less fuel. For example, the Navy expects the ICR engine to achieve a weighted average improvement of 30 percent in fuel efficiency for a mechanical drive destroyer. Figure 1.1 shows a cut-away drawing of the ICR gas turbine engine in its planned enclosure. Portions of the ICR program are a collaborative effort among the United States, British, and French navies. Memorandums of understanding signed between the United States and the two other countries relate to the development of an advanced, fuel-efficient ship propulsion system to satisfy common operational requirements and meet emerging environmental emission standards. The memorandum of understanding with the United Kingdom calls for the joint development and qualification testing of the ICR engine. Specifically, the United Kingdom is responsible for providing an ICR test facility along with fuel, utilities, and manpower to support up to 2 years, or 1,500 hours, of developmental testing. 
The memorandum was signed on June 21, 1994, for a 5-year period. The 10-year memorandum signed by France, in August 1995, calls for the joint adaptation and testing of an ICR engine upgrade for reducing exhaust emissions. The U.S. Navy estimates the ICR program's total developmental cost at $415 million, of which $223.6 million had been spent through fiscal year 1995. These amounts include foreign financial contributions of $15.8 million from the United Kingdom and $15 million from France. Although the Navy has classified the engine as a preplanned product improvement program for the DDG-51 destroyer, it will not decide whether to install the ICR engine on the destroyer until January 1997. The British and French navies are completing the design of a multinational frigate, known as the Horizon, and are considering the engine as its propulsion system. The only operational ICR test facility established to date is at Pyestock, United Kingdom. In a September 1995 letter to the Navy's Deputy Chief of Naval Operations (Resources, Warfare Requirements and Assessments), the Navy's Commander in Chief, Atlantic Fleet, recommended that the ICR engine not be funded in the future, noting that "in this year's . . . budget process, the ICR Gas Turbine Engine Program stands out as a major cost without a realistic prognosis for long-term benefit." He stated that the engine's long-term cost-benefit projections are speculative at best and that its technology will most likely become obsolete before a return on its investment is realized. He also stated that the engine is not a viable candidate for existing ships due to its large size, weight, and cost. In an October 1995 reply, the Deputy Chief of Naval Operations stated that the Navy may decide the fate of the engine program as it finalizes its budget submission for fiscal year 1998. 
In November 1995, a high-level Navy official informed us that the Navy's need for the engine was marginal compared to other current priorities and that he believed the Center for Naval Analyses' ICR report does not make a compelling economic case for the continued development of the engine. However, he also noted that the Department of Defense (DOD) supported the international aspects of the program and that the results of the upcoming developmental testing will be critical to determining the program's future. In a September 1994 cost-benefit analysis for the Assistant Secretary of the Navy (Research, Development and Acquisition), the Center for Naval Analyses looked at the ICR engine and an improved version of the current DDG-51 engine. The report, which was prepared prior to the initial test of the engine and recuperator, states that "(t)he economic payoff for a fuel-efficient engine is so long-term that it might not be an attractive investment in the private sector, but the eventual benefits of either improved engine are not in doubt, only the near-term affordability." The analysis stated that the Navy's 1993 shipbuilding plans for gas turbine surface ships are less than half what they were expected to be in 1987 and that such a large reduction could call into question the idea of a costly ICR engine development paid for by fuel savings. Further, the remaining development costs for the engine were significant. It would take until at least 2026 for the cost savings from the engine to equal the Navy's investment, and the Navy needed to determine what priority it should give to the engine's development. The analysis concluded that while existing contractual and political obligations would make cancellation of the engine an unpleasant choice, the high cost to develop the engine—estimated to average $40 million per year through fiscal year 1999—means that program cancellation must be considered an option. 
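The break-even reasoning in the analysis can be illustrated with a short sketch: cumulative fleet-wide fuel savings accrue year by year until they catch up with the development investment. The inputs below are assumptions chosen only to reproduce the reported break-even point of roughly 2026; CNA's actual model weighed ship classes, fuel prices, and phased deliveries in far more detail.

```python
# Illustrative break-even sketch: in what year do cumulative fleet-wide
# fuel savings first equal the development investment? All inputs are
# hypothetical; only the ~2026 break-even point mirrors the analysis.

def break_even_year(start_year: int, investment_m: float,
                    candidate_ships: int, savings_per_ship_m: float) -> int:
    """Walk forward year by year, accruing fuel savings across the
    candidate fleet, until the investment is recovered."""
    annual_savings_m = candidate_ships * savings_per_ship_m
    cumulative_m, year = 0.0, start_year
    while cumulative_m < investment_m:
        cumulative_m += annual_savings_m
        year += 1
    return year

# Hypothetical: a $415M program, 79 candidate ships, ~$0.2M of fuel
# savings per ship per year does not break even until 2026.
print(break_even_year(1999, 415.0, 79, 0.2))  # -> 2026
```

The sketch also shows why shrinking shipbuilding plans mattered so much to the analysis: halving the candidate fleet roughly doubles the number of years to break even.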
In a December 1994 letter to the Chairman of the House Committee on Armed Services, the Secretary of the Navy stated that the report’s analysis supported the continued development of the ICR engine because of potential future fuel savings. According to the Navy, the ICR engine is expected to provide military advantages, such as increased range and time on station for the DDG-51, which the Navy considers desirable and which formed the basis for DOD’s approving the engine as a preplanned product improvement for the DDG-51. However, Navy officials have raised concerns about the viability of placing the engine on the DDG-51. Officials from the DDG-51 program office stated that the destroyer is currently equipped with a reliable gas turbine engine and that equipping it with the unproven ICR engine is a questionable decision. They noted that the Navy’s next generation surface combatant, planned for 2003, appeared to be a better candidate for the engine because it could be designed from the start to accept the engine. An ICR program official also called the ICR engine’s use on the DDG-51 questionable but noted that this decision gives the Navy an immediate need for the engine. He agreed that the Navy’s next generation surface combatant would be a better candidate since it could be designed to accept a new propulsion system. In 1992, we reported that the ICR program lacked adequate management controls, such as milestone reviews and comprehensive, independent cost estimates. In February 1994, the Under Secretary of Defense designated the ICR engine program as a preplanned product improvement for the DDG-51 destroyer in an effort to improve its management and ensure that it was subject to the approval of the Defense Acquisition Board and an independent cost estimate. The decision, however, on whether to actually use the ICR engine on the DDG-51 will not take place until January 1997. 
In addition, the first production engines would not be installed in a DDG-51 until over 7 years later, in 2004. The initial engine is expected to be ordered in 2001. If the engine is used on the DDG-51, the Navy now plans to put it on only the last nine destroyers to be built. The Center for Naval Analyses cost-benefit analysis of the ICR engine concluded that the engine should not be used on the DDG-51 due to the high cost to fit the engines on ships that were not designed for them and the small number of destroyers (14 at that time) remaining to be built. The analysis also noted that the projected break-even point between the cost savings generated by the engine and the Navy’s investment, based on 79 possible candidate ships (including the DDG-51 destroyers), would not occur until about 2026. If the engine is not put on the DDG-51, as the analysis recommends, then the number of identified candidate ships would be reduced to 65. In either case, the analysis noted that the cost of replacing the DDG-51 engine is significant. The analysis estimates that the cost to equip a new DDG-51 with two ICR engines is $12.4 million (in fiscal year 1994 dollars) more than the current engines. This increase in cost includes design, shipbuilding, and engine costs. This cost compares with the $4.9 million cost increase estimate for the other gas turbine engine in this study (a more fuel-efficient version of the current engine). The analysis suggests that the Navy confirm this estimate before acting on its recommendation. The Westinghouse contract requires the development of an ICR engine that will occupy the same space as the existing engine in the DDG-51. Current plans call for each new destroyer to be equipped with two ICR engines and two existing gas turbine engines. These plans present design and integration problems for the DDG-51 because the ship’s engine compartment will need to be redesigned to accommodate the larger ICR engine. 
Since the ICR engine module is expected to weigh two and one-half times more than the existing engine system, the engine compartment will require substantial modification to achieve the structural strength needed to support the added weight of the engine. In addition, with two different propulsion systems on each ship, the Navy will have to maintain individual logistics for each system. In March 1995, two shipyards building the DDG-51, Ingalls Shipbuilding Incorporated and Bath Iron Works Corporation, submitted reports to the Navy concerning the feasibility of installing an ICR engine in the DDG-51. Ingalls reported that while the installation was technically feasible, maintaining the ICR engine would be difficult because it has about 30 percent more preventive maintenance requirements than the current engine. Also, Ingalls reported “unlike the (current engine), most in-place maintenance activities will not be convenient or expeditious due to the very limited access to the ICR engine components.” Bath Iron Works concluded that replacing two of the present propulsion gas turbines with ICR gas turbines would have a significant negative impact on the ship and a clear potential for cost growth. The ICR engine’s recuperator, a critical component necessary for obtaining improved fuel economy, is experiencing serious developmental and testing problems. It failed after only 17 hours of testing with the engine in January 1995. The failure occurred almost 1-1/2 years after the Navy took the unusual step of initiating full-scale development of the engine concurrently with its advanced development. Since the failure, the ICR program has experienced technical and other problems that have severely affected program cost, schedule, and performance. The engine, without a recuperator, started developmental testing in July 1994. 
In December 1994, when the Navy first tested the engine with a recuperator, the engine demonstrated its potential effectiveness by increasing engine power from 7,000 horsepower to 11,500 horsepower with no increase in fuel consumption. In January 1995, however, the original recuperator failed after only 17 of 500 hours of planned testing. Test operations were terminated when a significant rise in the turbine inlet temperature occurred. This rise in temperature was attributed to the failure of the heat exchanger within the recuperator due to numerous air leaks. Westinghouse, the prime contractor, identified 26 different recuperator failures, many of which were due to basic flaws in the unit's internal design and construction. In response, the Navy approved a contractor recovery plan to redesign the recuperator and requested an additional $11 million from Congress to fund this effort. As a result, the Navy extended the advanced development phase of the contract by 21 months, until September 1997. The plan allowed, however, key recuperator tests to be conducted concurrently with the redesign of the recuperator. One program official described the plan as aggressive, while another told us that this was necessary to accomplish enough testing (such as a key 500-hour engine test) to support a planned late 1996 decision to order production engines for the Horizon frigate. Between March and November 1995, the Navy reduced projected program funding by $27.3 million between fiscal years 1996 and 2000. In November 1995, the Navy also ordered Westinghouse to stop work on designing and manufacturing later generation recuperators. The stop order was issued due to the decline in program funding and the inability of the contractor to meet the delivery date for the modified recuperator. This latter problem was due, in part, to continuing contractor quality control problems. 
According to the Navy, the stop work order reduced the amount of concurrency in the recuperator recovery program by allowing time to review and incorporate various test results into thermal computer models and to evaluate test results from the modified recuperator. The Navy also requested that the contractor propose possible changes to current contract requirements, including revising the schedule, estimating costs by quarter, and eliminating test efforts related to integration of the ICR engine into the DDG-51. In response to the funding reduction and the stop work order, Westinghouse notified the Navy that while the technical problems associated with the recuperator were understood and solutions were in place, the engine's development would be delayed an additional 20 months, until May 1999. In addition, Westinghouse recommended, among other things, that the number of preproduction engines used for developmental testing be reduced from five to two. Westinghouse also stated that cost growth has occurred and identified potential future development and production cost risks. Westinghouse agreed to provide the Navy an overall recuperator recovery strategy by May 1996. In March 1996, an ICR program official told us that the impact of the initial recuperator failure on the ICR program has been catastrophic and that the Navy has yet to recover from it. The Navy expects that the developmental program's scope will be reduced, resulting in testing delays and cost growth. Navy and DOD officials told us, in commenting on our draft report, that while they believe significant progress has been achieved in solving the problems associated with the recuperator failure, the ICR development program will not recover its schedule slippage and only a technical recovery is possible. 
In our September 1992 report, we stated that "without reliable estimates of both (1) the cost of acquiring the ICR engine and related technology and (2) the corresponding savings in operational cost that it might produce, it is our view that any return on the sizeable investment this program represents is speculative at best." Our view remains unchanged because of concerns about the realism of the ICR engine's development schedule and the concurrency in the recovery plan test schedule; recognized difficulties in integrating the engine into the DDG-51 fleet; the overall high cost of the program; and total program costs that are not fully covered in existing budget plans. Specifically, the Navy has not funded the cost to finalize and perform ICR developmental testing at an established U.S. facility ($17 million), to integrate an ICR engine into the DDG-51, or to retrofit a pilot ship for testing at sea. In addition, to keep total program costs at $415 million, the Navy plans to reduce the scope of its developmental test efforts and use funds intended for other test purposes to offset the expected $25 million recuperator recovery program cost. Also, in May 1994, the ICR engine contract was modified to delete special tooling and special test equipment costs, since the contractor agreed to fund these costs if Navy funds were not available. The contractor is to maintain a separate account of these costs for future recovery. Future payment for such tools and equipment would increase total program costs. A major factor driving the ICR engine's development schedule has been the need to decide, by late 1996, whether the engine will be used in the international Horizon frigate. The 1994 memorandum of understanding with the United Kingdom states that its goal was to move a critical ICR engine preproduction decision milestone to mid-1996, in order to advance the initial operating capability date for both the DDG-51 and the Horizon frigate. 
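The budget gap described above can be tallied directly: the $415 million cap excludes several known obligations. In the sketch below, only the $17 million test facility figure is taken from this report; the other items are real but unquantified, so they appear as placeholders.

```python
# Sketch of costs outside the stated $415M program total. Only the
# $17M U.S. test facility figure is from the report; None marks items
# the Navy has not estimated (DDG-51 integration, pilot ship retrofit,
# deferred contractor tooling and test equipment costs).

stated_total_m = 415.0
uncovered_costs_m = {
    "U.S. developmental test facility": 17.0,
    "DDG-51 engine integration": None,
    "pilot ship retrofit for at-sea testing": None,
    "deferred special tooling and test equipment": None,
}

known_extra_m = sum(v for v in uncovered_costs_m.values() if v is not None)
unquantified = sum(1 for v in uncovered_costs_m.values() if v is None)
print(f"Stated total: ${stated_total_m:.0f}M; at least ${known_extra_m:.0f}M "
      f"in known uncovered costs, plus {unquantified} unquantified items.")
```

However the unquantified items are eventually priced, any true program total necessarily exceeds the stated $415 million figure.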
A Navy program official acknowledged that meeting that date was part of the reason the Navy approved an aggressive recuperator recovery plan, which included redesign of the recuperator before receiving the results of key tests. During 1995, the House Committee on Appropriations recommended, in its report on the fiscal year 1996 DOD appropriations bill, that the program be terminated because of concerns about serious technical problems, high unit cost, and program cost-effectiveness. In a July 13, 1995, letter to the Chairman of the House Appropriations Committee, the British Ambassador expressed his concern about funding for the ICR engine program. He stated that the United Kingdom plans to use the engine for its next generation of warships. He noted, however, that if U.S. funding for the ICR program were eliminated by Congress, it would be incomprehensible to the British government and could encourage a movement toward a protectionist European defense market. In an August 28, 1995, letter to the same Chairman, the Secretary of Defense also expressed concern over the possibility that the House appropriations bill would delete all ICR program funds and terminate the program. Noting that the ICR engine is a candidate for all future nonnuclear Navy surface ships, he stated that the United Kingdom and France are committed to fielding the engine on their next generation of surface combatant ships and that termination of the program would be a potential embarrassment for the U.S. government. Full funding was restored to the program in conference between the Senate and House. An additional $15.4 million, primarily for the recuperator recovery program, was also appropriated. Despite some progress made in improving the recuperator recovery plan, the test data necessary for decision making will still be limited.
The original recuperator recovery plan recommended that a test unit and three additional generations of recuperators be manufactured during the developmental effort. Each would be designed with a longer service life than the previous one and would provide different solutions to address the failures. However, much of the testing of one generation would be conducted concurrently with the redesign of the next generation recuperator, thus severely limiting the contractor’s ability to improve the redesign based on test results. For example, a series of recuperator core component tests (there are eight of these heat-exchanging cores in a recuperator) was scheduled simultaneously with the redesign of the next recuperator. To support the redesign efforts, a series of component tests is planned with a full-sized recuperator core. Such component testing had not been performed on the original recuperator due to the manufacturer’s attempt to meet delivery schedules for developmental testing. The manufacturer was behind due to (1) delays in awarding the subcontract to AlliedSignal, (2) refurbishing of the brazing furnace to satisfy safety requirements, and (3) manufacturing additional core units to replace poorly manufactured component units. The recovery plan concluded that these component tests “are crucial to support the design evolution of the core configuration and are more effective and provide earlier test data.” Due to delays in receiving these core component test results, which are necessary to validate model predictions, the Navy directed the contractor to stop testing the modified recuperator in January 1996. This action was necessary since the contractor had failed to provide substantiating data from the core tests to allow certain engine test maneuvers.
Furthermore, in later correspondence, the Navy denied a particular test maneuver since it believed the contractor’s proposed approach was inconsistent with the long-range requirement of extending the modified recuperator’s useful life for future testing. Within a week of being told to stop testing, the contractor resumed engine testing with the modified recuperator. A blue ribbon panel that reviewed the recovery plan determined that available test results were inadequate to predict future problem areas and the recuperator’s operational life and to validate performance models. Since the recuperator’s failure, the engine manufacturer has been testing the engine without a recuperator, further limiting the amount of available test data and the contractor’s ability to validate performance models and engine performance. According to Navy officials and documents, the need to have a propulsion system available for ships in development, especially the new multinational frigate, drove an aggressive recuperator recovery plan to redesign recuperators without the benefit of results from tests of individual cores and the environmental test data from a special test unit. Examination of the failed recuperator and additional materials test results, however, contributed to the design effort. The special test unit, which was created using six cores from the failed recuperator and two unused cores that had been set aside due to questionable manufacturing quality, replaced the failed recuperator. The test objectives of the special unit included providing data for refining, developing, and validating analytical computer models. Modeling new design concepts is a key factor in any developmental effort. The special unit operated for about 6 hours and demonstrated that the recuperator could be operated safely by gradually increasing engine power to obtain idle speed and having the recuperator partially active.
The unit was extensively instrumented to gain detailed information about the operational environment. One of the recuperator recovery plan’s objectives was to deliver a redesigned recuperator to the Pyestock test facility by October 31, 1995, but an additional schedule slippage delayed delivery until December 1995. The slippage, however, enabled the Navy to obtain some additional preliminary core component test results that were used to establish boundaries as to what test operations will be performed. For example, the engine could not initially exceed 40 percent of full power, nor could the contractor restart a hot engine without risking damage to the recuperator. As the Navy restructures the ICR engine’s development program, it faces two major decisions concerning the test program’s infrastructure. The first decision is whether and how it will use an ICR test facility already built, but not operational, in Philadelphia. Prior to the recuperator failure, the Navy had hoped to advance significantly the development of the ICR engine by conducting joint testing in the United Kingdom and the United States. The second decision is whether it will test the ICR engine at sea in a pilot ship. Because of recuperator technical problems, funding reductions, and schedule delays, the Navy will not be able to accelerate engine development via planned joint land-based testing. Currently, the Navy plans to conduct almost all of its ICR engine developmental testing at the test site in the United Kingdom. In addition, it has yet to resolve questions related to the need to test the engine at sea. The Navy signed an advanced development phase contract with Westinghouse in 1991. In developing an ICR test facility, Westinghouse considered three potential test sites and selected Pyestock, United Kingdom, as its primary test site.
The subsequent memorandum of understanding with the United Kingdom provided for the United Kingdom to fund the operation of the test site for up to 2 years or 1,500 hours of testing. This in-country support was estimated to total $22 million in then-year U.S. dollars. The test facility in Pyestock began testing the ICR engine (without a recuperator) in July 1994. When the Navy advanced the ICR engine’s development schedule in 1993 by 21 months, it created a need for another ICR land test site. Both the Navy and Westinghouse believed that with two operational facilities they could conduct almost simultaneous engine tests in support of the faster development schedule. Based on the memorandum of understanding with the United Kingdom, the United States would be responsible for funding the Philadelphia test site. This test site would also perform required technical and operational testing for the U.S. Navy. While the Philadelphia test facility was completed in fiscal year 1995, it is not yet operational. This is due, in part, to funding reductions and recuperator technical problems that have resulted in major delays in the developmental testing of the engine. As a result, there is currently no ICR engine and recuperator available for testing at Philadelphia. In addition, the Navy has not provided adequate funding for the operation of the Philadelphia facility in support of desired joint developmental and qualification engine testing. Complicating the situation is the fact that the recuperator failure and the subsequent 41-month delay in the development program have eliminated one of the primary justifications—to speed up the engine’s development—for two land-based test facilities. The Navy now plans to conduct almost all developmental and qualification testing at Pyestock and, at the present time, to use the Philadelphia facility near the end of the program only for ICR engine shock testing.
The Navy and Westinghouse had originally expected that the Philadelphia test facility would allow a second 500-hour developmental test after a similar test had been performed at Pyestock. By conducting these tests almost simultaneously, the Navy believed it could complete the engine’s development 21 months early. The Philadelphia facility cost $5.4 million to construct. The Navy estimates the cost to fully equip and staff the Philadelphia test facility for a 500-hour test to be $17 million: $9 million in fiscal year 1996 and $8 million in fiscal year 1997. In the fiscal year 1996 budget, however, the Navy only received $4.5 million for this test. Navy officials told us that they would not partially fund this test and that the $4.5 million is currently being withheld by the Navy and may, sometime in the future, be rescinded. The Navy also had planned to conduct ICR-related testing at another test facility in Philadelphia. Using the DDG-51 test facility, which is built and operating, the Navy was going to accomplish tests required for integrating the ICR engine into that class of ship. Because of funding reductions and other problems, the Navy is considering eliminating this testing. Also, the ICR test facility was to have been used to test other future ship propulsion and power projects. If the facility is not made operational, this will not be possible. Thus, the Navy currently has an ICR test facility without operational capability and an ICR engine test strategy that is in a state of limbo. The Navy has not decided if it will test the ICR at sea because of the high cost involved. It estimates that it would cost between $5.8 million and $12.5 million to redesign a ship’s engine room and install an engine in a pilot ship. While no decision has been made, this is an important testing issue. A DDG-51 program official stated that it is Navy policy to test engines at sea.
A Navy testing official stated that a land-based test facility, by itself, is not adequate to fully evaluate the engine’s operational effectiveness and suitability because the facility does not represent a realistic ship and maritime environment. This is, in part, because the engine compartments on surface combatants are very limited in space compared to other surface ships (e.g., cargo ships), thereby presenting more challenges for repairing or maintaining the engine. Also, the Navy has not decided what type of pilot ship the engine will be tested on. A Navy official stated that the type of pilot ship selected is important due to the various electronic support equipment associated with the engine. This report raises many questions about the viability of the ICR engine program, and we believe DOD needs to reassess the need for and future direction of the program. Because the United States has entered into joint agreements with the British and French navies to develop this engine, the decisions on the future of the program are complicated and sensitive. We also believe that the use of the engine on the DDG-51 destroyer is inappropriate. Therefore, we recommend that the Secretary of Defense reassess the Navy’s continuing need for the new engine. In doing so, the Secretary needs to carefully consider how current agreements with U.S. allies affect the program, identify what effect the Navy’s ongoing efforts to restructure and rebaseline the ICR program will have, and determine what the Navy’s surface combatant ship future requirements actually are. If it is determined that the program should continue, the Secretary of Defense should direct the Secretary of the Navy to not use the engine in the DDG-51 destroyer; determine total program costs for developing and acquiring the engine relative to the Navy’s requirements for future surface combatant ships, including costs for U.S. test facilities and/or pilot ship engine testing; prepare a facility use plan for the U.S. 
test site; and prepare a test plan and schedule for the engine that provide sufficient assurance that it can transition from development to production and be realistically available for use in any U.S. ship. DOD said that it disagreed with our report, in large part, because the Secretary of Defense is satisfied with the Center for Naval Analyses’ assessment and does not need to reassess the program at this time, as we have recommended. However, DOD’s comments do not address the difficulties the program has encountered since the Center’s 1994 assessment. Specifically, with the January 1995 failure of the engine’s recuperator, the program has experienced serious design, manufacturing, and quality assurance problems. In response, the Navy instituted an aggressive recuperator recovery plan to maintain as much of the engine’s accelerated development schedule as possible. Only in November 1995, however, did the Navy realize that this approach would not work and order work stopped on redesigning the recuperator. An ICR program official has described the impact of the recuperator failure on the program as catastrophic and, as of May 21, 1996, the stop work order was still in effect. DOD also disagreed with our recommendation to not use the engine in the DDG-51 destroyer. DOD commented that the ICR engine was expected to provide military advantages to the DDG-51, such as increased range and time on station. While acknowledging that weight and size relative to the size of the DDG-51 engine room are important, DOD commented that it is technically feasible to put the engine on the ship. DOD did not comment, however, on our concerns about the high cost of putting the engine on the DDG-51. We would like to reiterate that the Center’s assessment recommended that the ICR engine not be used in the DDG-51 because of the high cost to fit these engines in a ship that was not designed for them.
Moreover, representatives from the DDG-51 program office, and even the ICR program office, have said that this is an inappropriate ship for the ICR engine. DOD commented that studies done by Ingalls Shipbuilding demonstrate that putting the ICR engine on the destroyer is technically feasible. However, DOD’s comments fail to note that Bath Iron Works concluded that the engine would have a significant negative impact on the ship and a clear potential for cost growth. Further, weight is clearly an issue when DOD tells us, in its technical comments, that the Navy will, if necessary, reduce the amount of fuel carried on the destroyer to counter the increased weight of the engine. Concerning our recommendation to determine total program costs for developing and acquiring the engine, DOD stated that total program costs have been estimated and the ICR engine should break even in about 2020. However, when we attempted to follow up on this statement, we learned that, as of April 1996, the Navy had yet to restructure and rebaseline the program. The Navy is in the process of restructuring the program to absorb the estimated $25 million cost to implement the recuperator recovery plan and expected reductions in out-year program funding. To accomplish this, the Navy is considering, among other things, reducing the number of preproduction engines under the contract from five to two. In addition, the Navy has not fully funded all of the test activities, including the U.S. land-based test facility and a pilot ship to test the engine at sea. It may even eliminate the planned DDG-51 integration testing. Thus, total program costs are not fully known and we are concerned about what test activities the Navy plans to reduce or eliminate to keep total program costs down.
In addition, the Center for Naval Analyses actually predicts the break-even point, when savings would equal the development cost, as being in 2026, not 2020 (based on the first engines being installed in a fiscal year 1999 DDG-51). Under current Navy plans, however, ICR engines would not be installed until 2004, meaning that the break-even point is likely to occur even later than 2026. Concerning our recommendation to prepare a test plan and schedule that provide sufficient assurance that the engine can transition from development to production and be realistically available for future use, DOD stated that if the decision was made to use the ICR engine on a particular ship, test planning and scheduling would be incorporated into that ship’s test and evaluation master plan. However, with the decision to put the engine on the DDG-51 at least a half year away and a stop work order still in effect, we are more concerned about the current test plan and schedule for the engine’s development. Until the Navy restructures and rebaselines the program, we will be unable to determine if test concurrency has been eliminated and if adequate time has been provided for developmental testing and the evaluation of test results. DOD also pointed out that the United States has no cognizance over the Horizon program and that the United Kingdom, France, and Italy would develop their own test plans. We have revised our recommendation to specify that it only applies to U.S. ship development. After carefully reviewing all of DOD’s comments, we continue to believe that the Secretary of Defense needs to reassess the ICR engine program and the Navy needs to resolve problems with the ICR engine’s recuperator and sufficiently test the engine prior to committing to its production, particularly since there appears to be no pressing U.S. requirement. We are also concerned about the growing cost of the program and, in particular, the cost to acquire and install the engine in the DDG-51.
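The break-even reasoning above can be illustrated with a simple model: the break-even year is roughly the fielding year plus total development cost divided by annual fleet savings, so slipping the fielding year slips break-even by the same amount. Only the $415 million development figure comes from this report; the annual savings rate below is purely hypothetical, and this sketch is not the Center for Naval Analyses' actual methodology.

```python
def break_even_year(fielding_year, development_cost, annual_savings):
    """Year in which cumulative operating savings equal development cost.

    development_cost: total development dollars (the report cites about $415 million).
    annual_savings: assumed fleet-wide annual operating savings once engines
    are fielded -- a hypothetical figure, not stated in the report.
    """
    return fielding_year + development_cost / annual_savings

# With an assumed $15 million/year in savings, fielding in fiscal year 1999
# yields break-even in the late 2020s; slipping fielding to 2004 slips
# break-even by the same five years.
print(round(break_even_year(1999, 415e6, 15e6)))  # 2027
print(round(break_even_year(2004, 415e6, 15e6)))  # 2032
```

The point of the sketch is that the sunk development cost fixes the payback interval; delaying installation simply shifts the entire interval later, which is why a 2004 fielding pushes break-even past the Center's 2026 estimate.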
While the U.S. Navy has entered into cooperative agreements with the United Kingdom and France, it is still funding about 93 percent of the engine’s estimated $415 million development cost. Program restructuring, schedule slippage, and expected cost increases will add to that amount. The program office’s decision to advance the engine’s schedule by conducting advanced development concurrently with full-scale development, 1-1/2 years before the engine and recuperator were tested together, and then, after the recuperator failure, to initiate an aggressive recuperator recovery plan heightens our concern about program management. We also are concerned that the Navy may significantly reduce the testing of the ICR engine in an attempt to offset program cost growth and the additional cost caused by the recuperator failure. We also continue to question the Navy’s proposal to put the engine on the DDG-51 and its decision to manage the program as a DDG-51 preplanned product improvement. DOD’s comments are presented in appendix I. In addition, DOD provided, for our consideration, several factual and technical corrections related to the report. In response, we have made changes to the report where appropriate. To obtain information for this report, we reviewed various program research and development documents, including the recuperator recovery plan, early concept and feasibility design studies, test plans and schedules, various development contracts, and other program documents. We interviewed officials in the offices of the Under Secretary of Defense (Comptroller/Chief Financial Officer), DOD’s Director of Operational Test and Evaluation, Assistant Secretary of the Navy (Research, Development and Acquisition), Navy’s Operational Test and Evaluation Force, and the Naval Sea Systems Command’s Advanced Surface Machinery Program, Engineering Division, Land Based Engineering Site, and Naval Surface Warfare Center.
We also reviewed various DDG-51 program documents and interviewed officials in the offices of the Under Secretary of Defense (Acquisition and Technology) and the DDG-51 Program Office. We also discussed our report with the Naval Audit Service. To assess and analyze the risks associated with the recuperator recovery plan, we attended two ICR bimonthly technical conferences where program officials and contractors discussed technical and testing issues, including engine performance, testing problems, and the recuperator recovery program. We compared information obtained at these conferences with various program and technical documents. We conducted our review from May 1995 to May 1996 in accordance with generally accepted government auditing standards. We are also sending copies of this report to the Chairmen and Ranking Minority Members, House Committees on National Security and on Government Reform and Oversight, Senate Committees on Armed Services and on Governmental Affairs, and Senate and House Committees on Appropriations; the Director of the Office of Management and Budget; and the Secretaries of Defense and the Navy. We will also provide copies to others upon request. This report was prepared under the direction of Thomas J. Schulz, Associate Director, Defense Acquisition Issues. Please contact him or me on (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix II. Robert L. Coleman
Pursuant to a congressional request, GAO provided information on the Navy's intercooled recuperated (ICR) engine program, focusing on the: (1) Navy's need for the engine; (2) cost, schedule, and performance of the program; and (3) impact of the Navy's test and development strategies. GAO found that: (1) some Navy officials are questioning the economic viability of the ICR engine program and have raised concerns over placing ICR engines on naval destroyers, since most destroyers are equipped with reliable propulsion systems; (2) engine development costs pose a significant economic investment; (3) some officials believe the engine should not be used on naval destroyers given the small number of new U.S. destroyers involved, adequacy of current destroyer engines, high cost of incorporating the engine, uncertainty of future integration plans, and current state of ICR development; (4) the Navy has not recovered from initial recuperator failure that resulted from design, manufacturing, and quality assurance problems; (5) a contractor is instituting a recovery plan to redesign future recuperators, but the plan is not allowing sufficient time to evaluate test data prior to ordering production ICR engines; (6) the Navy has interrupted work on redesigning future recuperators because of funding reductions, contractor quality control problems, manufacturing problems, and delivery delays; and (7) the Navy needs to decide how and when it will use the Philadelphia ICR test facility and if it will test the ICR engine at sea.
The 1958 Geneva Convention on the High Seas and the United Nations Convention on the Law of the Sea share the same definition of piracy, and, under that definition, piracy consists of any of several acts, including any illegal act of violence or detention, or any act of depredation, committed for private ends by the crew or the passengers of a private ship and directed against another ship, aircraft, persons, or property onboard another ship on the high seas; or against a ship, persons or property in a place outside the jurisdiction of any state. Additionally, according to both conventions, all states have the duty to cooperate to the fullest extent possible in the repression of piracy on the high seas or in any other place outside the jurisdiction of any state. Furthermore, both conventions authorize states to seize pirate ships or a ship under the control of pirates and arrest the persons and seize the property onboard, on the high seas or in any other place outside the jurisdiction of any state. In addition, a single piratical attack often affects the interests of numerous countries, including the flag state of the vessel, various states of nationality of the seafarers taken hostage, regional coastal states, owner states, and cargo owner, transshipment, and destination states. Somali pirates attack and harass vessels transiting the Indian Ocean and in the Gulf of Aden, a natural chokepoint that provides access to the Red Sea and the Suez Canal and through which over 33,000 ships transit each year. Pirates operate from land-based enclaves along the 1,880-mile coastline of Somalia, which is roughly equivalent to the distance from Portland, Maine, to Miami, Florida. Figure 1 illustrates the vast area in which incidents of piracy are occurring, 1,000 nautical miles from Somalia's coast. 
Figure 1 also shows the location of the Internationally Recommended Transit Corridor in the Gulf of Aden, where coalition forces have established naval patrols to help ensure safe passage for transiting vessels. To conduct their attacks, Somali pirates generally use small skiffs, carrying between four and eight persons armed with AK-47 rifles or similar light arms and, at times, with rocket-propelled grenades. Once they target a vessel, pirates typically coordinate a simultaneous two- or three-pronged attack from multiple directions. Depending on the characteristics and acquiescence of the victim vessel, pirates can board and commandeer a vessel in less than 20 minutes. Pirate vessels usually are equipped with grappling hooks, ladders, and other equipment to assist the boarding of a larger craft. Pirate vessels vary in sea-worthiness and speed with some able to travel at speeds between 25 and 30 knots and operate in high sea conditions, while others have more restricted capabilities. According to the Office of Naval Intelligence, Somali pirates do not typically target specific vessels for any reason other than how easily the vessel can be boarded. Pirates patrol an area and wait for a target of opportunity. Vessels that travel through the high-risk area at a speed of less than 15 knots and have access points close to the waterline are at higher risk of being boarded and hijacked. According to a June 2010 self-protection guide published by maritime industry organizations, there have been no reports of pirates boarding ships proceeding at speeds over 18 knots. Figure 2 shows U.S. authorities boarding a suspected pirate skiff. Unlike pirates in other parts of the world, Somali pirates kidnap hostages for ransom and, up to this point, have not tended to harm captives, steal cargo, or reuse pirated ships for purposes other than temporarily as mother ships. 
Mother ships are typically larger fishing vessels often acquired or commandeered by acts of piracy that pirates use to store fuel and supplies and tow skiffs, which allow them to operate and launch attacks farther offshore. This “hostage-for-ransom” business model is possible in part because the pirates have bases on land in ungoverned Somalia where they can bring seized vessels, cargoes, and crews and have access to food, water, weapons, ammunition, and other resources during ransom negotiations. In an ungoverned state with widespread poverty, the potential for high profits with low costs and relatively little risk of consequences has ensured that Somali pirate groups do not lack for recruits and support. Moreover, some U.S. and international officials suspect that Somali businessmen and international support networks may provide financing, supplies, and intelligence to pirate organizations in exchange for shares of ransom payments. In addition to posing a threat to the lives and welfare of seafarers, piracy imposes a number of economic costs on shippers and on governments. Costs to shippers include ransom payments, damage to ships and cargoes, delays in delivering cargoes, increased maritime insurance rates, rerouting vessels, and hardening merchant ships against attack. According to officials at the Departments of State and Defense, governments incur costs by conducting naval patrols, as well as the costs of transporting, prosecuting, and incarcerating suspected and convicted pirates. The United States’ National Strategy for Maritime Security, issued in 2005, declares that the United States has a vital national interest in maritime security. The strategy recognizes that nations have a common interest in facilitating the vibrant maritime commerce that underpins economic security, and in protecting against ocean-related terrorist, hostile, criminal, and dangerous acts, including piracy.
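The vessel-vulnerability factors cited earlier can be condensed into a simple heuristic. The thresholds (higher risk below 15 knots with access points close to the waterline; no reported boardings of ships proceeding above 18 knots) come from the Office of Naval Intelligence and industry guidance quoted in this report; the three-tier labels and the function itself are our own illustrative simplification, not an official risk model.

```python
def boarding_risk(speed_knots, low_freeboard):
    """Illustrative boarding-risk tier for a vessel transiting the
    high-risk area, based on the speed and freeboard factors cited in
    the report. The tier names are an assumption for illustration."""
    if speed_knots > 18:
        # Industry guidance: no reported pirate boardings of ships
        # proceeding at speeds over 18 knots.
        return "low"
    if speed_knots < 15 and low_freeboard:
        # Slow transit with access points close to the waterline.
        return "high"
    return "elevated"

# A slow, low-freeboard tanker vs. a fast container ship:
print(boarding_risk(12, True))    # high
print(boarding_risk(22, False))   # low
```

In practice, of course, masters rely on the full set of self-protection best practices rather than speed alone; the sketch only captures why slow, low-freeboard vessels are the preferred targets of opportunity.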
The National Strategy for Maritime Security also requires full and complete national and international coordination, cooperation, and intelligence and information sharing among public and private entities to protect and secure the maritime domain. The 2007 Policy for the Repression of Piracy and other Criminal Acts of Violence at Sea states that it is the policy of the United States to “continue to lead and support international efforts to repress piracy and urge other states to take decisive action both individually and through international efforts.” In December 2008, the NSC developed the Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan) to implement the 2005 strategy and the 2007 policy as applied to piracy off the Horn of Africa. The Action Plan establishes three main lines of action for interagency stakeholders to take to repress piracy in collaboration with industry and international partners: (1) prevent pirate attacks by reducing the vulnerability of the maritime domain to piracy; (2) disrupt acts of piracy consistent with international law and the rights and responsibilities of coastal and flag states; and (3) ensure that those who commit acts of piracy are held accountable for their actions by facilitating the prosecution of suspected pirates by flag, victim, and coastal states, and, in appropriate cases, the United States. The NSC—including the Maritime Security Interagency Policy Committee—develops policy for the U.S. response to piracy off the Horn of Africa. The Action Plan directed the Secretary of State and Secretary of Defense to establish a high-level interagency, operational task force—the Counter-Piracy Steering Group—to coordinate, implement, and monitor the actions contained in the Action Plan.
In addition, the NSC directed that the Departments of Defense, Homeland Security, Justice, State, Transportation, and the Treasury and the Office of the Director of National Intelligence contribute to, coordinate, and undertake initiatives in accordance with the Action Plan, subject to available resources. Figure 3 shows the U.S. departments and agencies involved in implementing the three lines of action contained in the Action Plan. The Department of State (State) is involved in efforts to prevent acts of piracy and hold pirates accountable, primarily by leading U.S. interaction with international partners working through the Contact Group, building regional judicial capacity to prosecute suspected pirates, and encouraging states to prosecute when their interests are involved. Additionally, State is involved in efforts to disrupt acts of piracy by tracking ransom payments and following financing issues related to piracy. Within Defense, U.S. Naval Forces Central Command is involved in prevention, interdiction, and prosecution efforts by contributing forces to the Combined Maritime Forces, an international maritime coalition. Within the Combined Maritime Forces, Combined Task Force 151 conducts counterpiracy operations in international waters, including the Red Sea, the Gulf of Aden, the Gulf of Oman, the Arabian Gulf, and the waters off the Somali coast in the Indian Ocean. The Naval Criminal Investigative Service supports and assists interdiction and prosecution efforts by conducting incident investigations, supervising detention of suspected pirates, assisting U.S. and international prosecutions, debriefing released crews, and providing criminal intelligence information. U.S. Africa Command assists in preventing piracy through strategic communication efforts and building partner capacity in regional states and would plan and, if authorized, conduct any land-based military activities in Somalia to interrupt pirate operations. U.S. 
Africa Command also conducts counterpiracy naval patrols and interdiction efforts in its area of responsibility. Treasury is involved in disrupting pirates’ revenue sources by examining pirate financial activity and implementing an executive order to block the assets of certain persons. Justice is involved in holding pirates accountable through prosecution as well as judicial capacity-building in African states. The Coast Guard, under Homeland Security, helps prevent piracy through its work with and regulation of the U.S. shipping industry and assists in interrupting piracy by providing law enforcement units and boarding teams on Navy vessels. Transportation’s Maritime Administration assists with preventing piracy by working with the shipping industry to develop best practices for the industry to protect itself from piracy. In addition, within the intelligence community, the Office of Naval Intelligence, as part of the National Maritime Intelligence Center, provides maritime intelligence assistance. The international community, shipping industry, and international military forces also have been involved in taking steps to prevent and disrupt acts of piracy off the Horn of Africa and to facilitate prosecutions of suspected pirates. Over the past few years, the United Nations Security Council adopted a number of resolutions related to countering piracy in the Horn of Africa region, including Resolution 1816, which authorizes states, in coordination with the Somali Transitional Federal Government, to enter the territorial waters of Somalia for the purpose of repressing acts of piracy and armed robbery at sea and to use all necessary and appropriate means to repress acts of piracy and armed robbery within Somali territorial waters. 
In January 2009, the Contact Group on Piracy off the Coast of Somalia (Contact Group) was formed under the auspices of United Nations Security Council Resolution 1851; it facilitates discussion and coordination of actions among states and organizations to suppress piracy off the coast of Somalia. In addition, in February 2009, organizations representing the interests of ship owners, seafarers, and marine insurance companies worked to publish the first version of voluntary commercial vessel self-protection measures to avoid and respond to pirate attacks, referred to as “best management practices.” In May and September 2009, 10 countries signed the New York Declaration, committing to (1) promulgate the internationally recognized best management practices for self-protection to vessels on their registries and (2) ensure that vessels on their registries have adopted and documented appropriate self-protection measures in their ship security plans when carrying out their obligations under an existing international agreement. The United States also has provided forces and leadership to the Combined Maritime Forces, a coalition of 25 contributing nations working to conduct maritime security operations in the region. In January 2009, the Combined Maritime Forces established Combined Task Force 151, a multinational naval task force with the sole mission of conducting counterpiracy operations in the Gulf of Aden and the waters off the Somali coast in the Indian Ocean. That role previously had been filled by Combined Task Force 150, which continues to perform counterterrorism and other maritime security operations, as it has since 2001. Eleven nations have participated in Combined Task Force 151, and several others have agreed to send ships, aircraft, or both. In addition, the United States has contributed assets to the North Atlantic Treaty Organization’s counterpiracy effort since its inception. 
Its current effort, Operation Ocean Shield, focuses on at-sea counterpiracy operations and offers assistance to regional countries in developing their own capacity to combat piracy. Moreover, as part of the Combined Maritime Forces, the United States also works with the European Union, which conducts counterpiracy operations and escorts World Food Programme vessels delivering humanitarian aid to countries in the region, as well as with independent deployers not part of the coalition that escort vessels and patrol area waters. Figure 4 shows many of the key international and industry partners involved in the response to piracy off the Horn of Africa with whom the United States collaborates and coordinates. More information on international and shipping-industry partners is included in appendix III. According to officials at State and Justice, the United States will consider prosecuting suspected pirates in appropriate cases when U.S. interests are directly affected, as occurred when suspected pirates attacked the U.S.-flagged MV Maersk Alabama and the U.S. Navy ships USS Nicholas and USS Ashland. When suspected pirates are captured by U.S. forces and Justice determines not to prosecute the case in the United States, the United States works with the affected states and regional partners to find a suitable venue for prosecution. In January 2006, 10 suspected pirates were captured by U.S. forces after they hijacked the Indian-flagged dhow Safina al Bisarat and used it to attack the Greek-owned and Bahamian-flagged Delta Ranger. This was the first incident in which U.S. forces captured suspected pirates in the region and transferred them into the custody of Kenya. As of July 2010, the United States had formalized two arrangements with regional states—Kenya and the Seychelles—to facilitate the transfer and prosecution of suspected pirates. 
The United Nations Office on Drugs and Crime, the International Maritime Organization, and individual governments have assisted in developing the judicial capacity of regional states. U.S. agencies have made progress implementing the NSC’s Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan) to lead and support international efforts to counter piracy, but the effort faces several implementation challenges. The United States has made the most progress on working with partners to implement efforts to prevent attacks, such as by encouraging the shipping industry to transit in areas patrolled by international navies. However, the U.S. government has had less success in other areas. For example, the United States has not disrupted pirate bases on shore, and the international community has made only limited progress in disrupting pirates’ revenue and prosecuting suspected pirates. While many stakeholders credit international, industry, and U.S. government efforts with contributing to a decline in the percentage of attacks that resulted in a vessel boarding or hijacking, since 2007 pirates have increased their total number of attacks, become more organized, and greatly expanded their area of operations. Meanwhile, the Action Plan has not been updated to address these changes since it was published in December 2008, and the U.S. government has not evaluated the costs or effectiveness of its counterpiracy efforts or reported on the results of the interagency effort. In collaboration with their international and industry partners, U.S. 
agencies have taken steps across the three lines of action established in the Action Plan to: (1) prevent attacks by reducing the vulnerability of the maritime domain, (2) disrupt acts of piracy in ways consistent with international law and the rights and responsibilities of coastal and flag states, and (3) ensure that those who commit acts of piracy are held accountable for their actions by facilitating the prosecution of suspected pirates. The Action Plan establishes the U.S. role in countering piracy as a collaborative one, seeking to involve all countries and shipping-industry partners with an interest in maritime security. For U.S. agencies, the Action Plan states that, subject to available resources, the Departments of Defense, Homeland Security, Justice, State, Transportation, and the Treasury, and the Office of the Director of National Intelligence will contribute to, coordinate, and undertake initiatives in accordance with the Action Plan. The NSC also establishes some limits on the scope of the plan by focusing on immediate measures to reduce incidents of piracy rather than on the longer-term stabilization of Somalia that the Action Plan asserts is needed to fully repress piracy. Our review focused on the steps U.S. agencies have taken to repress piracy off the Horn of Africa, but given the international nature of the issue, our analysis frequently refers to the related efforts of international and industry partners. We found that, of the 14 total tasks established within the three lines of action in the Action Plan, substantial progress has been made in implementing 4 tasks, the majority of which are related to preventing piracy. The United States has made some progress toward implementing 8 other tasks, including all of the tasks involved in facilitating the prosecution of suspected pirates. 
Little or no progress has been made with regard to 1 task that relates to disrupting acts of piracy, and we did not assess 1 task because agencies decided it would duplicate the efforts of international partners and should not be implemented. Figure 5 summarizes the results of our assessment. For more detailed information about U.S. agencies’ efforts to implement the Action Plan and our analysis of their progress, see appendix II. In collaboration with its international and industry partners, the U.S. government has made substantial progress overall toward implementing Action Plan tasks aimed at preventing acts of piracy. First, the United States has been a key contributor among the 49 countries participating in the Contact Group, including leading a working group on industry self-protection. Second, State, Defense, Coast Guard, and the Maritime Administration, in collaboration with international and industry partners, also have made substantial progress on the second task to encourage commercial vessels to transit high-risk waters through the Maritime Security Patrol Area, which includes the Internationally Recommended Transit Corridor patrolled by international naval forces. Third, the U.S. government has made substantial progress to ensure shippers update U.S.-flagged vessels’ ship security plans to address the pirate threat, and in encouraging the crews of commercial vessels to use industry-developed self-protection measures to prevent piracy, often referred to as “best management practices.” These practices include adding physical barriers to obstruct pirates from boarding a vessel and taking evasive maneuvers to fend off attack. Despite these and other actions to prevent attacks, U.S. government and shipping industry officials stated that ensuring all vessels transiting the area implement best management practices remains a challenge. 
The Coast Guard has developed regulations mandating self-protection measures, but these regulations apply only to U.S.-flagged vessels, which make up a small portion of the total shipping traffic transiting the region. The shipping industry has developed a document outlining self-protection measures, but implementation is voluntary. While government and shipping industry officials lack data on the extent to which best management practices are used, they estimate that about a quarter of vessels are not using one of the easiest and least costly of the best practices, registering their passage with a naval coordination center in the region; this raises questions about the extent to which those vessels have implemented the other practices. Coast Guard, Maritime Administration, and shipping industry officials stated it may be challenging to find additional ways to encourage the remaining vessels to protect themselves from attack. Regarding the Action Plan’s fourth task aimed at preventing piracy, we determined that U.S. agencies have made some progress on strategic communication, described in the Action Plan as a global information campaign to highlight the destructive elements of piracy and the international efforts to coordinate a response to the problem. While U.S. agencies have taken steps in this area, State has yet to finalize a strategic communication plan to coordinate interagency communications efforts to counter piracy. Defense officials stated that the lack of a U.S. presence in Somalia presents additional challenges both to communicating with the Somali population to discourage piracy and to measuring the effectiveness of U.S. communication efforts. 
While the United States and its international partners have made substantial progress overall on the task of providing forces and assets capable of interdicting pirates off the Horn of Africa and have made some progress on the tasks related to seizing and destroying pirate vessels, supporting regional arrangements to counter piracy, and disrupting pirate revenue, U.S. agencies have made little or no progress toward implementing the task related to disrupting and dismantling pirate bases. We found that the U.S. Navy and Coast Guard have made substantial progress contributing assets and leadership to coalition forces patrolling the Gulf of Aden and Indian Ocean. According to Defense officials, typically, more than 30 ships from coalition, European Union, North Atlantic Treaty Organization, and independent forces patrol the region at any given time, with the United States contributing between 4 and 5 ships per day on average. In addition, consistent with the Action Plan, U.S. forces have responded to and successfully interdicted pirate attacks. For example, in April 2009, U.S. forces successfully terminated the hostage situation that occurred when pirates attacked the U.S.-flagged MV Maersk Alabama and kidnapped the vessel’s captain. U.S. forces intervened and freed the captain after killing all but one of the pirates conducting the attack. However, as pirate activity has expanded to the larger Indian Ocean, U.S. and international military officials stated that providing an interdiction-capable force similar to that provided in the Gulf of Aden is not feasible. Though coalition forces developed guidance for improving coordination of forces in the Indian Ocean, Defense officials emphasized that there are not enough naval vessels among all of the combined navies in the world to adequately patrol this expansive area for pirates. Moreover, Defense officials acknowledged that there are other competing U.S. 
national interests in the region, such as the ongoing wars in Iraq and Afghanistan, as well as counterterrorism missions, that compete for the limited naval and air assets used to monitor and gather intelligence for counterpiracy operations. In addition, the U.S. government has made some progress to seize and destroy pirate vessels and equipment, and deliver suspected pirates for prosecution. For example, U.S. forces have contributed to coalition forces that confiscated or destroyed almost 100 pirate vessels. However, U.S. forces have encountered more difficulty in delivering captured suspected pirates to states willing and able to ensure they are considered for prosecution. From August 2008 to June 2010, international forces released 638 of 1,129 suspected pirates, almost 57 percent of those captured, in part because of the difficulty of finding countries willing or able to prosecute them. Further, the United States has made some progress on the task to disrupt pirate revenue. In April 2010, President Obama signed an executive order that blocks the assets of certain persons, including two suspected pirates, who have engaged in acts that threatened the peace, security, or stability of Somalia. However, according to officials at Treasury, the department charged with implementation, the executive order applies only to assets subject to U.S. jurisdiction, and U.S. efforts to track and block pirates’ finances in Somalia are hampered by the lack of government and formal banking institutions there and the resulting gaps in intelligence. The U.S. government has made some progress on the task to support “shiprider” programs and other agreements. The United States has supported some bilateral and regional counterpiracy arrangements, most notably the International Maritime Organization’s effort to conclude a regional arrangement, generally referred to as the Djibouti Code of Conduct. 
This arrangement contains provisions related to information sharing regarding pirate activity among the signatories, reviews of national legislation related to piracy, and provision of assistance between signatories. However, U.S. agencies have made little progress on the second part of this task to develop shiprider programs, in which regional law enforcement officials accompany naval patrols to collect evidence to support successful prosecutions. Justice officials explained that the potential benefits do not warrant the resource investment the programs would require. Specifically, the presence of shipriders would not significantly enhance the ability of regional countries to prosecute suspected pirates. State and Defense officials report that no steps have been taken to disrupt and dismantle pirate bases ashore, in part because the President has not authorized this action, the United States has other interests in the region that compete for resources, and long-standing concerns about security hinder the presence of U.S. military and government officials in Somalia. While the United States has not supported the creation of a Counter-Piracy Coordination Center, as called for in the Action Plan, we did not provide a progress assessment for this task since government and industry officials have stated that existing organizations and coordination centers currently fulfill the incident reporting and monitoring functions, and that establishing a new center would duplicate those efforts. While the United States has made some progress on implementing the tasks established in the Action Plan to hold pirates accountable, the United States and its international partners have only prosecuted a small number of pirates overall for a variety of reasons. As of July 2010, Kenya and the Seychelles were the only regional partners that accepted transfers of suspected pirates from U.S. forces for purposes of prosecution. 
According to officials from State, the reluctance of affected states to prosecute and the limited judicial capacity in the region are barriers to the U.S. government’s ability to make substantial progress on the task of concluding prosecution arrangements. Officials also noted that the facts and circumstances of each encounter differ, and not all cases yield evidence that could be brought to court. As already described, these factors contributed to the release of almost 57 percent of the suspected Somali pirates that international forces encountered from August 2008 to June 2010. The United States has made some progress on the task to support and encourage the exercise of jurisdiction under the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation as a framework to prosecute suspected pirates. For example, the United States has used the convention while prosecuting one pirate in the United States. The U.S. government has also supported and encouraged the use of other applicable conventions and laws by exercising jurisdiction over 11 suspected pirates who attempted an attack on U.S. warships. However, Defense, State, and Justice officials reported that the United States and its international partners have faced significant challenges in encouraging countries to prosecute pirates, due to a lack of political will or judicial capacity, such as an inadequate number of attorneys to prosecute the cases. Lastly, on the task to enhance the capabilities of regional states to accept suspected pirates for prosecution, the U.S. government has provided assistance to several regional states, and the United States has contributed to international efforts to build regional judicial capacity. For example, according to State officials, the United States has worked with the government of Tanzania to allow pirates to be prosecuted there even when cases lack a domestic connection. 
However, regional states continue to have a limited capacity to prosecute suspected pirates and incarcerate convicted pirates. While many stakeholders anecdotally credit international, industry, and U.S. government efforts with preventing and disrupting piracy off the Horn of Africa, from 2007 through the first half of 2010 piracy evolved in many ways—pirates increased their attacks, claimed more hostages and more revenue from the shipping industry’s ransom payments, expanded their area of operations, and became more organized. As figure 6 illustrates, the total number of reported pirate attacks increased from 30 in 2007 to 218 in 2009. These reported attacks include four attempts on U.S.-flagged vessels in 2009, one of which was successful—the attack on the MV Maersk Alabama. However, the rate of successful attacks, or the proportion of total reported attacks that resulted in vessel boardings or hijackings, decreased from around 40 percent in 2008 to 22 percent in 2009. U.S. and international officials interpret this as a sign that the efforts of the shipping industry, governments, and the international naval patrols to prevent or disrupt attacks are having a positive effect on the situation. In addition, in the first 6 months of 2010, total reported attacks declined to about 100, compared with 149 attacks during the first half of 2009. However, other data show that piracy remains a persistent problem. For example, as figure 7 shows, the number of hostages of various nationalities captured by Somali pirates more than quintupled from 2007 to 2009. The total number of hostages includes 21 hostages from the U.S.-flagged MV Maersk Alabama in 2009. Furthermore, in the first half of 2010, pirates took 529 hostages, compared with 510 in the first half of 2009. In addition, pirates have expanded their area of operations, with an increasing number of attacks occurring in the Indian Ocean, an area much larger to patrol than the Gulf of Aden. 
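The trend figures above combine in a way worth making explicit: even though the rate of successful attacks fell, the growth in total attacks kept the absolute number of boardings and hijackings high. The following sketch is illustrative only; it uses the totals and rates as cited in the text, and the derived count and variable names are our own rather than figures from the underlying incident reports.

```python
# Illustrative arithmetic on the reported piracy trend figures.
# Totals and rates are as cited in the text; the derived count is
# an approximation, not a figure from the underlying reports.
attacks_2007 = 30          # total reported attacks, 2007
attacks_2009 = 218         # total reported attacks, 2009
success_rate_2009 = 0.22   # share of 2009 attacks ending in boarding/hijacking

# Approximate number of 2009 attacks that ended in a boarding or hijacking:
successful_2009 = round(attacks_2009 * success_rate_2009)
print(successful_2009)  # about 48

# Total attacks grew roughly sevenfold over the period:
growth = attacks_2009 / attacks_2007
print(f"{growth:.1f}x")  # about 7.3x
```

In other words, a falling success rate coexisted with a roughly sevenfold increase in total attacks between 2007 and 2009, which is why officials could cite both progress and a persistent problem.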
By the end of 2008, when the NSC issued its Action Plan, approximately 83 percent of the 111 reported pirate attacks off the Horn of Africa that year took place in the Gulf of Aden, an area of just over 100,000 square miles, with the remainder occurring off the coast of Somalia. However, just a year later, in 2009, only 53 percent of the 218 total attacks occurred in the Gulf of Aden, as Somali pirates expanded their area of operations to the broader Indian Ocean. Pirates now threaten an area of nearly 2 million square nautical miles in the Somali Basin, Gulf of Aden, and Northern Arabian Sea. Figure 8 shows the number and location of pirate attacks off the Horn of Africa reported to the International Maritime Bureau in 2007, 2008, 2009, and the first half of 2010. While the Action Plan cites attacks as far as 450 miles from Somalia’s coast, in April 2010 the International Maritime Bureau reported that pirates had increased their capability to attack and hijack vessels more than 1,000 nautical miles from Somalia by using mother ships, from which they launch smaller boats to conduct the attacks. International officials stated that piracy in the Indian Ocean is more challenging to counter because of the great expanse of water and requires a different approach than that used in the Gulf of Aden. One U.S. Navy analysis estimated that 1,000 ships equipped with helicopters would be required to provide the same level of coverage in the Indian Ocean that is currently provided in the Gulf of Aden—an approach that is clearly infeasible. Although U.S. and international officials have expressed concern that international support networks may be providing pirate groups with financing, supplies, and intelligence in return for shares of ransom payments, as of March 2010 the intelligence community assessed that Somali pirates are not receiving funding or coordination from non-U.S. foreign sources outside Somalia, aside from ransom payments. 
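The geographic shift described above can be restated as rough counts. This is an illustrative calculation only: the yearly totals and percentage shares are as cited in the text, while the derived per-area counts are approximations we computed, not report figures.

```python
# Rough illustrative breakdown of where reported attacks occurred, using
# the totals and shares cited in the text; derived counts are approximate.
attacks_2008, gulf_share_2008 = 111, 0.83
attacks_2009, gulf_share_2009 = 218, 0.53

gulf_2008 = round(attacks_2008 * gulf_share_2008)  # ~92 attacks in the Gulf of Aden
gulf_2009 = round(attacks_2009 * gulf_share_2009)  # ~116 attacks
outside_2008 = attacks_2008 - gulf_2008            # ~19 attacks elsewhere off Somalia
outside_2009 = attacks_2009 - gulf_2009            # ~102 attacks, much of it farther out

print(outside_2008, outside_2009)
```

Attacks outside the Gulf of Aden thus rose from roughly 19 to roughly 102 in a single year, consistent with the expansion into the broader Indian Ocean described above.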
Defense supports FBI and Treasury efforts to monitor whether there is U.S.-based support for piracy. Figure 9 shows that from 2007 to 2009 the estimated total ransom payments paid to pirates by the shipping industry increased from about $3 million to $74 million, with the average ransom paid per vessel increasing from $300,000 to more than $2 million. A December 2008 United Nations report revealed characteristics of structural organization in piracy operations, including evidence of pirate leaders and financiers who supply the equipment and provisions for other pirates to carry out the attacks and evidence that ransom payments are distributed according to organizational roles. In addition, State, Defense, and FBI officials observed that piracy off the Horn of Africa has become more organized, and Defense officials said that gathering more information about pirate organizations that could be used to identify pirate leaders would be beneficial. FBI officials noted that pirate organizations lack the sophistication associated with other types of organized crime, such as the American mafia. These officials stated that the FBI continues to investigate potential ties Somali pirates may have to individuals outside of Somalia. Moreover, U.S. officials have expressed repeated concerns that funds generated by piracy have the potential to attract extremists or terrorists located in the region to become involved in piracy. Treasury, Justice, State, and Defense are monitoring piracy on an ongoing basis to determine if there is a link between pirates and extremist organizations, but as of July 2010 had found no credible link. The Action Plan’s objective is to repress piracy in the interest of the global economy, among other things, but the effectiveness of U.S. resources applied to counterpiracy is unclear because the interagency group responsible for monitoring the Action Plan’s implementation was not specifically charged with tracking the cost of U.S. 
activities or systematically evaluating the relative benefits or effectiveness of the Action Plan’s tasks, and neither the interagency steering group nor the federal agencies involved has performed these functions. Our prior work has shown that federal agencies engaged in collaborative efforts need to evaluate their activities to identify areas for improvement. Moreover, as pirates have adapted their tactics, the Action Plan has not been revised. The U.S. government is not systematically tracking the costs, benefits, or effectiveness of its counterpiracy activities to determine whether its investment has achieved the desired results or should be revised. According to officials at State and Defense, the interagency Counter-Piracy Steering Group, which is jointly led by these two agencies and charged with monitoring implementation of the Action Plan, has not been systematically monitoring the cost or evaluating the benefits or effectiveness of U.S. counterpiracy efforts. In commenting on a draft of this report, Defense stated that the interagency group was not performing these functions because it was not specifically charged to do so in the Action Plan. Instead, State officials told us the group primarily provides a forum for U.S. agencies to coordinate efforts before multilateral Contact Group meetings or to discuss ongoing initiatives such as the development of the April 2010 executive order on Somalia. Officials from Justice, Treasury, the Coast Guard, and State reported that the NSC’s Maritime Security Interagency Policy Committee, a high-level interagency group focused on maritime issues, generally tracks U.S. progress toward implementing the Action Plan and discusses status updates on piracy provided by the various agencies. However, the officials were not aware of systematic efforts to track the costs or evaluate the benefits or effectiveness of U.S. counterpiracy activities. Table 1 describes selected costs we identified that may be incurred by U.S. 
agencies for counterpiracy efforts. While most of the agencies involved had not systematically tracked the cost of their counterpiracy efforts, Defense developed a partial estimate. Defense officials estimated that U.S. Central Command’s counterpiracy operations for fiscal year 2009 totaled approximately $64 million, covering costs associated with 773 U.S. Navy ship steaming days, flight hours to support ships operating in the area, port costs, and costs related to detaining and delivering suspected pirates to proper authorities. However, officials said this estimate does not include costs incurred for counterpiracy operations by other combatant commands, such as U.S. Africa Command. In addition, Defense officials noted that the deployment of naval forces in support of counterpiracy operations takes ships, crews, aircraft, intelligence assets, and other forces away from other global missions, such as counterterrorism and counternarcotics efforts. In addition to not tracking the costs of U.S. counterpiracy efforts, U.S. agencies also are not evaluating the benefits of those efforts to U.S. interests. While the Action Plan discusses the United States’ national security interest in maintaining freedom of navigation of the seas in order to facilitate vibrant maritime commerce, the extent to which counterpiracy benefits U.S. interests and maritime commerce has not been evaluated. The Maritime Administration reports that piracy may impose costs on the maritime industry for protecting vessels from being attacked or hijacked. For example, industry may incur costs for rerouting ships to avoid pirate-infested waters, higher insurance premiums, or enhancing vessel security by hiring private security guards or installing nonlethal deterrent equipment. Ultimately, according to the Maritime Administration, any costs incurred would be passed along to the taxpayer and the consumer. However, agencies are not systematically evaluating the extent to which the U.S. 
investment in counterpiracy operations is benefiting maritime commerce or weighing these benefits against the costs incurred to conduct counterpiracy operations. In addition, data show that the number of U.S. ships operating in the region is low. The Coast Guard reports that, at any given time, there are about six to eight U.S.-flagged vessels operating in the region and the chance of a commercial vessel being attacked by pirates in the Gulf of Aden is estimated to be less than 1 percent. Furthermore, according to the Maritime Administration, vessels carrying commerce to the United States are less susceptible to piracy given their high speed. Moreover, in 2009, the Congressional Research Service reported that despite the increased threats and estimates of rising costs associated with piracy off the Horn of Africa, the effect on the insurance industry appeared negligible and U.S. insurance rates had not changed. The Action Plan also establishes objectives related to repressing piracy and reducing incidents of piracy, but it does not define measures of effectiveness that can be used to evaluate progress toward reaching those objectives, or assess the relative benefits or effectiveness of the Action Plan’s tasks to prevent, disrupt, and prosecute acts of piracy. Further, the Action Plan does not specify what information the NSC or other designated interagency groups should use to monitor or evaluate to determine progress, or assess benefits or effectiveness. Agency officials have cited several challenges associated with measuring the effectiveness of U.S. efforts, including the complexity of the piracy problem, difficulty in establishing a desired end-state for counterpiracy efforts, and difficulty in distinguishing the effect of U.S. efforts from those of its international and industry partners. 
Nevertheless, U.S., international, and industry officials we spoke with attributed the decrease in the pirates' rate of successful attacks in 2009, and their shift to the Indian Ocean, to U.S. and international prevention and interdiction efforts. We previously have reported that performance information is essential to the ability of decision makers to make informed decisions, and that specifying performance metrics can be one tool in evaluating the effectiveness of government efforts in a changing environment. Identifying measures of effectiveness and systematically evaluating the effectiveness of agency efforts could assist the U.S. government in determining the costs and benefits of its activities, ensuring that resources devoted to counterpiracy efforts are being targeted most effectively, and determining whether adjustments to plans are required. Without information on the magnitude of U.S. resources devoted to counterpiracy operations, or the benefits or effectiveness of its actions, the U.S. government is limited in its ability to weigh its investment of resources to counter piracy off the Horn of Africa against its other interests in the region. The lack of systematic evaluation of costs, benefits, and effectiveness also makes it difficult for agencies to target and prioritize their activities to achieve the greatest benefits. We have previously reported that agencies should identify the human, information technology, physical, and financial resources needed to initiate or sustain a joint effort among multiple agencies, as one means of enhancing interagency collaboration. In addition, a discussion of resources, investments, and risk management is an important characteristic of national strategies that can enhance their usefulness to resource and policy decision makers and resource managers. 
Moreover, despite the expansion of pirate attacks over a vastly larger geographic area, increased ransom demands and payments, and better organized pirate activities since the Action Plan was written, according to U.S. government officials, there are no plans to reassess the Action Plan in order to determine whether it should be revised. Currently, the Action Plan does not specifically address how to counter pirates in the broader Indian Ocean or what methods to use to meet its objective of apprehending leaders of pirate organizations and their financiers. U.S. agencies have reported taking some steps to respond to the changing methods and location of pirate attacks. For example, the Navy issues weekly updates on piracy incidents to inform mariners and naval forces; in 2010, these updates cautioned that pirates are operating at considerable distances off the coast of Somalia. Defense officials also have worked with coalition partners to develop a coordination guide for operations in the Somali Basin and have described measures they have taken to interdict and destroy pirate mother ships. However, according to Coast Guard, Treasury, and Justice officials, as of April 2010, the Maritime Security Interagency Policy Committee affirmed the overall course of U.S. counterpiracy efforts and did not identify a need to modify the current approach to countering piracy. Furthermore, the Action Plan contains tasks, such as those to create a Counter-Piracy Coordination Center and support shiprider programs, that are no longer being pursued by U.S. agencies because they have determined that these tasks are not needed or would not be beneficial. We have established in prior work that federal efforts are implemented in dynamic environments in which needs must be constantly reassessed, and that agencies can enhance and sustain collaborative efforts by, among other things, developing mechanisms to report on results. 
By continually evaluating its approach to countering piracy off the Horn of Africa and reporting on results of its counterpiracy efforts to key stakeholders, the United States may be in a better position to hold agencies accountable for results and achieve its ultimate goal of repressing piracy.

U.S. agencies have generally collaborated well with international and industry partners to counter piracy, but they could implement other key collaborative practices for enhancing and sustaining collaboration among U.S. interagency partners. According to U.S., international, and industry stakeholders, U.S. agencies have collaborated effectively with international and industry partners through mechanisms and organizations to counter piracy off the Horn of Africa. The United States also has collaborated well with international military partners and industry groups. Within the U.S. government, while agencies have implemented some collaborative practices, other practices could be implemented to further enhance collaboration. The U.S. government has not made substantial progress on those Action Plan tasks that involve multiple agencies and those in which the NSC has not clearly identified roles and responsibilities or coordinated with U.S. agencies to develop joint guidance.

U.S. agencies, primarily State and Defense, have collaborated with international partners through two new organizations established to counter piracy off the Horn of Africa: the Contact Group on Piracy off the Coast of Somalia (Contact Group) and the Shared Awareness and Deconfliction meetings. As previously discussed, the Action Plan directed U.S. agencies to establish and maintain a Contact Group, which serves as an international forum for countries contributing to the counterpiracy effort to share information. State orchestrates U.S. participation in the Contact Group, coordinating with officials from Defense, Justice, Homeland Security, Transportation, and Treasury. 
As part of the Contact Group, the United States has participated in six plenary meetings with international partners in counterpiracy efforts since January 2009. These meetings have facilitated international military coordination, provided guidance to international efforts, and established a trust fund to support counterpiracy efforts. As part of the Contact Group's efforts, the Coast Guard and the Maritime Administration cochair a working group focusing on coordinating with the shipping industry, which has reviewed and updated best management practices for industry self-protection, encouraged continued communication between industry and government organizations such as the Maritime Security Centre–Horn of Africa, and is developing guidance for seafarer training regarding pirate attacks. In addition, officials told us that State has participated in the working group on strategic communication and assisted in developing draft strategic communication documents considered by the group. The United States also has worked to establish collaborative organizations, share information, and develop joint guidance for international military partners working to counter piracy. As the leader of the Combined Maritime Forces, in 2008 the U.S. Navy, along with other international partners, established the Shared Awareness and Deconfliction meetings, which are intended to provide a mechanism for militaries active in the region to share information on their movements and make efficient use of the limited naval assets patrolling pirate-infested waters. We observed one of these meetings, which occur every 4 to 6 weeks and include representatives from the European Union, the North Atlantic Treaty Organization, and the shipping industry, as well as nontraditional partners from countries such as Russia and China. According to U.S. 
and international officials, these meetings have improved coordination and led to the creation of the Internationally Recommended Transit Corridor within the Maritime Security Patrol Area, as well as coordination guides for military operations in the Gulf of Aden and the Somali Basin. The coordination guides provide joint guidance to participating international forces intended to ensure the most effective use of the military assets in the region by outlining shared practices and procedures. The United States has also worked to support information-sharing efforts on investigative and prosecutorial techniques. In July 2010, the Naval Criminal Investigative Service hosted a workshop on counterpiracy investigations that was attended by more than 50 representatives from U.S. and international military, law enforcement, and industry organizations. According to Defense officials, this workshop facilitated development of a draft investigators' manual designed to help standardize counterpiracy operations.

U.S. agencies, primarily the Coast Guard and the Maritime Administration, have worked with industry partners to facilitate collaborative forums, share information, and develop joint guidance for implementing counterpiracy efforts. Industry partners play an important role in preventing and deterring pirate attacks since they are responsible for implementing self-protection measures on commercial vessels. According to officials, in late 2008 the Coast Guard and the Maritime Administration encouraged industry groups to develop best practices for industry to counter piracy and hosted several meetings with U.S. and international industry groups. According to U.S. and shipping industry officials, these meetings resulted in the industry-published best management practices guide. This document has provided critical guidance to ship owners and operators on how to protect themselves from pirate attacks. 
In addition, for those ship owners who choose or are required to carry armed security teams, the Coast Guard and State have worked to identify viable methods for doing so in accordance with applicable U.S., international, and port-state laws. The Coast Guard has communicated methods for taking arms on ships, and the responses from international partners, to the shipping industry through two port security advisories. As the U.S. agency responsible for implementing national and international maritime security regulations on U.S.-flagged vessels, the Coast Guard also has hosted four collaborative forums with industry partners to address piracy issues since April 2009. These meetings have provided a forum to discuss changes required to ship security plans to address the piracy threat, the evolving piracy situation, and U.S. efforts to assist in protecting U.S.-flagged vessels. For example, the Coast Guard facilitated a meeting with industry representatives and officials from State and Treasury in April 2010 to discuss the executive order on Somalia, which has implications for the shipping industry's ability to pay ransoms to secure the release of captive crews. Further, the Maritime Administration developed training courses to inform vessel crews about how to help prevent piracy and steps to take if taken hostage. In addition, the Maritime Administration and the Military Sealift Command have created a new collaborative mechanism for working with industry in the form of Anti-Piracy Assistance Teams. When requested by the owner of a U.S.-flagged vessel, a team consisting of Maritime Administration and Naval Criminal Investigative Service personnel will assess a ship's security and offer advice on ways to improve it. When the teams visit a vessel, Maritime Administration officials meet with company officials to discuss their security efforts and document these efforts so they can be shared with other ship operators. Lastly, U.S. 
Central Command has used the Maritime Liaison Office based in Bahrain as an additional mechanism to exchange information between naval forces and industry. This office serves as a conduit for information focused on the safety of shipping and conducts outreach with the shipping industry, such as through newsletters encouraging the use of self-protection measures.

U.S. government agencies have implemented some collaborative practices in working with interagency partners to counter piracy but could enhance efforts where less progress has been made by incorporating other key practices. Several key practices that can enhance interagency collaboration include developing an overarching strategy, establishing collaborative mechanisms to share information with partners, assigning roles and responsibilities, and developing joint guidance to implement interagency efforts. Consistent with key practices, the NSC established its Action Plan, which serves as an overarching strategy to guide U.S. interagency efforts and provides a framework for interagency collaboration. The Action Plan creates an interagency task force that is intended to coordinate, implement, and monitor the actions contained in the plan. In addition, the U.S. departments and multiple component agencies involved in counterpiracy efforts have implemented another key practice—using collaborative organizations to share information. Collaborative organizations that provide adequate coordination mechanisms to facilitate interagency collaboration and achieve an integrated approach are particularly important when differences exist between agencies, because such differences can impede collaboration and progress toward shared goals, potentially wasting scarce resources and limiting effectiveness. The NSC's committees of agency secretaries, deputy secretaries, and assistant secretaries provide existing forums for discussing and coordinating interagency efforts, and officials reported that these committees discuss counterpiracy efforts. 
Additionally, as called for in the Action Plan, State and Defense established the Counter-Piracy Steering Group, which includes representatives from the U.S. departments and component agencies involved in counterpiracy efforts. Furthermore, in certain circumstances, such as a pirate attack on a U.S.-flagged vessel, the U.S. government uses the existing Maritime Operational Threat Response process, which is outlined in an October 2006 plan that is part of the National Strategy for Maritime Security, to facilitate a discussion among U.S. agencies and decide on courses of action. For example, when the MV Maersk Alabama was attacked in April 2009, facilitators used established protocols to activate the process and bring together the appropriate government officials. Figure 10 shows U.S. authorities responding to the MV Maersk Alabama incident in 2009. According to U.S. and Maersk officials involved, over the course of several meetings—some of which included Maersk representatives—U.S. officials decided on actions to take in response to the attack, resulting in the release of a U.S. merchant marine captain who had been taken hostage by pirates. U.S. and Maersk officials considered the outcome of the Maersk Alabama incident to be a success. Officials from Defense, State, the Coast Guard, the Maritime Administration, and Justice have reported that this process has been an effective tool in responding to this and other piracy incidents. In addition, the Coast Guard established a new collaboration mechanism—a weekly interagency conference call—to coordinate operational efforts among the agency partners working to counter piracy, which we observed during this review. Although the NSC and U.S. agencies have taken these collaborative steps, the NSC could incorporate two other key practices—assigning roles and responsibilities and developing joint implementation guidance—to further enhance interagency collaboration in counterpiracy efforts. 
As of July 2010, the NSC had assigned roles and responsibilities for implementing only 1 of the 14 Action Plan tasks. The Action Plan recognizes that, consistent with other U.S. mission requirements, the U.S. Navy and the Coast Guard provide persistent interdiction through their presence and can conduct maritime counterpiracy operations. In addition, the Action Plan states that those forces shall coordinate counterpiracy activities with other forces operating in the region to the extent practicable and sets out a number of specific actions to be taken in various piracy situations. Although the Action Plan states that the Departments of Defense, Homeland Security, Justice, State, Transportation, and the Treasury, and the Office of the Director of National Intelligence shall contribute to, coordinate, and undertake initiatives in accordance with the Action Plan, the NSC did not clearly identify roles and responsibilities for specific agencies to ensure the implementation of the other 13 tasks in the Action Plan. Establishing roles and responsibilities can clarify which agencies will lead or participate in activities, help agencies organize their joint and individual efforts, and facilitate decision making. Agencies could enhance collaboration by developing joint guidance to implement and coordinate actions on several Action Plan tasks. Joint guidance helps ensure that agencies involved in collaborative efforts work together efficiently and effectively by establishing policies, procedures, information-sharing mechanisms, and other means to operate across agency boundaries. Effective joint guidance also addresses how agency activities and resources will be aligned to achieve goals. 
In the absence of clearly identified roles and responsibilities and joint implementation strategies, agencies involved in countering piracy have made comparatively more progress in implementing those Action Plan tasks that fall firmly within one agency's area of expertise, such as those to establish a Contact Group, update ship security plans, and provide an interdiction-capable presence, than they have on those tasks in which multiple agencies may be involved. For example, State, which has the authority and capability to work with international partners in establishing the Contact Group, has made substantial progress toward implementing that task. Furthermore, the Action Plan calls for commercial vessels to review and update their ship security plans in order to prevent and deter pirate attacks. Officials explained that because the Coast Guard has responsibility for enforcing U.S.-regulated commercial-vessel compliance with maritime security requirements, the agency took the lead on implementing this task and has made substantial progress. Similarly, Defense has primary responsibility for providing a persistent interdiction-capable presence in the region and has made substantial progress as lead on that task. In contrast, there are several tasks in the Action Plan for which multiple agencies have relevant authorities, capabilities, or interests, and on which less progress has been made. The NSC did not identify roles and responsibilities for implementing these tasks, and officials have acknowledged that the agencies have not developed joint guidance to ensure their efforts work together efficiently and effectively. For example, the NSC identified efforts related to developing a strategic communication strategy, disrupting pirate revenue, and holding pirates accountable as essential to implementing the Action Plan. 
Strategic communication: The Action Plan calls for the United States to lead and support a global public information and diplomatic campaign to highlight, among other things, the international cooperation undertaken to repress piracy off the Horn of Africa, as well as piracy's destructive effects on trade, human and maritime security, and the rule of law. In addition, according to the Action Plan, any strategic communication strategy must also convey concerns about the risks associated with paying ransom demands. Multiple agencies are involved in communicating with various audiences about piracy. State communicates with international partners about international cooperation; Defense communicates with military partners about international military cooperation and with African audiences to discourage piracy; the Naval Criminal Investigative Service communicates with U.S. and international law enforcement partners about law enforcement, investigative, and analytical cooperation; and the Coast Guard and the Maritime Administration communicate with the shipping industry about self-protection measures and ransom concerns. However, there is no governmentwide strategic communication plan in place to guide agency efforts, optimize effects, and enhance the achievement of goals. State has drafted a governmentwide counterpiracy strategic communication plan for interagency review, but according to State officials, as of July 2010 the department was still awaiting comments from interagency partners other than Treasury and did not have an estimated date for when the plan would be finalized. Meanwhile, agencies have taken varying approaches to strategic communication. Defense has developed a classified plan for its activities, and according to Coast Guard officials, the Coast Guard suspended its effort to develop a plan upon learning that State was drafting a governmentwide plan. As a result, U.S. 
agencies have not implemented all the strategic communication efforts called for by the Action Plan, and it is not clear that the agencies' efforts are coordinated or as effective as possible in communicating the intended messages about piracy.

Disrupting pirate revenue: According to the Action Plan, the goal for disrupting pirate revenue is to trace ransom payments and apprehend leaders of pirate organizations and their enablers. Multiple agencies are involved in collecting information on pirate finances. Justice collects information on financial assets entering the United States related to piracy. According to officials, Treasury examines financial activities and reviews intelligence, law enforcement, and publicly available information to map illicit financial networks and to determine appropriate action, including potential designation of an individual or entity pursuant to the April 2010 executive order on Somalia. State officials described their work with international partners to gather information on illicit financial networks, while Defense officials told us they collect intelligence on pirate financial activities by questioning captured pirate suspects. However, the NSC did not clearly identify any agency with specific responsibility for disrupting pirate revenue. As a result, officials at Justice, State, and Defense agree that the information their agencies gather on pirate finances is not being systematically analyzed, and it is unclear whether any agency is using it to identify and apprehend pirate leaders or financiers. In addition, though Justice, State, and Defense officials reported that Somali piracy exhibits characteristics of international organized crime, pirate attacks prosecuted by the United States currently are not investigated by the FBI's Organized Crime Section but instead by the Violent Crimes Section. 
In the absence of clearly identified roles and responsibilities, and with competing priorities, officials indicated that agencies have not taken the initiative to develop joint guidance to ensure these disparate efforts work together efficiently and effectively. Similarly, officials acknowledged there is no supporting plan or joint guidance to direct U.S. interagency efforts to collect and analyze criminal intelligence on pirates. However, State is in the process of creating a Counter-piracy Finance Working Group intended to facilitate closer interagency coordination of efforts to combat the financial flows and support networks of piracy off Somalia. According to Justice officials, as of July 2010, the United States had not apprehended or prosecuted the leaders of any pirate organizations or their enablers as called for in the Action Plan.

Facilitating prosecution of suspected pirates: The Action Plan contains several tasks related to facilitating the prosecution of suspected pirates by parties with an interest in prosecution, but it does not identify clear roles and responsibilities for U.S. agencies needed to ensure implementation of these tasks. In some cases, U.S. officials said roles are apparent where an agency's mission aligns with the Action Plan's tasks, such as State's diplomatic work with regional partners to conclude prosecution arrangements. However, the lack of defined roles and joint guidance to implement U.S. efforts to facilitate prosecutions poses challenges when each agency's role is less clear. For example, absent defined roles and responsibilities and interagency guidance, U.S. officials explained that they had to dedicate time during a high-level interagency meeting of the Maritime Security Interagency Policy Committee to arrange details, including cost sharing, for the transportation of suspects after the spring 2010 pirate attacks on the USS Ashland and USS Nicholas. 
State officials told us that prior to these attacks the U.S. government had limited experience with the prosecution of Somali pirates and had not established the necessary interagency procedures for transferring suspects and sharing costs among the agencies involved. By enhancing interagency collaboration, the NSC can reduce the risk of leaving gaps in its counterpiracy efforts or the risk that agency efforts may overlap, which could waste resources that could be applied to combat other threats to national security, such as terrorism. Clarifying roles and responsibilities and developing joint implementing guidance could also help agency officials—who must balance their time and resources among many competing priorities—more fully and effectively carry out their roles in helping to repress piracy and avoid duplication of effort.

Given that the President identified piracy as a threat to U.S. national security interests and that it is a complex problem that affects a variety of stakeholders, the U.S. government has taken a collaborative approach in its counterpiracy plans. The U.S. government has taken many steps to implement the Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan) but still faces a number of challenges to meeting the Action Plan's objective of repressing piracy, including inherent limits on its ability to influence industry and international partners and to persuade other states to consider prosecuting suspected pirates. In addition, the United States must address the problem of piracy in an environment in which counterpiracy efforts compete with other high-priority U.S. interests in the region, and the NSC acknowledges that longer-term efforts to stabilize Somalia are needed to fully address the root causes of piracy. In the face of such challenges, the NSC's Action Plan provides a roadmap for federal departments and agencies to follow in implementing efforts to counter piracy. However, the U.S. 
government is not tracking the costs, benefits, or effectiveness of its counterpiracy activities and thus lacks information needed to weigh resource investments. In addition, without a systematic evaluation of interagency efforts to compare the relative effectiveness of various Action Plan tasks, key stakeholders lack a clear picture of what effect, if any, U.S. efforts have had. Establishing performance measures or other mechanisms to judge progress, and evaluating performance information, could provide U.S. government stakeholders with more specific information to update the Action Plan and better direct the course of U.S. government plans and activities to repress piracy. Without updating U.S. government plans and efforts to reflect performance information and the dynamic nature of piracy, the U.S. government is limited in its ability to ensure that efforts and resources are being targeted toward the areas of greatest national interest. Federal agencies have made great strides in collaborating with each other and with international and shipping-industry partners, but they could benefit from greater specificity in the Action Plan about their roles and responsibilities and from the development of joint implementing guidance, especially with regard to those Action Plan tasks that require a variety of stakeholders to implement. Without specific roles and responsibilities for essential aspects of its Action Plan—including developing a U.S. government strategic communication plan, disrupting pirate revenue, or facilitating prosecution of suspected pirates—U.S. agencies have either developed their own approaches to these tasks or developed no approach at all. In addition, developing joint implementing guidance could help agencies work together more effectively and potentially improve progress toward U.S. goals. To improve U.S. 
government efforts to implement the Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan), enhance interagency collaboration, provide information to decision makers on results, and better target resources, we recommend that the Special Assistant to the President for National Security Affairs, in collaboration with the Secretaries of Defense, Homeland Security, Justice, State, Transportation, and the Treasury, take the following four actions: reassess and revise the Action Plan to better address evolving conditions off the Horn of Africa and their effect on priorities and plans; identify measures of effectiveness to use in evaluating U.S. counterpiracy efforts; direct the Counter-Piracy Steering Group to (1) identify the costs of U.S. counterpiracy efforts, including operational, support, and personnel costs, and (2) assess the benefits and effectiveness of U.S. counterpiracy activities; and clarify agency roles and responsibilities and develop joint guidance, information-sharing mechanisms, and other means to operate across agency boundaries for implementing key efforts such as strategic communication, disrupting pirate revenue, and facilitating prosecution.

We provided a draft of this report for review to the Departments of Defense, Homeland Security, Justice, State, Transportation, and the Treasury, and the National Security Council (NSC). The NSC did not provide comments on the report or our recommendations. Defense provided written comments to clarify facts in the report, which are reprinted in their entirety in appendix V. Defense, Homeland Security, Justice, State, Transportation, and Treasury provided technical comments, which we incorporated as appropriate. In written comments, Defense stated that the department does not agree that using the percentage of seized suspected pirates who were delivered for prosecution is an appropriate measure of program success. 
Defense also commented that the metric does not take into account that it is up to individual countries within the coalition to determine the validity of evidence and decide whether to prosecute. We did not state that the percentage of suspects delivered for prosecution was an appropriate measure of program success. In the draft report, we stated that the Action Plan establishes objectives related to repressing piracy and reducing incidents of piracy but does not define measures of effectiveness that can be used to evaluate progress toward reaching those objectives. In the absence of defined measures of effectiveness, we made qualitative assessments of U.S. government progress in implementing the Action Plan tasks by reviewing program documents, analyzing data, and interviewing agency officials. We determined that the U.S. government had made some progress on the Action Plan task to seize and destroy pirate vessels and related equipment and deliver captured suspected pirates for prosecution. In response to Defense's comments, we have modified the report to explicitly recommend that the NSC identify measures of effectiveness to use in evaluating U.S. counterpiracy efforts. We also revised the summary text contained in figure 5 for this line of action to better incorporate some of the prosecution challenges discussed in appendix II and more fully address the rationale for our assessment. Defense also provided comments to better depict the contributions of the Naval Criminal Investigative Service to counterpiracy operations, which we incorporated throughout the report. Finally, Defense stated that U.S. Special Operations Command does not conduct counterpiracy operations and noted in its technical comments that it is a force provider to other combatant commands, which are responsible for conducting counterpiracy operations. As a result, we modified the draft to eliminate reference to U.S. Special Operations Command as incurring costs for counterpiracy operations.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 7 days from its date. At that time, we will send copies of this report to the Special Assistant to the President for National Security Affairs; the Attorney General; the Secretaries of Defense, Homeland Security, State, Transportation, and the Treasury; and interested congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact either John H. Pendleton at (202) 512-3489 or [email protected] or Stephen L. Caldwell at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To address our objectives, we analyzed data, reviewed documentation, and interviewed officials from the U.S. government agencies that the National Security Council (NSC) specifically tasked to contribute to, coordinate, and undertake initiatives in accordance with NSC’s 2008 Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan). We met with and gathered information from officials representing the various agencies tasked with implementing the Action Plan and who participate on the committees within the NSC. We also conducted work with international and industry partners involved in the response to piracy off the Horn of Africa. To assess the extent to which the U.S. government has made progress in countering piracy off the Horn of Africa and the challenges it faces, we reviewed the Action Plan, the 2007 Policy for the Repression of Piracy and other Criminal Acts of Violence at Sea, the 2005 National Strategy for Maritime Security, relevant U.S. 
laws, United Nations Security Council resolutions on piracy off the Horn of Africa, as well as our prior work related to Somalia, maritime security, interagency collaboration, and combating illicit financing. To assess the implementation status of the actions called for in the Action Plan, we reviewed program documents, analyzed data, and interviewed agency officials. Our assessments are based on data from multiple sources, are qualitative in nature, and are derived from consensus judgments. We assessed “substantial progress” for those tasks where all components specified by the Action Plan were implemented; “some progress” for tasks where components were partially implemented or agencies had taken steps toward implementation; and “little or no progress” where agencies had made minimal or no effort toward implementing the components of the task. We provided a “not applicable” assessment for one task in the Action Plan that agency officials and our analysis revealed to have been overtaken by events and no longer relevant for U.S. counterpiracy efforts. We provided a summary of our progress assessments to the agencies and incorporated their comments as appropriate. We also reviewed our prior work related to results-oriented government and evaluated the extent to which the interagency Counter-Piracy Steering Group charged with coordinating, implementing, and monitoring the actions in the NSC plan followed select key practices for achieving results including monitoring and evaluating efforts, using performance information to improve efforts and revise plans as needed, and reporting on results. 
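The three-tier rating scheme described above amounts to a simple decision rule over each task's components. As a purely illustrative sketch—the function and its inputs are hypothetical and are not part of the report's actual methodology, which relied on consensus judgments rather than a mechanical count—the rubric could be encoded as:

```python
def assess_progress(total_components: int,
                    implemented: int,
                    overtaken_by_events: bool = False) -> str:
    """Hypothetical encoding of the report's qualitative rating rubric.

    'substantial progress' when all components of an Action Plan task were
    implemented; 'some progress' when components were partially implemented
    or steps toward implementation were taken; 'little or no progress' when
    minimal or no effort was made; 'not applicable' when the task was
    overtaken by events.
    """
    if overtaken_by_events:
        return "not applicable"
    if implemented >= total_components:
        return "substantial progress"
    if implemented > 0:
        return "some progress"
    return "little or no progress"

# Illustrative only: a task with 3 components, 1 of them implemented
print(assess_progress(3, 1))  # some progress
```

In practice the report's assessments were qualitative and drew on multiple data sources, so this sketch captures only the category boundaries, not how implementation status was judged.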
In addition, we met with international and industry partners involved in developing best practices for protecting ships from pirate attack, working with the international Contact Group, and participating in naval patrols off the Horn of Africa to gain their perspectives on the challenges and progress in countering piracy, the effectiveness of counterpiracy actions, the implementation of best management practices for protecting ships, and how conditions off the Horn of Africa are evolving. To gain insight on trends in pirate activity since the United States and coalition partners began counterpiracy operations, we obtained and analyzed data on incidents of piracy off the Horn of Africa from 2007 through June 2010 from the International Chamber of Commerce's International Maritime Bureau. The International Maritime Bureau operates a Piracy Reporting Center that collects data on pirate attacks worldwide. According to bureau officials, there are some limitations with International Maritime Bureau data because the bureau relies on ship officials to provide the information, which can vary, and some information is not provided because of sovereignty issues regarding investigations. However, we reviewed the internal controls and measures the bureau uses to protect the reliability and accuracy of its data on pirate attacks and attempted attacks, and we discussed the reliability of these data with international, industry, and government subject-matter experts involved in counterpiracy operations and determined that the bureau's data were the best available on pirate attacks and attempted attacks. Therefore, we determined the data were sufficiently reliable for the purpose of describing the context of piracy as a threat to seafarers and the geographical scope of pirate attacks off the Horn of Africa.
To identify the results of interdiction efforts led and supported by the United States, we obtained and reviewed data from the Combined Maritime Forces for 2008 through June 2010. There are some limitations with the Combined Maritime Forces' data because these data are compiled from military and nonmilitary sources and reporting. Although efforts are made to correlate and confirm the accuracy of these data, the Combined Maritime Forces cannot fully guarantee their accuracy. We discussed data-collection methods, processes for data entry, and the steps taken to ensure reasonable accuracy of the data with both the International Maritime Bureau and the Combined Maritime Forces. We determined the data to be sufficiently reliable for the purposes of this report. To identify the amount of ransoms being paid to Somali pirates, we reviewed monthly ransom data from the Office of Naval Intelligence for 2007 through 2009. Due to the classified nature of the sources and methods used to develop these data, we did not independently verify their reliability. To identify the extent to which U.S. government agencies are collaborating with each other and with international and industry partners, we synthesized key practices for enhancing and sustaining collaboration on complex national security issues from our prior work. We then evaluated the extent to which department and agency actions incorporate select key practices, including (1) developing overarching strategies and mutually reinforcing plans, (2) assigning roles and responsibilities, and (3) creating collaborative organizations that share and integrate information. To obtain information on the nature and extent of collaboration on counterpiracy efforts among agencies and international and industry partners, we reviewed the NSC's Action Plan and department and agency program documents and interviewed agency, international, and industry officials.
To gain insight into new and existing coordination mechanisms applicable to piracy, we observed the weekly interagency conference calls on counterpiracy efforts, attended a Shared Awareness and Deconfliction meeting in Manama, Bahrain, and reviewed program documents. For both of our objectives, we interviewed and, where appropriate, obtained documentation from officials with the following U.S. government agencies:

- Within the Office of the Under Secretary of Defense (Policy): the Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict and Interdependent Capabilities (Counter-Narcotics and Global Threats), the Oceans Policy Advisor in the Office of the Assistant Secretary of Defense for Global Strategic Affairs (Countering Weapons of Mass Destruction), and the Office of the Assistant Secretary of Defense for International Security Affairs (African Affairs)
- Under the Joint Chiefs of Staff: J5 (Strategic Plans and Policy Directorate) for Oceans Policy / Counterpiracy, J3 (Operations Directorate), and J2 (Joint Staff Intelligence Directorate), Piracy Lead
- Office of General Counsel
- Under United States Africa Command: the Strategy, Plans and Programs Directorate; the Intelligence and Knowledge Development Directorate; the Operations and Logistics Directorate, Information Operations Division; the Command, Control, Communications, and Computer Systems and Chief Information Officer Directorate; and the Outreach Directorate, Strategic Communications Division
- Under United States Central Command: the Maritime Liaison Office (Bahrain); the Naval Forces Central Command's Maritime Operational Center (Bahrain); the Chief of Staff, Judge Advocate General's Corps, U.S. Naval Forces Central Command (Bahrain); and the Naval Criminal Investigative Service (Bahrain)
- United States Special Operations Command
- Under the Department of the Navy: the Naval Criminal Investigative Service and the Office of Naval Intelligence
- United States Coast Guard's offices of Assessment, Integration, and Risk Management; Counterterrorism and Defense Operations; International Affairs and Foreign Policy Advisor; Public Affairs; Vessel Activities; Prevention Policy; Maritime and International Law; Policy Integration; Law Enforcement; Operations Law; and the Patrol Forces Southwest Asia (Bahrain)
- National Security Division
- Criminal Division's Office of Overseas Prosecutorial Development Assistance Training and Narcotic and Dangerous Drug Section
- Federal Bureau of Investigation's Criminal Investigative Division, Violent Crimes Section and Organized Crime Section
- United States Attorneys' Office
- Office of the Secretary of State
- Bureau of African Affairs' Office of East African Affairs and Office of
- Bureau of Political-Military Affairs' Office of Plans, Policy and Analysis and Office of International Security Operations
- Office of the Legal Adviser for Law Enforcement and Intelligence; Oceans, International Environmental and Scientific Affairs; Attorney-Adviser (specializing in law of the seas); and Attorney-Adviser (specializing in United Nations issues)
- Bureau of International Narcotics and Law Enforcement Affairs' Office of Anti-Crime Programs, Money Laundering/Terrorism Financing Unit
- Bureau of Democracy, Human Rights, and Labor's Office of Country Reports and Asylum Affairs and Office of Africa and Eurasia
- Bureau of Oceans and International Environmental and Scientific Affairs' Office of Ocean and Polar Affairs
- Bureau of Economic, Energy and Business Affairs' Office of Transportation Policy and Office of Terrorism Finance and Economics Sanctions Policy
- Foreign Policy Advisor from the Department of State to the U.S. Naval Forces Central Command (Manama, Bahrain), and the Permanent Representative to the International Maritime Organization from the Department of State / U.S. Embassy–London

We also interviewed and, where appropriate, obtained documentation from the following:

- International Maritime Organization (London, U.K.)
- European Union Naval Forces (Northwood, U.K.), Maritime Security Centre–Horn of Africa Industry Liaison, Chief of Staff, J4 Movements and Transport, and Industry Liaison
- Combined Maritime Forces (Manama, Bahrain), Coalition Forces' Chief Air Coordination Element and Shared Awareness and Deconfliction Meeting
- North Atlantic Treaty Organization (Northwood, U.K.), Maritime Air Operations, N2 Intelligence Division, N3 Operations Division, and North Atlantic Treaty Organization Shipping Centre
- United Kingdom Foreign & Commonwealth Office, Ministry of Defence, and Department for Transport
- APL Maritime; Baltic and International Maritime Council (BIMCO); Chamber of Shipping of America; International Association of Dry Cargo Shipowners (INTERCARGO); International Association of Independent Tanker Owners (INTERTANKO); International Chamber of Shipping; International Group of P&I Clubs; International Maritime Bureau; International Transportation Workers Federation (ITF); Lloyd's Market Association; Maersk Line Limited; National Academy of Sciences, Transportation Research Board, Marine Board; Society of International Gas Tanker and Terminal Operators Limited (SIGTTO); and the World Shipping Council
- Former Commander of the Combined Maritime Forces (Combined Task Force 151), former United States Navy Judge Advocate General, Royal United Services Institute for Defence and Security Studies, International Institute for Strategic Studies, and the Royal Institute of International Affairs (Chatham House)

We conducted this performance audit from October 2009 to September 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In December 2008, the U.S. National Security Council (NSC) published its Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan), which laid out 14 tasks to implement three lines of action to prevent, disrupt, and prosecute acts of Somali piracy. We assessed the extent to which U.S. government agencies involved in countering piracy have made progress implementing the Action Plan. In addition to the information provided earlier in this report, this appendix contains further details on the steps that those agencies have taken—or have yet to take—to implement various tasks called for under each of the plan's three lines of action: (1) prevent pirate attacks by reducing the vulnerability of the maritime domain to piracy; (2) disrupt acts of piracy consistent with international law and the rights and responsibilities of coastal and flag states; and (3) facilitate the prosecution of suspected pirates by flag, victim, and coastal states, and, in appropriate cases, the United States to ensure that those who commit acts of piracy are held accountable for their actions. We based our assessment on reviews of agency plans, status reports, and interviews with U.S. government, international, and industry officials involved in counterpiracy efforts. The scope and methodology used in our review are described in further detail in appendix I. In concert with the United Nations and international partners, the U.S. government has made substantial progress in helping to establish and maintain a Contact Group of countries willing and able to help combat piracy off the Horn of Africa.
The Action Plan calls for the immediate establishment of a Contact Group to combat piracy off the Horn of Africa, which would meet as necessary to develop and coordinate international policy initiatives, share information, provide resources for building regional capacity to counter piracy, and advocate for other mechanisms to repress piracy. In January 2009, the Contact Group on Piracy off the Coast of Somalia (Contact Group) was formed in response to United Nations Security Council Resolution 1851, and, as of June 2010, it had 49 member nations as well as international organization partners. The Contact Group established a multidonor trust fund to help offset the cost of prosecuting suspected pirates, and in April 2010, members approved $2.1 million for programs in the Seychelles and Somalia. The Department of State (State) orchestrates U.S. participation in the Contact Group, coordinating with officials from the Departments of Defense, Justice, Homeland Security, Transportation, and the Treasury. In addition, the Coast Guard and the Maritime Administration cochair the working group on industry self-protection, which facilitated development and adoption of best management practices for self-protection, in coordination with industry and the International Maritime Organization. Military, industry, and international officials credit these self-protection measures, in part, for the reduction in successful pirate attacks in the Gulf of Aden from 2008 to 2009. According to agency officials, the Department of Defense (Defense) and State have participated in various other working groups, including military coordination and judicial efforts. The U.S. government has made substantial progress on strengthening the use of the Maritime Security Patrol Area in collaboration with its international partners, though there are limits to the reach of government influence on commercial vessels. 
The Action Plan calls for the United States to strengthen the use of the Maritime Security Patrol Area—the area patrolled by coalition Combined Maritime Forces and other navies—by encouraging other countries to assign naval forces and assets to the area, coordinating and sharing information with the other navies, and urging members of the shipping industry to use the Maritime Security Patrol Area. State has encouraged multinational military coordination through bilateral channels and the Contact Group. The U.S. Navy has contributed to both the Combined Maritime Forces and North Atlantic Treaty Organization patrols. In addition, the United States contributes to Shared Awareness and Deconfliction meetings, established to share information with and coordinate the counterpiracy patrols of coalition forces and independent countries. International officials also told us that Combined Maritime Forces, North Atlantic Treaty Organization, and European Union forces are coordinating surveillance and patrol of the Internationally Recommended Transit Corridor, the recommended route within the Maritime Security Patrol Area for commercial vessels transiting the Gulf of Aden. Defense, Coast Guard, the Maritime Administration, and the Maritime Liaison Office have used a variety of methods to encourage commercial vessels to use the Maritime Security Patrol Area and coordinate with naval patrols, such as publishing advisories, maintaining informational Web sites, and sponsoring information-sharing meetings. The Coast Guard requires that U.S.-flagged vessels register their transit plans through the Horn of Africa region with the Maritime Security Centre–Horn of Africa and notify the United Kingdom Maritime Trade Operations office in Dubai, both of which monitor the transit of vessels in the region.
However, U.S.-flagged vessels comprise a small proportion of the ships that transit the high-risk waters off the Horn of Africa, and the Coast Guard regulations mandating self-protection measures apply only to U.S.-flagged vessels. While the U.S. government encourages commercial vessels from other flag states to take advantage of the monitoring provided by navies patrolling the Maritime Security Patrol Area, Defense, Maritime Administration, shipping industry, and international officials estimate that approximately 20 to 25 percent of the shipping traffic in the region does not register its transit with patrolling forces. These officials also told us that, as pirates have expanded their area of operations into the Indian Ocean, coalition forces have faced increased challenges in disrupting attacks, given the infeasibility of establishing secured transit corridors in this area similar to the one used in the Gulf of Aden. The Coast Guard has achieved substantial progress in ensuring that ship security plans for U.S.-flagged vessels have been updated with piracy annexes, and the United States is encouraging other countries to implement similar measures. The Action Plan calls for the United States to urge other nations to update their ship security plans and to encourage vessels in the Gulf of Aden to take specific protective measures. In May 2009, the Coast Guard promulgated the second revision of Maritime Security Directive 104-6, which requires that all U.S.-flagged vessels transiting high-risk areas have an approved security plan to prevent and defend against pirate attacks. Furthermore, the Coast Guard and the Maritime Administration have taken steps to implement this task by issuing guidance to support industry efforts to prevent attacks. For example, the Coast Guard's Port Security Advisories provide information on using armed security teams to protect vessels transiting high-risk waters.
As of July 2010, the Coast Guard had approved the additional security measures submitted by each of the 211 U.S.-flagged vessels identified as traveling through high-risk waters, 108 of which travel through the Horn of Africa region. The Coast Guard ensures that U.S.-flagged vessels transiting high-risk waters have an updated plan by monitoring the movement of U.S.-flagged vessels, checking for approved plans, and investigating compliance when vessels are at certain ports. However, U.S.-flagged vessels comprise only a small proportion of the ships that transit the area, and, according to officials, the influence of the U.S. government on international ships is limited. To encourage international implementation of self-protection measures by commercial vessels, the United States has signed and promoted the nonbinding New York Declaration. According to the declaration, the signatory countries will ensure, when carrying out their obligations under the International Ship and Port Facility Security (ISPS) Code, that vessels on their registry have adopted and documented appropriate self-protection measures in their ship security plans. These plans specify how each vessel will employ the applicable self-protection measures. While officials acknowledge that best management practices do not provide guaranteed protection against a hijacking, officials at the International Maritime Organization and the Maritime Security Centre–Horn of Africa, established by the European Union Naval Force, estimate that the majority of ships hijacked in the Gulf of Aden were not following one of the easiest and least costly self-protection measures: registering their voyage through high-risk waters with the centre.
Although U.S., international, and industry officials told us that no data are available on the extent to which ships transiting high-risk waters are following best practices, U.S., international military, and industry officials estimate that approximately 70 to 80 percent of ships are using best management practices to deter piracy. However, the United States and its international partners still face challenges urging compliance with these practices among the remaining 20 to 30 percent of vessels. In collaboration with the Contact Group, U.S. departments and agencies involved in strategic communication efforts have made some progress in implementing actions called for in the Action Plan. The Action Plan calls for the U.S. government to lead and support a global public information and diplomatic campaign to highlight the international cooperation, coordination, and integration undertaken to repress piracy off the Horn of Africa while emphasizing the destructive effects of piracy on trade, human and maritime security, and the rule of law. Agency officials have stated that the lack of a U.S. presence in Somalia presents challenges to efforts to communicate directly with the Somali population to discourage piracy and makes it difficult to measure the effectiveness of strategic communication efforts. High-level U.S. government officials have warned of the threat of piracy in public statements, and the Coast Guard and the Maritime Administration have actively shared information with members of the shipping industry to encourage self-protection from attack. For example, in April 2009 the Secretary of State outlined four steps State was taking in the aftermath of the hijacking of the MV Maersk Alabama, primarily diplomatic engagement with international partners and Somali government officials, and work with the shipping and insurance industries. 
Further, the Coast Guard held a series of roundtable discussions with the shipping industry to address concerns about ransom payments following the issuance of an April 2010 executive order that prohibits persons under U.S. jurisdiction from making payments to persons designated under the order. State and Department of the Treasury (Treasury) officials also told us they established guidance for and communicated with the shipping industry after the executive order was issued. In addition, according to officials, Defense and State lead interagency meetings held, in part, to gain U.S. consensus on piracy-related strategic communication issues prior to meetings with international partners. State officials also reported contributing to interagency strategic communication efforts of the Contact Group and creating a publicly available maritime security Web page, which includes information on piracy. Defense has developed a strategic communication plan, but it is a classified document for internal use. State officials told us they have drafted a plan to coordinate interagency strategic communication on counterpiracy efforts, including outreach to domestic and foreign audiences to inform them about U.S. and international efforts to combat piracy off the coast of Somalia; at the time of this report, however, the draft was still undergoing review by interagency partners and had not been finalized. The United States has not worked to create a Counter-Piracy Coordination Center as called for in the Action Plan, but we considered a progress assessment for this task not applicable given changing circumstances and the status of other ongoing counterpiracy efforts since the time of the plan's publication.
The Action Plan calls for the creation of a Counter-Piracy Coordination Center to establish a single, centralized service to receive reports of piracy and suspicious vessels, alert maritime interests, gather and analyze information regarding piracy off the Horn of Africa, provide a secure common operating picture for stakeholder governments and the shipping industry, and, as appropriate, coordinate the dispatch of available response assets. However, according to Defense officials, creating such a center would duplicate existing capabilities provided by international partners. Subsequent to the publication of the Action Plan, Defense officials determined that existing efforts were in place to meet the goals outlined for a coordination center. Three organizations are currently carrying out the tasks outlined for a single coordination center and together cover the functions of a Counter-Piracy Coordination Center. The Maritime Security Centre–Horn of Africa is a coordination center where transiting ships voluntarily record their movements and receive updated threat information. It also coordinates available response assets to provide support and protection to mariners. The United Kingdom's Maritime Trade Operations office in Dubai serves as the first point of contact for reporting an attack. The Maritime Liaison Office in Bahrain serves as the link between the commercial maritime community and U.S. and coalition military forces. Other mechanisms exist to coordinate stakeholder governments, such as the Contact Group and its associated working groups, and to coordinate military patrols, such as the Shared Awareness and Deconfliction meetings. The United States has made progress toward seizing and destroying pirate vessels and equipment but has made limited progress delivering suspected pirates for prosecution. The Action Plan calls for the seizing and destroying of vessels outfitted for piracy and related equipment, and states the U.S.
government may conduct and urge others to conduct counterpiracy operations in international waters around Somalia. According to data from the U.S.-led Combined Maritime Forces, coalition and other international partners destroyed or confiscated nearly 100 pirate vessels and confiscated more than 380 weapons, including small arms and rocket-propelled grenades, between August 2008 and June 2010. Coalition forces also report that international partners confiscated approximately 140 items of pirate paraphernalia, including automatic weapons, grappling hooks, ladders, and global positioning system devices, in that same time period. According to military officials, interdicting forces determine on sight whether a vessel is potentially being used for piracy, given the presence of certain gear and weaponry and the absence of typical fishing gear. Military officials also told us that, once piracy equipment is seized and destroyed, U.S. forces follow international protocols and, in the event suspects are not detained, release the vessel and those onboard with sufficient fuel and provisions to reach shore. According to international military officials, European Union and North Atlantic Treaty Organization forces also are monitoring pirate bases on shore from warships and then seizing and destroying pirate skiffs and equipment as they leave the bases. However, military and international officials told us that seizing pirate paraphernalia provides only a temporary obstacle to pirate operations. U.S. efforts to deliver suspected pirates to states for prosecution are hampered by a lack of states that are willing and able to prosecute. The Action Plan states the U.S. government will deliver suspected pirates to states that are willing and able to prosecute in those cases where pirate vessels are seized or destroyed.
As of June 2010, international forces had encountered more than 1,100 suspected Somali pirates since August 2008 but had delivered only approximately 40 percent of them to states for prosecution. According to a report issued by the Department of Defense in May 2010, U.S. military forces have transferred 24 suspected pirates for prosecution to Kenya, the only country with which the United States had an arrangement to accept pirate transfers at the time. According to State and Department of Justice (Justice) officials, Kenya is only willing to accept cases with strong evidence, such as cases in which suspects are caught in the act of committing piracy. According to Combined Maritime Forces officials, when suspected pirates are interdicted at sea and are not engaged in an act of piracy but are in possession of pirate equipment, interdicting forces typically will detain the suspected pirates, confiscate their equipment, and then release the suspects. Additionally, officials stated that because of evidence standards and the limited options for prosecution, interdicting forces are left with little choice but to catch and release the suspected pirates. As of June 2010, approximately 57 percent of the suspects that international forces had encountered were caught and released. Furthermore, military officials told us there have been cases of suspects being encountered multiple times at sea, so the practice of catching and releasing suspects could allow multiple attempts at piracy. Defense officials we spoke with had varied opinions on whether repeat offenders are a significant issue; however, because biometric data—such as fingerprints—are not systematically gathered to track such cases, U.S. and international forces cannot determine whether they are encountering repeat offenders.
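For rough context, the disposition percentages cited above can be translated into approximate headcounts. This back-of-the-envelope sketch uses only the rounded figures reported in the text (the "more than 1,100" encountered is a lower bound and both percentages are approximate), so the results are estimates rather than reported data:

```python
# Approximate disposition of suspected pirates encountered by international
# forces between August 2008 and June 2010, using rounded figures from the text.
encountered = 1_100        # "more than 1,100" encountered -> lower bound
delivered_share = 0.40     # ~40 percent delivered to states for prosecution
released_share = 0.57      # ~57 percent caught and released

delivered = round(encountered * delivered_share)
released = round(encountered * released_share)

print(f"Delivered for prosecution: ~{delivered} suspects")  # ~440
print(f"Caught and released:       ~{released} suspects")   # ~627
```

The remaining few percent would cover other outcomes (for example, suspects killed during interdictions or cases still being resolved), which the text does not quantify.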
Although, as noted in the Action Plan, piracy is a universal crime that any state could potentially prosecute, most states, including the United States, in practice will consider prosecuting suspected pirates in appropriate cases when it is in their national interest to do so. However, according to State officials, some countries lack sufficient domestic law to support prosecution of suspected pirates. Others may have the domestic legal frameworks but lack the resources or political will to take action. State officials also told us that logistical difficulties complicate piracy prosecutions, such as collecting and preserving evidence at sea, bringing in merchant mariners or naval personnel to provide testimony, and proving intent in cases where suspects were not caught in the act. Finally, some countries that might otherwise provide a venue for prosecution may also have concerns that acquitted suspects or convicted pirates who are released after serving a prison sentence may seek asylum. Officials from State told us the U.S. government has prosecuted cases against every suspected pirate captured who attempted an attack on a U.S. vessel. Currently, a total of 12 suspects from attacks on the MV Maersk Alabama (April 2009), USS Nicholas (March 2010), and USS Ashland (April 2010) are being tried in the United States. The U.S. government will approach other affected states for prosecution in cases interdicted by U.S. forces where there is no interest for the U.S. government to prosecute. According to officials at State, preference for prosecution is given to the flag state of a vessel. State officials also said they are encouraging regional countries to prosecute.

Since the Action Plan was issued, the U.S. military and Coast Guard have made substantial progress in providing an interdiction-capable presence by providing resources to a counterpiracy task force under the U.S.-led Combined Maritime Forces, and the U.S. 
Navy has contributed to North Atlantic Treaty Organization counterpiracy operations. According to the Action Plan, the U.S. Navy and Coast Guard forces operating in the region provide persistent interdiction through presence, can conduct maritime counterpiracy operations, and shall coordinate counterpiracy activities with other forces to prevent, respond to, and disrupt pirate attacks. Since the Combined Maritime Forces’ counterpiracy task force was established in January 2009, the U.S. Navy has provided patrol ships, aircraft, surveillance assets, and medical response units, as well as leadership for the international naval coalition conducting counterpiracy operations in the Gulf of Aden and Indian Ocean. According to Defense officials, from June 2009 to June 2010, the U.S. Navy had an average of four to five ships present daily in the Horn of Africa, with two or three of those ships having embarked air assets. Defense officials told us as many as eight U.S. Navy ships could be present on any given day, with Navy ships supporting Combined Maritime Forces and North Atlantic Treaty Organization counterpiracy operations, and other maritime coalition and U.S. national efforts. For example, U.S. Marine Corps aviation units have provided support to counterpiracy operations during transits of the area and, according to agency officials, the Coast Guard has assigned deployable specialized forces and a cutter to the combatant commander to support counterpiracy operations. In addition, the Naval Criminal Investigative Service also supports maritime counterpiracy operations by providing special agents afloat to assist boarding teams and lead immediate investigations into piracy incidents on the high seas. U.S., international, and industry officials credit the reduction in the rate of successful pirate attacks from approximately 40 percent in 2008 to 22 percent in 2009, in part, to international patrols in the Gulf of Aden. The U.S. 
military also initiated and contributes to tactical military coordination and information sharing with international partners through Shared Awareness and Deconfliction meetings that optimize patrol coverage of the transit corridor in the Gulf of Aden and aid with coordination of coalition and independently deployed counterpiracy forces. However, coalition officials acknowledge U.S. and international forces face challenges in interdicting pirate incidents as pirates have adapted their tactics and expanded their area of activity to the much larger and harder-to-patrol Indian Ocean. Pirates have attacked several vessels more than 1,000 nautical miles from Somalia and now threaten an area of nearly 2 million square nautical miles. Analytic estimates from Defense officials show that full coverage of the area affected by piracy would require more than 1,000 ships equipped with helicopters—a level of support Defense officials say is beyond the means of the world’s navies to provide. With current resources, Combined Maritime Forces officials estimate 25 to 30 international ships conduct counterpiracy patrols in the Horn of Africa at any given time. In addition, military officials noted it is hard to predict how long countries will sustain counterpiracy investments, since countries participate in Combined Maritime Forces patrols at will. The Action Plan also states that effective and prompt consequence-delivery mechanisms are critical to the success of interdiction efforts. However, challenges related to judicial capacity and securing prosecution venues may complicate interdiction efforts.

The U.S. government has discussed shiprider programs with several countries, but no counterpiracy shiprider programs have been finalized for this region. The Action Plan calls for supporting and participating in the development of shiprider programs and other bilateral and regional counterpiracy agreements and arrangements. 
Shiprider arrangements would allow foreign law enforcement officials to operate from U.S. naval vessels and facilitate the prosecution of suspected pirates. For example, shipriders from the country that would prosecute suspected pirates would be able to arrest the suspects and collect evidence directly, thereby facilitating the prosecution. According to officials at State, they determined, in discussions with Kenyan officials, that a shiprider program would not facilitate prosecution of suspected pirates in Kenya because Kenyan law requires suspects to be presented before a magistrate within 24 hours of being taken into custody by a Kenyan official, including a shiprider. This requirement would be challenging to meet when suspected pirates are interdicted far out in the Indian Ocean. A shiprider provision was therefore not included in the prosecution arrangement facilitating transfer of suspects between the United States and Kenya for prosecution. According to officials at State, the Seychelles has a similar law, and therefore a shiprider provision was not included in its arrangement with the United States. While State and Justice officials told us there are ongoing discussions regarding arrangements with other countries, such as Mauritius and the Philippines, the U.S. government faces challenges in finding willing partners for such programs. Officials acknowledged that shiprider programs may not be as beneficial for counterpiracy efforts as the authors of the Action Plan intended.

The U.S. government also has been involved in the International Maritime Organization’s effort to conclude a regional arrangement, called the Djibouti Code of Conduct. This arrangement includes sections that address topics similar to those addressed in the Action Plan. 
For example, the code contains provisions related to information sharing regarding pirate activity, reviews of national legislation related to piracy, and the provision of assistance between the signatories. The code also includes a section addressing the possibility of using shipriders. Coast Guard and State officials were involved in the development of the code and have also expressed support for implementing elements of the code.

The U.S. government has not taken any action toward disrupting and dismantling pirate bases ashore, for a number of reasons: the President has not authorized this action, the United States has other interests in the region that compete for resources, and long-standing security concerns hinder the presence of U.S. military and government officials in Somalia. The Action Plan states that piracy at sea can be abated only if pirate bases ashore are disrupted or dismantled. Additionally, the plan states that the appropriate authority to disrupt and dismantle pirate bases ashore has been obtained from the United Nations Security Council and Somali authorities, and states that the United States will work with concerned governments and international organizations to disrupt and dismantle pirate bases to the fullest extent permitted by national law. However, as of April 2010, such action had not been authorized by the President. In addition, Somalia has lacked a functioning central government since 1991. Further, the United States closed its embassy in Mogadishu in 1991, and there is currently no official U.S. military or civilian presence in that country. While the international community, including the United States, continues to provide humanitarian and development assistance to Somalia, challenges have limited efforts to establish peace, security, stability, and an effective and functioning government. According to officials at State and Defense, U.S. 
agencies allow travel to Somalia; however, general practice has severely limited the U.S. presence in Somalia since 1994. Furthermore, State officials told us that there has been no recent travel to Somalia other than a short trip by a senior official made in February 2008. Defense and State officials said that the United States has a number of other higher-priority interests in Somalia and in the region, which compete for military and civilian resources and may ultimately affect counterpiracy decisions.

While Treasury, State, and Justice have each taken steps to achieve some progress toward disrupting pirate revenue, challenges inhibit further implementation of this task. The Action Plan states that the U.S. government will coordinate with all stakeholders to deprive pirates and their supporters of any illicit revenue and the fruits of their crime, advocating the development of national capabilities to gather, assess, and share financial intelligence on pirate financial operations, with the goal of tracing payments to and apprehending the leaders of pirate organizations and their enablers. Treasury served as the lead agency for implementing an executive order signed by the President in April 2010 that blocks all property or interests in property within U.S. jurisdiction of any persons listed in the order and allows for designation of other persons that threaten the peace, security, or stability of Somalia, including those who support or engage in acts of piracy off the coast of Somalia. However, Treasury officials told us the order applies only to assets that pass through U.S. financial institutions or come into the possession or control of persons in the United States or U.S. citizens or permanent residents, which limits the potential effect of the executive order on piracy revenue. As a result, it is not clear to what extent designating pirates under the executive order will achieve the goal of disrupting pirate revenue. 
While officials told us the U.S. government has reserved the right to take enforcement action against private companies for paying ransoms to individuals designated in the executive order, only two pirates have been designated thus far. Representatives of the shipping industry have stated that ship owners have no viable option for rescuing crews being held hostage other than to pay ransoms, and they fear that a failure to pay ransoms could escalate pirates’ violence against crew members. State and Treasury officials told us they have communicated to shipping industry representatives that Treasury and Justice have discretion to decide whether or not to take enforcement action for any violation of the order, and that a decision to take enforcement actions will depend on the facts of each case. Treasury officials told us their efforts to disrupt pirate revenue also have been limited by the lack of sufficient information on pirate networks in Somalia and on the flows of pirate finances, including ransom payments. According to officials at State, the U.S. intelligence community has the strongest understanding of pirate financing, but no U.S. agencies have dedicated resources toward the issue. Federal Bureau of Investigation (FBI) and State officials told us that information related to pirate organizations may be collected in the course of pursuing other U.S. interests in the area, but piracy is not among their top priorities and is unlikely to be assigned resources. As a result, according to FBI officials, the FBI Organized Crime Section is not working to build a case against pirate leaders and enablers. State officials described the need to better use intelligence to target efforts by the U.S. government and other stakeholders, but also acknowledged that the poor security situation in Somalia poses challenges for gathering the intelligence needed to disrupt pirate financing. Ultimately, officials from multiple agencies told us U.S. 
agencies face resource constraints in disrupting pirate financing given higher-priority concerns such as counterterrorism. In addition, the absence of a formal financial sector in Somalia is a major challenge to filling intelligence gaps. Treasury officials stated that the lack of a formal financial sector in Somalia and the pirates’ reliance on informal financial systems present a challenge because many of the tools they normally would use to track financial activity are implemented through formalized financial systems.

State has taken several actions to raise the issue of pirate financing among international partners and to address misconceptions within the shipping industry about the U.S. position on ransoms. The U.S. government has helped elevate the issue of pirate financing within the Contact Group, including releasing a paper to participants. State and Justice also have worked with partner governments and international organizations, such as Interpol and the United Nations, to develop collaborative events linking experts on pirate financing, and sponsored a workshop in Kenya with the United Nations Office on Drugs and Crime that covered money laundering and organized crime.

The U.S. government has made some progress in concluding prosecution arrangements for Somali piracy cases, by securing prosecution arrangements with Kenya and the Seychelles, and is working toward similar arrangements with other countries. The Action Plan calls for the U.S. government to conclude agreements and arrangements to formalize custody and prosecution arrangements both in and outside the region. In January 2009, the U.S. government formalized an arrangement with Kenya to facilitate transfers of piracy cases from U.S. forces. The United States has transferred 24 suspected pirates to Kenya for prosecution, and Defense officials told us one conviction has been secured thus far. In July 2010, the U.S. 
government also concluded an arrangement with the Seychelles for transfers of piracy cases from U.S. forces. In addition, State officials said that discussions are ongoing with countries that have a regional or commercial interest in countering piracy, such as Mauritius, the Philippines, and Tanzania, and that State is taking steps to conclude further arrangements. As of May 2010, according to agency officials, State had encouraged 17 countries to consider prosecution of suspected pirates. However, State officials told us that the lack of prosecution venues is a primary challenge to prosecuting pirates, which may undermine interdiction efforts. According to State and Justice officials, challenges to establishing prosecution arrangements include limited regional capacity and limited interest of states outside the region in prosecuting suspected pirates. In addition, the relatively low rate of prosecutions contributes to the perception that pirates operate with relative impunity. As of June 2010, international forces had encountered more than 1,100 suspected Somali pirates since August 2008 but had delivered only approximately 40 percent to states for prosecution. Although Kenya announced its intent to withdraw from its arrangement with the United States in April 2010, that decision was later reversed, and more than 100 piracy cases were being processed through the Kenyan criminal justice system as of June 2010.

The United States has made some progress in using the United Nations Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation to exercise jurisdiction to prosecute suspected pirates, but this effort involves several challenges. 
The Action Plan calls for the United States to use—and encourage other countries to use—appropriate jurisdiction of flag, port, and coastal states, as well as states of the nationality of victims and perpetrators of piracy, through the prosecution of any persons having committed an act of piracy, and states that the United States will urge other states party to the convention to use it as a vehicle for the prosecution of acts violating the convention. For example, the United States has exercised jurisdiction under the convention to prosecute one pirate in the United States. U.S. officials told us that State, Justice, Defense, and the Coast Guard have been involved in efforts, through the Contact Group and the International Maritime Organization, to encourage use of the convention to prosecute suspects. However, U.S. agency officials cited hurdles to prosecuting pirates, such as limits to affected countries’ willingness and capacity to prosecute pirates, and difficulties associated with collecting evidence in the maritime environment.

The United States has taken some steps to support and encourage the use of other applicable international conventions and customary international law as they relate to prosecuting piracy. The Action Plan calls for the U.S. government to support and encourage the use of relevant and appropriate jurisdiction through the framework of applicable international conventions, in addition to the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation, such as the 1979 Hostage Taking Convention, the 2000 Transnational Organized Crime Convention, and the 1999 Terrorist Financing Convention, and customary international law. For example, the U.S. government has exercised jurisdiction over 11 suspected pirates who attempted attacks on the USS Nicholas in March 2010 and the USS Ashland in April 2010. 
The Action Plan also anticipates ongoing discussion with other states on the possibility of an international court to prosecute suspected pirates, if necessary. However, the U.S. government does not support creation of a dedicated piracy court because of the amount of time, resources, and infrastructure that would be required. State officials said that the U.S. government is interested in solutions for challenges to prosecution, including the possibility of a hybrid court combining a piracy chamber within a national system. However, they said that, despite interest expressed by members of the Contact Group and other nations, no countries have offered their prosecutors or territories for use in establishing a dedicated international court.

The Departments of Justice and State have achieved some progress in providing assistance to several regional countries, and the United States is contributing to international efforts to develop regional judicial capacity. The Action Plan calls for the United States to work with interested parties to identify the nature and scope of international assistance needed to enhance the capacities of regional states in connection with the arrest, detention, prosecution, and fair trial of persons accused of involvement in piracy, and to pursue bilateral programs to provide judicial capacity-building efforts. State has created an assessment tool to identify gaps in regional states’ maritime capabilities, including judicial capacity. The U.S. government provides support to regional partners for building judicial capacity. For example, the resident legal advisor at the U.S. Embassy in Nairobi has provided assistance to Kenya, Tanzania, and the Seychelles. This advisor, who holds a position within Justice’s Office of Overseas Prosecutorial Development, Assistance and Training but is supported by State, told us he provided assistance in developing piracy cases and helped develop guidance for U.S. 
forces on evidence collection and transferring piracy cases to Kenya. Naval Criminal Investigative Service special agents have testified in Kenyan courts and provided counterpiracy training and operational support to officials in the Seychelles. In addition, the U.S. government, in conjunction with the United Nations Office on Drugs and Crime, has sponsored conferences focused on piracy for law enforcement and judges from countries in the Horn of Africa region. Further, the United States has contributed $250,000 to the United Nations counterpiracy effort for regional capacity-building. In April 2010, the Contact Group board that administers a trust fund for prosecution issues, which includes the United States, approved $2.1 million for five projects primarily to support the prosecution of suspected pirates in Somalia and the Seychelles. However, Justice and State officials told us that regional states continue to have a limited capacity to prosecute suspected pirates and incarcerate convicted pirates. Although State officials said that they were attempting to include a funding request for future operations in the current budget cycle, counterpiracy operations at State have no dedicated budget.

Contact Group on Piracy off the Coast of Somalia

In January 2009, the Contact Group on Piracy off the Coast of Somalia (Contact Group) was formed in response to United Nations Security Council Resolution 1851 to facilitate discussion and coordination of actions among countries and organizations working to suppress piracy off the coast of Somalia. The participating countries established four working groups in which all Contact Group parties may participate. Working Group 1 addresses activities related to military and operational coordination and information sharing and the establishment of the regional coordination center, and is chaired by the United Kingdom with the support of the International Maritime Organization. 
Denmark chairs Working Group 2, which addresses judicial aspects of piracy with the support of the United Nations Office on Drugs and Crime. The United States chairs Working Group 3, which works to strengthen shipping self-awareness and other capabilities, with the support of the International Maritime Organization. Egypt chairs Working Group 4, which focuses on improving diplomatic and public-information efforts on all aspects of piracy. As of June 2010, 49 countries, 7 international organizations, and 3 industry observers participate in the Contact Group.

First open for signature in May 2009, the New York Declaration is a commitment by countries to promulgate the internationally recognized best management practices for self-protection to vessels on their registry and to ensure that vessels on their registry have adopted and documented appropriate self-protection measures. As of July 2010, 10 countries had signed the declaration.

The Djibouti Code of Conduct recognizes the problem of piracy and armed robbery against ships in the Horn of Africa region. Signatories declare their intention to cooperate to the fullest extent possible, consistent with their available resources and related priorities, their respective national laws and regulations, and international law in the repression of piracy and armed robbery against ships. Among other things, under the code, participants should set up national focal points to facilitate coordinated, timely, and effective flow of information about piracy and armed robbery against ships. Additionally, according to the code, each participant intends to review its national legislation to ensure it has laws in place to criminalize piracy and armed robbery against ships and adequate provisions for the exercise of jurisdiction, conduct of investigations, and prosecution of alleged offenders. The code is open for signature by the 21 countries in the region and, as of March 2010, 13 of the 21 countries had signed. 
Combined Maritime Forces and Combined Task Force 151

Under the leadership of the commander of the U.S. Naval Forces Central Command and U.S. 5th Fleet, the Combined Maritime Forces is a 25-nation coalition that is focused on countering terrorism, preventing piracy, reducing illegal trafficking of people and drugs, and promoting safety of the maritime environment. Established in 2002, the Combined Maritime Forces patrols more than 2.5 million square miles of international waters to conduct both integrated and coordinated operations. Additionally, the Combined Maritime Forces conducts maritime security operations in the Arabian Gulf, Red Sea, Gulf of Oman, and parts of the Indian Ocean. This expanse includes three critical points in high-risk waters at the Strait of Hormuz, the Suez Canal, and the Strait of Bab al Mandeb at the southern tip of Yemen. In January 2009, the Combined Maritime Forces established Combined Task Force 151 with the sole mission of conducting counterpiracy operations in the Gulf of Aden and the waters off the Somali coast in the Indian Ocean. This is a multinational naval task force made up of countries willing and able to participate in counterpiracy operations. So far, 11 countries have contributed forces to Combined Task Force 151, and several others have agreed to send ships or aircraft or both to participate in counterpiracy operations.

North Atlantic Treaty Organization—Operation Ocean Shield

Operation Ocean Shield is the North Atlantic Treaty Organization’s contribution to international efforts to combat piracy off the Horn of Africa. This operation builds on the North Atlantic Treaty Organization’s previous counterpiracy operations, which began in late 2008 when the North Atlantic Treaty Organization began providing escorts to United Nations World Food Programme vessels transiting the high-risk waters off the Horn of Africa. The North Atlantic Council approved Operation Ocean Shield in August 2009. 
This operation focuses on at-sea counterpiracy operations, support to the maritime community to take actions to reduce incidents of piracy, and regional-state counterpiracy capacity building. This operation is designed to complement the efforts of existing international organizations and forces operating in the area. This operation is being implemented by the Standing North Atlantic Treaty Organization Maritime Group 2, made up of vessels from eight member countries that routinely contribute to the group and other countries that occasionally contribute.

European Union Naval Force Somalia—Operation Atalanta

The European Union is conducting Operation Atalanta to help deter, prevent, and repress acts of piracy and armed robbery off the coast of Somalia. This operation began in late 2008 following the adoption of Resolutions 1814, 1816, 1838, and 1846 by the United Nations Security Council. The operation’s objectives are to protect World Food Programme vessels, humanitarian aid, and African Union Military Mission in Somalia shipping; help deter, prevent, and repress acts of piracy and armed robbery; protect vulnerable shipping; and monitor fishing activities off the coast of Somalia. This operation is being implemented by 14 countries, with operational support provided by a team at the Northwood Operation Headquarters. Operation Atalanta has been extended by the European Council until December 2012.

Independent deployers are countries that are not part of the coalition forces. These countries deploy naval forces to the region under national auspices to escort their ships through high-risk waters and to monitor counterpiracy operations, and may coordinate with coalition patrols. Although the Action Plan considers piracy to be a universal crime that any country can prosecute, in practice, most countries, including the United States, will consider prosecuting suspected pirates in appropriate cases when it is in their national interest to do so. 
A single piratical attack often affects the interests of numerous countries, including the country in which the vessel is flagged, the various countries of nationality of the seafarers taken hostage, regional coastal countries, the country of the vessel or cargo owner, and transshipment and destination countries.

Various organizations representing the interests of the shipping industry have been involved in efforts to prevent or respond to piracy off the Horn of Africa. For example, the 12 shipping industry organizations actively involved in the development of the “Best Management Practices to Deter Piracy in the Gulf of Aden and off the Coast of Somalia” represent the interests of ship owners, seafarers, marine insurance companies, and others, and included the International Association of Independent Tanker Owners, International Chamber of Shipping, Oil Companies International Marine Forum, Baltic and International Maritime Council, Society of International Gas Tanker and Terminal Operators, International Association of Dry Cargo Shipowners, International Group of Protection and Indemnity Clubs, Cruise Lines International Association, International Union of Marine Insurers, Joint War Committee & Joint Hull Committee, International Maritime Bureau, and International Transport Workers Federation.

Pirates have expanded their area of operations, with an increasing number of attacks occurring in the Indian Ocean, an area much larger than the Gulf of Aden. Defense officials report that pirates now threaten an area of nearly 2 million square nautical miles in the Somali Basin and Gulf of Aden. Figure 11 shows the number and location of pirate attacks off the Horn of Africa reported to the International Maritime Bureau in 2007, 2008, 2009, and the first half of 2010.

In addition to the contacts above, Dawn Hoff, Assistant Director; Patricia Lentini, Assistant Director; Elizabeth Curda; Susan Ditto; Nicole Harms; Barbara Hills; Brandon L. 
Hunt; Farhanaz Kermalli; Eileen Larence; Tom Melito; Tobin McMurdie; John Mingus; Susan Offutt; Terry Richardson; Mike Rohrback; Leslie Sarapu; Amie Steele; Gabriele Tonsil; Suzanne Wren; and Loren Yager made key contributions to this report.
Somali pirates operating off the Horn of Africa have attacked more than 450 ships and taken nearly 2,400 hostages since 2007. A small number of U.S.-flagged vessels have been among those affected. As Somalia lacks a functioning government and is unable to repress piracy in its waters, the National Security Council (NSC) developed the interagency Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan) in December 2008 to prevent, disrupt, and prosecute piracy off the Horn of Africa in collaboration with international and industry partners. GAO was asked to evaluate the extent to which U.S. agencies (1) have implemented the plan, and any challenges they face in doing so, and (2) have collaborated with partners in counterpiracy efforts. GAO examined counterpiracy plans, activities, collaborative practices, and data, and interviewed industry and international partners and officials at U.S. agencies and the Combined Maritime Forces in Bahrain. The U.S. government has made progress in implementing its Action Plan, in collaboration with international and industry partners, but pirates have adapted their tactics and expanded their area of operations, almost doubling the number of reported attacks from 2008 to 2009. The U.S. government has yet to evaluate the costs, benefits, or effectiveness of its efforts or update its plan accordingly. The United States has advised industry partners on self-protection measures, contributed leadership and assets to an international coalition patrolling pirate-infested waters, and concluded prosecution arrangements with Kenya and the Seychelles. Officials credit collaborative efforts with reducing the pirates' rate of success in boarding ships and hijacking vessels in 2009.
However, from 2007 to 2009, the most recent year for which complete data were available, the total number of hijackings reported to the International Maritime Bureau increased, ransoms paid by the shipping industry increased sharply, and attacks spread from the heavily patrolled Gulf of Aden--the focus of the Action Plan--to the vast Indian Ocean. The Action Plan's objective is to repress piracy as effectively as possible, but the effectiveness of U.S. resources applied to counterpiracy is unclear because the interagency group responsible for monitoring the Action Plan's implementation has not tracked the cost of U.S. activities--such as operating ships and aircraft and prosecuting suspected pirates--nor systematically evaluated the relative benefits or effectiveness of the Action Plan's tasks. GAO's prior work has shown that federal agencies engaged in collaborative efforts need to evaluate their activities to identify areas for improvement. Moreover, as pirates have adapted their tactics, the Action Plan has not been revised. Without a plan that reflects new developments and assesses the costs, benefits, and effectiveness of U.S. efforts, decision makers will lack information that could be used to target limited resources to provide the greatest benefit, commensurate with U.S. interests in the region. The U.S. government has collaborated with international and industry partners to counter piracy, but it has not implemented some key practices for enhancing and sustaining collaboration among U.S. agencies. According to U.S. and international stakeholders, the U.S. government has shared information with partners for military coordination. However, agencies have made less progress on several key efforts that involve multiple agencies--such as those to address piracy through strategic communications, disrupt pirate finances, and hold pirates accountable--in part because the Action Plan does not designate which agencies should lead or carry out 13 of the 14 tasks. 
For instance, the Departments of Defense, Justice, State, and the Treasury all collect information on pirate finances, but none has lead responsibility for analyzing that information to build a case against pirate leaders or financiers. The NSC, the President's principal arm for coordinating national security policy among government agencies, could bolster interagency collaboration and the U.S. contribution to counterpiracy efforts by clarifying agency roles and responsibilities and encouraging the agencies to develop joint guidance to implement their efforts. GAO recommends that the NSC reassess and update its Action Plan; identify metrics; assess the costs, benefits, and effectiveness of U.S. counterpiracy activities; and clarify agency roles and responsibilities. The NSC did not comment. The Departments of Defense, Homeland Security, Justice, State, Transportation, and the Treasury provided comments to clarify facts in the report.
The National Energy Policy report was the product of a short-term, labor-intensive process that involved the efforts of several hundred federal employees governmentwide. In the 3½ months between NEPDG’s inception and its presentation of the final report, the Principals and Support Group controlled most facets of the report’s development, including setting meeting schedules and agendas, controlling the workflow, distributing work assignments, rewriting chapters, approving recommendations, and securing the report’s contents from premature disclosure. Senior agency officials served on a select interagency Working Group, while the majority of staff working on the NEPDG effort played a tributary role: (1) helping their agency fulfill its NEPDG-related obligations, (2) providing NEPDG with analytical information, and (3) responding to the Support Group’s subsequent requests for information, review, or comment. In developing the National Energy Policy report, the NEPDG Principals, Support Group, and participating agency staff also met with, solicited input from, or received information and advice from nonfederal energy stakeholders, primarily petroleum, coal, nuclear, natural gas, and electricity industry representatives and lobbyists. To a more limited degree, they also received information from academic experts, policy organizations, environmental advocacy groups, and private citizens. NEPDG met and conducted its work in two distinct phases: the first phase culminated in a March 19, 2001, briefing on challenges relating to energy supply and the resulting economic impact; the second phase ended with a May 16, 2001, presentation of the final report to the President. Figure 1 depicts the top-down process and its participants.
In a January 29, 2001, memorandum, the President established NEPDG—composed of the Vice President, nine cabinet-level officials, and four other senior administration officials—to gather information, deliberate, and make recommendations to the President by the end of fiscal year 2001. The President called on the Vice President to chair the group, direct its work, and, as necessary, establish subordinate working groups to assist NEPDG. The President requested NEPDG to submit two reports: the first, an assessment of the difficulties experienced by the private sector in ensuring that local and regional energy needs are met; the second, a report outlining a recommended national energy policy designed to help the private sector and, as necessary and appropriate, federal, state, and local governments, to promote dependable, affordable, and environmentally sound production and distribution of energy for the future. More specifically, the memorandum mentioned four areas of concentration: (1) growing demand for energy; (2) the potential for disruptions in energy supplies or distribution; (3) the need for responsible policies to protect the environment and promote conservation; and (4) the need for modernization of the energy generation, supply, and transmission infrastructure. The 14 NEPDG members—the Vice President, the nine cabinet-level officials, and the four other senior administration officials—were responsible for developing the National Energy Policy report. In a series of formal meetings convened by the Vice President, the group presented briefings, received assignments and the latest drafts, and discussed agenda items and recommendations. The following list shows the NEPDG members.
The Vice President, NEPDG Chair;
The Secretary of State;
The Secretary of the Treasury;
The Secretary of the Interior;
The Secretary of Agriculture;
The Secretary of Commerce;
The Secretary of Transportation;
The Secretary of Energy;
The Director of the Federal Emergency Management Agency;
The Administrator of the Environmental Protection Agency;
The Director of the Office of Management and Budget;
The Assistant to the President and Deputy Chief of Staff for Policy;
The Assistant to the President for Economic Policy; and
The Deputy Assistant to the President for Intergovernmental Affairs.

NEPDG formally convened 10 times between January 29, 2001, and May 16, 2001. Meetings were held on the following dates: January 29; February 9 and 16; March 12 and 19; April 3, 11, and 18; and May 2 and 16, 2001. All but two of the meetings were held in the Vice President’s Ceremonial Office. According to OVP staff and other federal officials who attended these formal meetings, attendance was strictly limited to officers and employees of the federal government. These officials indicated that none of the Principals’ meetings was open to the public, nor did any nonfederal participants attend. However, no party provided us with any documentary evidence to support or negate this assertion. Due to space constraints, the Principals’ meetings typically included the Vice President, the Principals and their accompanying staff, the Support Group, and members of the Vice President’s staff. When a Principal could not be present, or had yet to be appointed, another agency official attended instead. Agency officials participating in these meetings could not recollect whether official rosters or minutes were kept at the meetings. The 10 Principals’ meetings covered a variety of topics, depending on the status of efforts on the report and concerns raised about these efforts.
The Support Group developed the meeting agendas and sent them out to agencies shortly before the meetings commenced. According to the proposed meeting agendas and our discussions with agency officials, the meetings generally lasted between 1 and 2 hours, and nearly all of them included a brief update on the California energy situation. The early meetings involved more procedural discussions than the later meetings, which focused more on a discussion of specific policy recommendations. (See table 1.) A support staff of seven—six DOE employees assigned to OVP and one White House fellow—assisted NEPDG in developing the National Energy Policy. The Support Group consisted of an executive director, a deputy director, two senior professionals, a communications director, the fellow, and a staff assistant. The Support Group served as the hub of the overall NEPDG effort and coordinated its workflow. Among its many tasks, the Support Group assigned specific responsibilities and chapters to individual agencies; established and presided over an interagency Working Group; scheduled and attended NEPDG-related meetings and determined their agendas; set internal deadlines; controlled the workflow; served as a central collection and distribution point for participating agencies’ draft outlines, report chapters, comments, and recommendations; and drafted the final report. The executive director and deputy director also held meetings with various agency staff to discuss their agencies’ input to individual chapters, conduct peer review sessions, and discuss other issues. The Support Group did not generally discuss its activities with staff at the agencies. Instead the Support Group frequently used meetings as a forum to unveil new assignments, drafts, topics, and guidance for Working Group members to deliver back to their respective agencies. 
The Support Group staff, specifically the executive director and deputy director, provided instructions to the Working Group participants and coordinated the activities of each participating agency. Agencies transmitted their work product to other Working Group members largely through the White House. To coordinate the day-to-day work of developing the National Energy Policy report, the NEPDG executive director established an interagency Working Group, comprised of staff-level officials from each participating agency and several White House and Support Group staff. The NEPDG executive director and deputy director oversaw the Working Group’s activities, instructed participating agencies on their roles and assignments, and facilitated communication among the Working Group participants. The Working Group developed a draft outline for the energy policy report and relayed work assignments to the agencies responsible for particular areas. Available information did not allow us to determine the number of Working Group meetings held or the number of attendees at any given meeting. NEPDG members were free to assign one or more staff to the Working Group. The Working Group met frequently in February and March 2001 to review the latest outlines and drafts, report on the status of their specific assignments, represent agency views, provide comments to other agencies, and obtain further instructions. For example, the first Working Group meeting held on February 9, 2001, concentrated on the group’s approach to developing a national energy policy and the milestones for completing the process. The second meeting held on February 13, 2001, focused on determining the chapters that would be included in the final report. Subsequent meetings typically involved a review of drafts in which the lead authors would lead discussion on a chapter’s main points. Attendees would comment on the chapters or propose new or revised text for the group’s discussion. 
The Working Group considered various alternatives in language, tone, and recommendations for the report and then decided on a particular course of action to recommend to the Vice President. The Working Group met often in February and March 2001, generally several days before and immediately following the Principals’ meetings. Most of these meetings took place in the Vice President’s Ceremonial Office, although several had to be rescheduled elsewhere. Working Group meetings were frequently cancelled or postponed as a result of scheduling conflicts. In a sworn declaration submitted to the court in one of the lawsuits seeking NEPDG records, the NEPDG deputy director stated that all attendees at the Working Group meetings were federal employees, with one exception—a contractor who provided technical writing and graphic design services worked with the group and sat in on portions of no more than three of the meetings. However, attendance lists and minutes of these meetings, if kept, were not made available to us, nor were members of the Support Group allowed to discuss these meetings with us. Thus, we were unable to verify any assertions about the composition of personnel at the meetings or about the general subjects discussed. The Working Group met with Support Group staff for the last time on April 3, 2001. For the remainder of April 2001, the Support Group worked alone, condensing the list of potential recommendations for NEPDG discussion and recasting the chapters to fit the recommendations. During this period, the Support Group contacted agencies primarily to verify facts or rewrite specified sections of the report. Agency officials rejoined the process after April 30, 2001, when the Support Group released the draft chapters for final comment. The development of the National Energy Policy report involved hundreds of staff from nine federal agencies and several White House offices.
Agencies had considerable latitude in determining how to staff their NEPDG assignments. Most agencies developed a multilevel, top-down process coordinated by the agency’s lead NEPDG contact or Working Group member. Generally, the NEPDG Support Group forwarded specific writing assignments, information requests, meeting times, and agendas to the agency contacts, who then disseminated the information to a coordination team. The coordination team distributed assignments to lead officials in offices or bureaus throughout the department. These officials then assigned staff to complete the tasks. When the completed work had interoffice concurrence, it was passed back up the chain of command. The NEPDG agency staff contact then reviewed and approved all agency submissions before releasing them to the Principals, the Support Group, or other agencies for review or comment. Agency staff contacts also held regular update meetings with the coordination team and provided assorted updates and briefings to the agency Principal. Not all agencies experienced the same workload. For example, DOE, which was assigned the lead role in developing multiple chapters, had greater responsibilities, more meetings to attend, and larger efforts to coordinate than some other agencies, such as Interior, that played more of an advisory role. Frequent interaction also took place between agencies in developing the report chapters. More than 80 DOE employees from eight departmental offices had direct input into the development of the National Energy Policy report, including science specialists and representatives with significant science expertise. DOE’s Senior Policy Advisor to the Secretary led the department’s internal effort to develop information for an interim and final report, and to identify policy recommendations for the report.
The official joined the Acting Director of the then Office of Policy in periodic meetings with the Support Group staff and other agency officials to discuss drafts of specific chapters. In addition, the official joined DOE Office of Policy and program officials to relay comments from NEPDG meetings and to coordinate writing activities within DOE. The Acting Director of the Office of Policy, who was responsible for the day-to-day coordination and management of the process of producing DOE’s contributions to the NEPDG effort, led a coordination team of senior managers from the department’s Office of Energy Efficiency and Renewable Energy, Office of Nuclear Energy, Office of Fossil Energy, Office of Policy, Office of International Affairs, Energy Information Administration, and the Bonneville Power Administration. The team was charged with coordinating the writing of chapters, and each office formed a similar group within its area of expertise to write its respective chapters. The Office of Policy took the lead on chapter 1 (Taking Stock), Energy Efficiency took the lead on chapter 4 (Using Energy Wisely) and chapter 6 (Nature’s Power), and Fossil Energy took the lead on chapter 5 (Energy for a New Century). In addition, DOE contributed draft sections to chapters for which other agencies had been assigned the lead role. Each office developed recommendations and, after internal discussions, forwarded them for high-level review within DOE before they were released to the NEPDG Principals for review. DOE staff researched historical information about energy and energy markets; identified key energy issues; examined and analyzed the current situation in energy markets; discussed likely energy issues, such as energy production, conservation and energy efficiency, energy prices, renewable and alternative energy sources, and national energy security; and prepared issue papers, memoranda, and talking points relating to these subjects.
They also assisted with writing and reviewing drafts of report chapters, providing supporting statistical and other information, reviewing and responding to comments from other executive branch components, fact-checking, developing citations and graphics, and briefing the Secretary on energy policy issues. Interior was not assigned a lead role in writing any of the report chapters. The department’s relationship with NEPDG, including the Working Group and Support Group staff, therefore consisted of the discussions at Principals’ and Working Group meetings, comments on drafts, provision of an options paper, and responses to questions from NEPDG staff. To support the NEPDG effort, Interior’s Office of Policy Analysis formed an energy task force composed of 11 issue teams to examine opportunities to make more energy available from public lands and to streamline and improve various planning and permitting processes for facilitating energy development. Approximately 100 Interior employees, representing 13 departmental offices or bureaus, helped to develop information for the NEPDG effort. These teams helped develop an internal paper that agency officials used during Working Group discussions of other agencies’ draft chapters. EPA’s general role was to ensure that environmental issues were accurately and adequately addressed and reflected in the development of the report. More than 110 EPA employees participated in the agency’s internal NEPDG efforts. EPA’s Associate Administrator for Policy, Economics, and Innovation served as the lead manager of the agency’s NEPDG activities, overseeing its role in drafting the report chapter on the environment (Protecting America’s Environment) and analyzing environmental issues contained in the other draft chapters of the report. This EPA official and two senior managers from the Office of Air and Radiation worked closely with senior staff from other offices within EPA and senior officials from other contributing agencies.
The office leads circulated the draft to others, usually to staff within their particular office, as they deemed appropriate. The managers reviewed documents each time EPA staff prepared or revised them. Upon approval, EPA’s draft was then conveyed to the Support Group. The NEPDG Principals, Support Group, Working Group, and participating agency officials met with, solicited input from, or received information and advice from a variety of nonfederal energy stakeholders while developing the National Energy Policy report. According to our analysis of agency documents produced under court order, stakeholder involvement in the NEPDG process included private citizens offering general energy advice to the President, industry leaders submitting detailed policy recommendations to NEPDG, and individual meetings with Principals as well as the Vice President. Based on the limited information at our disposal, we cannot determine the extent to which submissions from any of these stakeholders were solicited, influenced policy deliberations, or were incorporated into the final report. Nor can we provide a comprehensive listing of the dates or purposes of these meetings, their attendees, or how the attendees, when solicited, were selected, because of OVP’s unwillingness to provide us with information. The Principals met with a variety of nonfederal entities to discuss energy issues and policy. DOE reported that the Secretary of Energy discussed national energy policy with chief executive officers of petroleum, electricity, nuclear, coal, chemical, and natural gas companies, among others. The Secretary of Energy also reportedly asked nonfederal parties for their recommendations for short- and long-term responses to petroleum product price and supply constraints.
Several corporations and associations, including Chevron, the National Mining Association, and the National Petrochemical & Refiners Association, provided the Secretary of Energy with detailed energy policy recommendations. EPA reported that agency managers—including the EPA Administrator—held many meetings with outside parties, where the issue of energy policy was raised. For example, according to the Administrator’s schedule, the Administrator and agency staff met separately with the Alliance of Automobile Manufacturers, the Edison Electric Institute, and a group of environmental and conservation leaders. Interior reported that the Secretary of the Interior and staff attended meetings with private industry to discuss energy issues, including one meeting with Rocky Mountain-based petroleum companies interested in leasing federal lands and another meeting with an Indian tribe from Pyramid Lake, Nevada, interested in building a power plant on its lands. In addition, in its response to a congressional inquiry, OVP reported that the Vice President met with the chairman and chief executive officer of Enron Corporation to discuss energy policy matters. The Vice President also received a lobbying group’s appeal to stop treating carbon dioxide as a pollutant and policy recommendations from a coalition of utilities, coal producers, and railroads calling itself the Coal-Based Generation Stakeholders. We cannot determine the extent to which any of these communications with NEPDG Principals affected the content or development of the final report.
In response to another congressional inquiry, the NEPDG executive director reported that the Support Group staff held meetings with individuals involved with companies or industries, including those in the electricity, telecommunications, coal mining, petroleum, gas, refining, bioenergy, solar energy, nuclear energy, pipeline, railroad, and automobile manufacturing sectors; environmental, wildlife, and marine advocacy; state and local utility regulation and energy management; research and teaching at universities; research and analysis at policy organizations; energy consumers, including consumption by businesses and individuals; a major labor union; and about three dozen Members of Congress or their staffs. However, the NEPDG executive director did not specify the frequency, length, or purpose of the meetings, or how participants were selected to attend. In addition, OVP reported that the Support Group staff also met with numerous nonfederal stakeholders during the development of the final report, including a meeting with representatives of various utilities and two meetings with representatives of Enron Corporation. Finally, senior agency officials participated in numerous meetings with nonfederal energy stakeholders to discuss the national energy policy. Based on our analysis of the agency documents produced under court order, senior DOE officials, in addition to attending meetings with the Secretary of Energy, met with a variety of industry representatives, lobbyists, and energy associations, including the American Coal Company, Small Refiners Association, the Coal Council, CSX, Enviropower, Inc., Detroit Edison, Duke Energy, the Edison Electric Institute, General Motors, the National Petroleum Council, and the lobbying firm of Barbour, Griffith & Rogers. These senior DOE officials also solicited recommendations, views, or points of clarification from other parties.
For example, one senior DOE official solicited detailed energy policy recommendations from a variety of nonfederal energy stakeholders, including the American Petroleum Institute, the National Petrochemical and Refiners’ Association, the American Council for an Energy-Efficient Economy, and Southern Company. This official also received policy recommendations from others, including the American Gas Association, Green Mountain Energy, the National Mining Association, and the lobbying firms the Dutko Group and the Duberstein Group. Senior EPA officials, in addition to accompanying the Administrator to meetings with nonfederal energy stakeholders, discussed issues related to the development of an energy policy at meetings with the Alliance of Automobile Manufacturers, the American Public Power Association, and the Yakama Nation Electric Utility. Interior told us that senior agency officials met with nonfederal parties to discuss energy policy or other energy-related issues, but provided us with no further details about these meetings. In addition to the meetings listed above, the agency documents reveal that the NEPDG Principals, Support Group, and agency staff received a considerable amount of unsolicited advice, criticisms, meeting requests, and recommendations from other parties, including private citizens; university professors; local, state, and international officials; regional energy stakeholders; and a variety of interest groups representing energy-related causes. Again, because of the limited information available to us, we cannot determine the extent to which these communications affected the content or development of the final report. The National Energy Policy report was developed in two distinct phases, in accordance with the general criteria defined in the President’s January 29, 2001, memorandum.
The first phase involved the development of an outline; the distribution of research and writing assignments to participating agencies; and the development of narrative, topical chapters that ultimately formed the basis of the final report. The first phase culminated in a March 19, 2001, presentation to the President on energy supply disruptions and their regional effects. In the second phase, agency officials reviewed and finalized draft chapters; consolidated a list of options and recommendations and discussed them with the Working Group; and developed short position papers on each of the recommendations that the Working Group considered to be controversial. These papers served as the primary basis for discussion at subsequent Principals’ meetings. After the final meeting of the Working Group on April 3, 2001, the Support Group took the provided materials under consideration and drafted the final report. Agency officials had a final opportunity to review the partial draft of the recommendations before the report was finalized, published, and presented to the President on May 16, 2001, as the National Energy Policy. In the first week of the administration, the Vice President worked with the soon-to-be-named NEPDG executive director to define the process for developing a proposed national energy policy. They decided that a group of senior federal officials would generate an interim report that would detail energy supply problems and a final report that would outline solutions. The President’s memorandum, released on January 29, 2001, reflected this work plan. In early February 2001, the NEPDG executive director distributed a memorandum at the first Working Group meeting detailing the group’s mission, reporting requirements, and a proposed structure of seven targeted interagency workgroups to review specific issue areas. At the meeting, the Support Group named lead agencies to coordinate the development of each of the 10 assigned chapters. 
The Support Group tasked the lead agencies—DOE, DOT, EPA, Treasury, and the State Department—with developing a report outline for each of their assigned chapters to be forwarded to the White House for final approval. The Support Group instructed agencies to write chapters without proposing improvements, noting that the draft chapters would not be sent to the President, but would serve as the basis of a more detailed version that NEPDG would use when drafting the final report. While the drafting of chapters for the final report continued, the Support Group, Working Group, and participating agency staff focused much of their collective effort throughout February on developing sections of an interim report. The Support Group released the interim report to the Principals for review in early March 2001, then shifted its attention to the second phase of the process—finalizing the draft and making recommendations. The interim briefing, which took place at the White House on March 19, 2001, mostly consisted of oral presentations on the energy supply and demand situation and short-term regional energy supply disruptions. Immediately following the March 19, 2001, presentation of the interim report to the President, the Working Group met to refine the chapters of the final report and to discuss potential recommendations that agencies had accumulated. The Support Group provided the agencies with a copy of the Bush-Cheney energy-related initiatives developed during the presidential campaign, asking them to ensure that they incorporated these initiatives when developing their respective recommendations. They asked each agency in the Working Group to prepare an “option paper” that included proposals for streamlining energy production and steps to implement them. In March 2001, the Working Group continued to develop chapters and discuss recommendations, and pared down each agency’s list of potential recommendations. 
The Support Group prepared five one-page issue papers summarizing the recommendations that the Working Group considered to be controversial—a multi-pollutant strategy, fuel efficiency standards, energy efficiency, nuclear energy, and the moratoria on Outer Continental Shelf leasing—and forwarded them to the Principals for further discussion. Shortly before the April 3, 2001, Principals’ meeting, the Support Group added a last-minute agenda item to be discussed with the other recommendations. The actual agenda item, however, had been redacted from the documents that we reviewed. In early April 2001, the Support Group stopped accepting comments on the proposals and began sorting through them, asking agencies to incorporate what the Support Group deemed to be the less controversial recommendations into the draft chapters. For the remainder of April 2001, the Support Group mostly worked alone, selecting recommendations to present to NEPDG Principals and rewriting the chapters to fit the recommendations. The Principals met to discuss several of the potentially more controversial recommendations and to decide which proposals to add to the chapters. In some cases, agencies were told to rewrite sections of the chapters to incorporate the proposed recommendations. The agencies continued to draft their chapters and incorporate various other agencies’ comments until the Support Group issued a deadline and requested the final submission of chapters for editing. The Support Group then released the drafts to all of the agencies for a cursory review, informing agency officials that the drafts were now considered “final” and that only high-priority comments would be accepted. The Support Group also asked agencies to hold their lists of proposed recommendations closely and not to circulate them, and it sent the draft chapters to the agencies without any recommendations attached. 
On April 30, 2001, the Support Group invited each agency’s Principal or chief of staff to visit the White House for an on-site review of the final draft recommendations. The Support Group continued to make last-minute alterations to the report to incorporate revised recommendations, called on the agencies to verify facts and to provide citations, and ushered the final draft through the editing and printing processes. On May 16, 2001, the Vice President presented the final National Energy Policy report to the President. The final report contained over 100 proposals to increase the nation’s energy supply. The presentation brought the National Energy Policy report development process to a close.

None of the key federal entities involved in the NEPDG effort provided us with a complete accounting of the costs they incurred during the development of the National Energy Policy report. Several agencies provided us with rough estimates of their respective NEPDG-related costs, but these estimates, all calculated in different ways, were not comprehensive. The two federal entities responsible for funding the NEPDG effort—OVP and DOE—did not provide us with the comprehensive cost information we requested. OVP provided us with 77 pages of information, two-thirds of which contained no cost information, while the remaining one-third contained miscellaneous information of little to no usefulness. In response to our requests seeking clarification of the provided information, OVP stated that it would not provide any additional information. DOE, EPA, and Interior provided us with their estimates of costs associated with the NEPDG effort, which totaled about $860,000. DOE provided us with selected cost information, including salary estimates, printing and publication costs, and other incidental expenses. EPA and Interior provided salary cost estimates for some of their senior officials involved in the report’s development. 
DOE and Interior officials reported that although most of the identified costs were salary-related, employees had not specifically recorded the amount of time they had spent on NEPDG-related tasks because many of them already worked on energy policy and thus would have likely conducted a substantial portion of the work, even without the NEPDG project taking place. An Interior official cautioned us not to expect a precise estimate, noting that the estimate primarily had been based on employee recollection and guesswork.

In his January 29, 2001, memorandum that established NEPDG, the President instructed the Vice President to consult with the Secretary of Energy to determine the need for funding. DOE was to “make funds appropriated to the Department of Energy available to pay the costs of personnel to support the activities of the Energy Policy Development Group.” The memorandum further stated that if DOE required additional funds, the Vice President was to submit a proposal to the President to use “the minimum necessary portion of any appropriation available to the President to meet the unanticipated need” or obtain assistance from the National Economic Council staff.

In response to our inquiry about the NEPDG’s receipt, disbursement, and use of public funds, OVP provided us with 77 pages of “documents retrieved from the files of the Office of the Vice President responsive to that inquiry.” The Vice President later referred to these documents as “responsive to the Comptroller General’s inquiry concerning costs associated with the Group’s work.” Our analysis of the documents, however, showed that they responded only partially to our request. The documents that OVP provided contain little useful information or insight into the overall costs associated with the National Energy Policy development. Of the 77 pages that we received, 52 contained no cost information while the remaining 25 contained some miscellaneous information of little to no usefulness. 
For example, OVP provided us with two pages illustrating a telephone template and four pages containing indecipherable scribbling, but no discernible cost information. OVP also provided documents that contained some miscellaneous information—predominantly reimbursement requests, assorted telephone bills, and random items, such as the executive director’s credit card receipt for pizza. In response to our requests seeking clarification of the provided information, OVP stated that it would not provide us with any additional information. Consequently, we were unable to determine the extent to which the OVP documents reflected costs associated with the report’s development.

DOE reported spending about $300,000 on NEPDG-related activities, more than half of which was used for the salaries of its employees detailed to OVP and two designated DOE staff contacts for the period from January 29, 2001, through May 29, 2001. DOE reported spending most of the remaining funds to print and produce 10,000 policy publications and graphic support, pay for 16 large briefing boards, and reimburse the NEPDG executive director for his lodging and per diem expenses. DOE did not provide any information on the Support Group members’ requests for the reimbursement of taxi, parking, meal, or duplicating expenditures contained in the 77 pages of OVP documents. However, DOE officials noted that the department did not pay for furniture, telephones, or other expenses that DOE employees on the Support Group may have incurred setting up their offices, saying that they assumed that the White House paid these costs.

EPA reported spending an estimated $131,250 in NEPDG-related costs to pay the salaries of the officials most involved in NEPDG activities. EPA officials calculated this estimate by multiplying the number of full-time equivalents involved by the officials’ average annual salaries and prorating the amount for the 3½ months they spent working on the NEPDG effort. 
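EPA's proration method can be sketched in a few lines of arithmetic. Note that EPA did not disclose the underlying full-time-equivalent count or average salary; the values below are purely illustrative assumptions, chosen so the result matches the reported $131,250 total.

```python
# Hypothetical inputs: EPA did not report the actual FTE count or average
# salary. These values are assumptions chosen only to illustrate the method.
full_time_equivalents = 3
average_annual_salary = 150_000  # dollars per FTE per year (assumed)
months_on_nepdg = 3.5            # the 3 1/2 months EPA officials cited

# Prorate the annual salary cost for the fraction of the year spent on NEPDG.
estimate = full_time_equivalents * average_annual_salary * (months_on_nepdg / 12)
print(f"${estimate:,.0f}")  # → $131,250
```

Any combination of FTEs and salaries that multiplies out to the same annualized total would produce the same prorated figure, which is one reason such estimates cannot be verified without the underlying records.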
EPA officials also reported that the agency incurred multiple incidental expenses in helping to prepare the NEPDG report, such as taxi fares, duplication costs, and courier fees, but they neither itemized these expenditures nor provided us with any further documentation.

Interior reported spending an estimated $430,000 on salary-related costs associated with the NEPDG report development. It also reported that it did not incur any NEPDG-related contracting costs. The agency official who provided us with the estimate warned that although it was the best estimate possible, its precision was uncertain because it had been based on employees’ personal recollections and guesswork as to the amount of time they spent working on NEPDG-related activities. The official then added 20 percent to the estimated sum to reflect the employee benefits that accrued during the period. Interior did not create a unique job code or accounting process to track the time that Interior employees spent on developing the NEPDG report. According to one official, many of the staff involved with the NEPDG effort already worked on energy policy for their respective bureaus or offices, and thus a substantial portion of the work would likely have been conducted even without the NEPDG project taking place.

We provided DOE, Interior, and EPA with an opportunity to review and comment on a draft of this report. Representatives from each of these three agencies reviewed the report and chose not to provide written comments. Interior and EPA provided several technical clarifications orally, which we incorporated, as appropriate, into the final report. We also provided OVP with an opportunity to review and comment on our draft report, but the office did not avail itself of the opportunity.

We conducted our review from May 2001 through July 2003. We plan no further distribution of this report until August 25. On that date, we will send copies of this report to interested congressional committees. 
This report is also available on GAO's home page at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix I.

In addition to the individuals named above, Doreen Feldman, Lynn Gibson, Richard Johnson, Bob Lilly, Jonathan S. McMurray, Susan Poling, Susan Sawtelle, Amy Webbink, and Jim Wells made key contributions to this report.
On January 29, 2001, the President established the National Energy Policy Development Group (NEPDG)--a group of cabinet-level and other senior administration officials, chaired by the Vice President--to gather information, deliberate, and recommend a national energy policy. The group presented its final report to the President in May 2001. GAO was asked to (1) describe the process used by the NEPDG to develop the National Energy Policy report, including whom the group met with and what topics were discussed and (2) determine the costs associated with that process.

Although appointed NEPDG Chair, the Vice President elected not to respond to GAO's request for certain factual NEPDG information. Accordingly, as authorized by GAO's access-to-records statute, and after exhausting efforts to achieve a resolution and following the processes specified in that statute, GAO filed suit in U.S. District Court to obtain the information. The district court later dismissed GAO's suit on jurisdictional grounds, without reaching the merits of GAO's right to audit and evaluate NEPDG activities or to obtain access to NEPDG records. For a variety of reasons, GAO decided not to appeal the district court decision. DOE, Interior, and EPA reviewed the draft report and chose not to comment. OVP declined an offer to review the draft and comment.

According to the best information that GAO could obtain, the National Energy Policy report was the product of a centralized, top-down, short-term, and labor-intensive process that involved the efforts of several hundred federal employees governmentwide. 
In the 3½ months between the inception of NEPDG and its presentation of the final report, the Principals (the Vice President, selected cabinet-level and other senior administration officials) and their support staff (Support Group) controlled most facets of the report's development, including setting meeting schedules and agendas, controlling the workflow, distributing work assignments, rewriting chapters, and approving recommendations. Senior agency officials served on a select interagency Working Group, while the majority of agency staff working on the NEPDG effort played a tributary role, helping their agencies fulfill their NEPDG-related obligations and responding to the Support Group's subsequent requests for information, review, or comment. In developing the National Energy Policy report, the NEPDG Principals, Support Group, and participating agency officials and staff met with, solicited input from, or received information and advice from nonfederal energy stakeholders, principally petroleum, coal, nuclear, natural gas, and electricity industry representatives and lobbyists. The extent to which submissions from any of these stakeholders were solicited, influenced policy deliberations, or were incorporated into the final report cannot be determined based on the limited information made available to GAO. NEPDG met and conducted its work in two distinct phases: the first phase culminated in a March 19, 2001, briefing to the President on challenges relating to energy supply and the resulting economic impact; the second phase ended with the May 16, 2001, presentation of the final report to the President. The Office of the Vice President's (OVP) unwillingness to provide the NEPDG records or other related information precluded GAO from fully achieving its objectives and substantially limited GAO's ability to comprehensively analyze the NEPDG process. 
None of the key federal entities involved in the NEPDG effort provided GAO with a complete accounting of the costs that they incurred during the development of the National Energy Policy report. The two federal entities responsible for funding the NEPDG effort--OVP and the Department of Energy (DOE)--did not provide the comprehensive cost information that GAO requested. OVP provided GAO with 77 pages of information, two-thirds of which contained no cost information while the remaining one-third contained some miscellaneous information of little to no usefulness. OVP stated that it would not provide any additional information. DOE, the Department of the Interior, and the Environmental Protection Agency (EPA) provided GAO with estimates of certain costs and salaries associated with the NEPDG effort, but these estimates, all calculated in different ways, were not comprehensive.
The military services and VA have medical requirements that servicemembers must meet when leaving the military and applying for VA disability compensation. These requirements include a medical assessment; a service-specific separation exam, which is given to some servicemembers; and a VA compensation and pension (C&P) exam. The single separation exam program is designed to provide a single physical exam that can be used to meet the physical exam requirements of the military services and VA.

In response to a 1994 memorandum from the Assistant Secretary of Defense for Health Affairs, all of the military services require a medical assessment of all servicemembers leaving the military, including those that retire or complete their tour of active duty. This assessment, which is used to evaluate and document the health of these servicemembers, consists of a standard two-page questionnaire asking servicemembers about their overall health, medical and dental histories, current medications, and other health-related topics. (See app. II for DOD’s medical assessment form—DD Form 2697.) Military medical personnel, who could include a physician, a physician’s assistant, or a nurse practitioner, are required to review the questionnaire with the servicemember. If the questionnaire indicates the presence of an illness, injury, or other medical problem, the reviewer is required to ensure that the servicemember’s medical or dental records document the problem. In addition, depending on the servicemember’s responses or based on the reviewer’s judgment that additional information is needed, the health assessment could result in a physical exam—one focused on a particular health issue or issues in order to supplement information disclosed on the questionnaire. Furthermore, the medical assessment asks if the servicemember intends to file a claim for disability with VA. 
Servicemembers who answer “yes” on the assessment form will be given a clinically appropriate assessment or exam if their last active duty physical exam is more than 12 months old or if new symptoms have appeared since the last active duty exam. In addition, the Army, Navy, Air Force, and Marines require some of their servicemembers to undergo separation exams when they leave the military. Separation exams consist of a clinical evaluation by a medical provider and could include various diagnostic tests, such as a urinalysis, a hearing test, and a vision test. Separation exams, as well as other physical exams the military services conduct, are documented on a three-page standard DOD form. (See app. III for DOD’s report of medical examination—DD Form 2808.) According to DOD, the average cost for a physical exam given by the military services is about $125, exclusive of any diagnostic tests that may also be conducted.

The requirements determining which servicemembers must receive separation exams vary by military service and other factors. The Army requires that its retirees receive separation exams, although the Army does not usually require this for servicemembers who are completing their tours of active duty. The other military services do not require separation exams for most servicemembers, except for those whose last active duty physical exam or assessment is out of date. (See table 1 for each military service’s medical evaluation requirements.) Further, all of the military services also require separation exams for certain occupational specialties. For example, the military services require separation exams for servicemembers who have worked with hazardous materials. Finally, any servicemember can request and receive a separation exam. Requirements for separation exams may be affected by planned changes to physical exam requirements for active duty servicemembers. 
The Army and Navy plan to change their physical exam requirements for servicemembers during active duty—replacing routine physical exams with periodic health assessments, thereby moving closer to the Air Force’s requirements for active duty servicemembers. In September 2003, the Armed Forces Epidemiology Board (AFEB) issued a report that concluded that annual health assessments, as currently administered by the Air Force to active duty servicemembers, should replace routine physical exams. According to their Surgeon General representatives, the Army and the Navy intend to change their regulations relating to periodic physical exams and to adopt the recommendations offered by the AFEB by 2005. This shift in requirements is in line with recommendations of the U.S. Preventive Services Task Force and many other medical organizations, which no longer advocate routine physical exams for adults—recommending instead a more selective approach to detecting and preventing health problems. Some servicemembers who leave the military file for VA disability benefits, which could include priority access to VA health care as well as monthly payments for disabilities, diseases, or injuries incurred or aggravated during active military service. VA requires evidence of military service to confirm eligibility for these benefits, and the department uses the C&P exam to establish a disability rating, which helps determine the amount of compensation a veteran receives. Veterans retain the option of initiating claims at any time after leaving the military, even if they did not state their intention to do so on the medical assessment form completed when they left military service. A VA C&P exam is a physical exam used to determine a veteran’s degree of disability in support of claims for service-connected disability compensation. 
The exam obtains information on the veteran’s medical history and includes diagnostic and clinical tests, the scope of which depends on what disabilities the veteran claims. For example, if a veteran claims a disability for a knee injury, VA would require a comprehensive orthopedic exam to determine the percent of movement that has been lost due to the knee injury. Veterans may claim multiple disabilities—all of which must be evaluated for disability rating purposes. In general, VA’s C&P exam is more comprehensive and detailed than the military services’ separation exams, as military service exams are intended to document continued fitness for duty, whereas the purpose of the VA C&P exam is to document disability or loss of function regardless of its impact on fitness for duty. VA physicians who conduct the C&P exam must evaluate the extent of a veteran’s physical limitations and determine their impact on the veteran’s future employment for compensation purposes. VA physicians usually conduct C&P exams at VA Medical Centers, although since 1996 VA has had authority to use civilian physicians to provide C&P exams at 10 VA regional offices. In addition, VA physicians may provide C&P exams at some military medical facilities. According to VA officials, the average cost of VA’s C&P exam, exclusive of any diagnostic tests, is about $400 when conducted by either VA or by VA’s contractor.

In 1994, the Army and VA jointly initiated a pilot program for single separation exams at three Army installations. Each of the installations used a different approach when implementing the exam. At Fort Hood, Texas, a VA physician performed single separation exams at the Army’s military treatment facility. At Fort Knox, Kentucky, a sequential approach was used in which Army personnel performed some preliminary work, such as lab tests and optical exams, for servicemembers at the installation. 
Servicemembers were then transported to a local VA medical center, where VA physicians completed the single separation exams. At Fort Lewis, Washington, an Army physician performed the single separation exams at the military installation. The 1997 report on the pilot programs concluded that all of the approaches for single separation exams were successful and that, overall, they eliminated redundant physical exams and medical procedures, decreased resource expenditures, increased the timeliness of VA’s disability rating decisions, and improved servicemembers’ satisfaction. The report also recommended that single separation exam programs be expanded to include all military services.

Based on the findings of the single separation exam pilot, VA’s Under Secretary for Health and DOD’s Acting Assistant Secretary of Defense for Health Affairs signed an MOU in 1998 directing local VA offices and military medical facilities to negotiate and implement individual MOUs for single separation exam programs. According to the MOU, VA and the military services should optimize available resources, including the use of both military and VA facilities and staff as appropriate. For example, because a servicemember applying for VA benefits would receive a single physical exam that meets VA C&P exam requirements—which are usually more extensive than the military services’ separation exam requirements—the MOU envisioned that VA medical personnel would perform most of the single separation exams. It also stated that the military services would provide VA with servicemembers’ medical records and lab and test results from active duty in order to avoid duplicative testing. Finally, the MOU acknowledged that in implementing single separation exam programs, negotiations between local VA and military officials would be necessary, because military installations and local VA offices and hospitals face resource limitations and competing mission priorities. 
These local-level negotiations would be documented in individual MOUs. To implement the 1998 MOU, both VA and DOD issued department-specific guidance. In January 1998, both VA’s Under Secretary for Health and Under Secretary for Benefits distributed guidelines to VA regional offices and medical centers about completing the single separation exams in cooperation with the military services. In September 1998, DOD’s Assistant Secretary of Defense for Health Affairs issued a policy to the Assistant Secretaries for the Army, Navy, and Air Force stating that servicemembers who leave the military and intend to file a claim for VA disability benefits should undergo a single physical exam for the military services and VA.

Since 1998, VA and the military services have collaborated to establish single separation exam programs using various approaches to deliver the exams, including those used in the original pilot program. However, while we were able to verify that the exams were being delivered at some installations, DOD, its military services, and VA either could not provide information or provided us with inaccurate information on program sites. Although VA reported that 28 of 139 Benefits Delivery at Discharge (BDD) sites had programs in place as of May 2004, we found that 4 of the 8 sites we evaluated from VA’s list did not actually have a program in place. Nonetheless, VA and DOD leadership continue to encourage the establishment of single separation exam programs and have drafted a new MOA that contains a specific implementation goal to have programs in place at all of the BDD sites by December 31, 2004—an ambitious goal given the seemingly low rate of program implementation since 1998 and the lack of accurate information on existing programs.

VA reported that as of May 2004, 28 of the 139 BDD sites had operating single separation exam programs. 
At these sites, VA officials told us, local VA and military officials have implemented the program using one of five approaches that met both the military services’ and VA’s requirements without duplication of effort. Three of the five approaches were developed during the 1994 pilot program—(1) military physicians providing the exams at military treatment facilities, (2) VA physicians providing the exams at military treatment facilities, and (3) a sequential approach wherein VA and the military service shared the responsibility of conducting consecutive components of a physical exam. In addition, VA officials reported a fourth approach that was being used, in which VA physicians delivered the single separation exam at VA hospitals, and a fifth approach, in which VA used a civilian contractor to deliver the exams. We evaluated the operation of the single separation exam programs at four of the military installations VA reported as having collectively conducted over 1,400 exams in 2003. These installations were conducting single separation exams using two of the approaches—either with VA’s contractor conducting the physical exam or as a sequential approach. (See table 2.) Overall, VA and military officials told us that both approaches worked in places where military officials and VA officials collaborated well together. At two Army installations—Fort Stewart and Fort Eustis—we found that VA used its civilian contractor to conduct C&P exams, which the Army then used to meet its separation exam requirements for servicemembers leaving the military. At the Fort Drum Army installation and Naval Station Mayport, local VA and military service officials collaborated to implement a sequential approach. At Fort Drum, the Army starts the single separation exam process by conducting hearing, vision, and other diagnostic testing. A VA physician subsequently completes the actual physical exam at the installation, which is then incorporated in the servicemember’s medical record. 
At Naval Station Mayport, a Navy corpsman starts the sequential process by reviewing the servicemember’s medical history, initiating appropriate paperwork, and scheduling the servicemember for an appointment with a VA physician. The VA physician then conducts a VA C&P exam at the installation and completes the paperwork to meet the Navy’s separation requirements.

DOD and its military services do not adequately monitor where single separation exam programs have been established. DOD does not maintain servicewide information on the locations where single separation exam programs are operating. While the Army and the Air Force each provided a list of installations where officials claimed single separation exam programs were established, both lists included installations that we verified as not having a program in place. A Navy official told us that although the Navy attempted to identify the locations of single separation exam programs, its information was not accurate. In addition, while VA maintains a list of single separation exam programs, this list was not up to date. At our request, VA attempted to update its list and reported to us that in May 2004, 28 military installations with BDD programs also had single separation exam programs. At these sites, VA reported that over 11,000 single separation exams had been conducted in 2003. However, when we evaluated programs at 8 of these installations, we found that 4 of the installations did not actually have programs in place. (See table 3.) At these four military installations, the 2,075 exams reported as single separation exams were actually VA C&P exams that were used only by VA and not by the military services. We obtained the following information about these installations.

At Fort Lee, local Army and VA officials told us that a single separation exam program was in place prior to our site visit. 
However, during a joint discussion with us, they realized that the local MOU, which was signed in April 2001, was not being followed and that the single separation exam program was no longer in operation. Nonetheless, local VA officials responsible for reporting on the program were unaware that the program was no longer operational.

At Little Rock Air Force Base, we found that a single separation exam program was not in place even though there was an MOU, which local VA officials told us was signed in May 1998. During our initial discussions, local VA officials told us that the program was in operation. However, as they responded to VA headquarters’ inquiry to update their list of installations with single separation exam programs for us, local officials realized that the program was not in operation and had never existed despite the signed MOU. Nonetheless, this site was still included on the updated list of installations that VA provided to us.

At Pope Air Force Base, local military officials told us that no single separation exam program was in place. Furthermore, a local VA official said that no MOU had been signed for the program at this installation. However, despite this, local VA officials mistakenly believed that installation officials were using the VA C&P exams to meet their separation requirements and that, as a result, single separation exams were being provided.

Finally, at Marine Camp Lejeune, local military and VA officials told us that no single separation exams were being conducted even though there was an MOU, which was signed in 2001. When we met with the installation’s hospital commander, he told us that the hospital was not participating in the single separation exam program, and he was unaware of the existence of the MOU for this program. 
We also met with military officials at the Hadnot Branch Clinic, the installation’s busiest clinic in terms of separation physicals, and at the time of our review, this clinic was also not participating in the single separation exam program. Furthermore, local VA officials told us that they realized that the program was not in operation at the time of our visit—even though it was included on the list that VA updated for us. We also identified another military installation that had a single separation exam program—even though it was not included in VA’s list of installations with these programs. Regional VA officials told us—and we confirmed—that an MOU for a single separation exam program had been implemented at MacDill Air Force Base, Florida. At this installation, local military officials reported that 516 single separation exams were conducted in 2003. According to local VA and military officials, this installation employs a sequential approach wherein VA uses medical information from Air Force health assessments as well as any diagnostic tests that may have been conducted in conjunction with them to help complete C&P exams for servicemembers applying for VA disability compensation. As part of an overarching effort to streamline servicemembers’ transition from active duty to veterans’ status, VA and DOD continue to encourage the establishment of single separation exam programs and have drafted a national MOA, which is intended to supersede the 1998 MOU. Unlike the original MOU, the draft MOA contains a specific implementation goal—that VA and the military services establish single separation exam programs at each of the installations with BDD programs by December 31, 2004. The draft MOA also provides more detail about how the military services and VA will share servicemembers’ medical information to eliminate duplication of effort. 
For example, the MOA states that the military services will share the medical assessment forms along with any completed medical exam reports and pertinent medical test results with VA. Similarly, the MOA specifies that when VA conducts its C&P exam of servicemembers before they leave the military, this information should be documented in servicemembers’ military medical records. According to VA officials, the draft MOA extends the eligibility period for servicemembers to participate in the program by eliminating the previous requirement that servicemembers had to have a minimum number of days—usually 60—remaining on active duty. As a result, servicemembers may participate in the program when they have 180 days or less remaining on active duty. Aside from some specific additions, the general guidance in the draft MOA is consistent with the 1998 MOU. For example, the draft MOA delegates responsibility for establishing single separation exam programs to local VA and military installations, based on the medical resources—including physicians, laboratory facilities, examination rooms, and support staff— available to conduct the exams and perform any additional testing. The MOA also continues to provide flexibility that allows local officials to determine how the exams will be delivered—by VA, by VA’s contractor, or by DOD. According to VA, the draft MOA is expected to be signed by DOD’s Under Secretary of Defense for Personnel and Readiness and the Deputy Secretary of VA in November 2004. In contrast, the 1998 MOU was signed at lower levels of leadership within each department—DOD’s Acting Assistant Secretary of Defense for Health Affairs, who reports to the Under Secretary of Defense for Personnel and Readiness, and VA’s Under Secretary for Health, who reports to the Deputy Secretary of VA. 
Both VA and DOD officials told us that endorsement for the new draft MOA from higher-level leadership within the departments should facilitate the establishment of single separation exam programs. However, it will be difficult to determine where the program needs to be implemented without accurate program information with which to oversee and monitor these efforts—a critical deficiency in light of the MOA’s ambitious goal to establish the program at all BDD sites by December 31, 2004, and given the seemingly low rate of implementation at the 139 BDD sites. Several challenges impede the establishment of single separation exam programs. The primary challenge is that the military services do not usually require servicemembers to undergo a separation exam before leaving the military. In fiscal year 2003, the military services administered separation exams to an estimated one-eighth of servicemembers who left the military. Consequently, although individual servicemembers may benefit from single separation exams, the military services may not realize benefits from resource savings through eliminating or sharing responsibility for the separation exams. Another challenge to establishing these programs is that some military officials told us that they need their resources, such as space and medical personnel, for other priorities, including ensuring the health of active duty servicemembers. Furthermore, VA officials told us that because single separation exam programs require coordination between personnel from both VA and the military services, existing programs can be difficult to maintain because of routine rotations of military staff to different installations. Despite increased convenience for individual servicemembers, the military services may not benefit from single separation exam programs—designed to eliminate the need for two separate exams—because the military services usually do not require servicemembers who are leaving the military to have separation exams. 
In fiscal year 2003, the military services administered separation exams to an estimated 23,000, or one-eighth, of the servicemembers who left the military that fiscal year. However, this estimate may undercount the number of servicemembers who received separation exams. (See fig. 1.) Because the military services do not usually require separation exams, it is unlikely that servicemembers will receive physical exams from both the military and VA. At two Army installations without single separation exam programs, we found that relatively few servicemembers had received both a C&P exam from VA and a separation exam from the Army. From June 2002 through May 2004, 810 servicemembers received a VA C&P exam at Fort Gordon, and of these, 121 soldiers—about 15 percent—had also received a separation exam from the Army. Similarly, during June 2003 through May 2004, 874 servicemembers received a VA C&P exam at Fort Bragg, and of these only 38—about 4 percent—had also received a separation exam from the Army. Because the Army is the only military service to require separation exams for all retirees, we expected that the Army’s servicemembers were more likely than those of the other military services to receive two physical exams. However, the small percentage of servicemembers who received both VA C&P exams and Army separation exams at these two installations suggests that the potential for resource savings by having single separation exams is likely small. In addition, some Air Force officials told us that they did not see a need to participate in single separation exam programs because of their health assessment requirements. For example, at Little Rock Air Force Base, officials told us that because the Air Force does not routinely require separation physicals for most servicemembers, it was not practical to use VA’s C&P physicals as single separation exams. 
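The dual-exam rates cited for Fort Gordon and Fort Bragg follow directly from the reported counts; a minimal arithmetic sketch (installation names and figures are taken from this report):

```python
# Counts reported in this section: servicemembers who received a VA C&P exam,
# and the subset who also received an Army separation exam.
installations = {
    "Fort Gordon": (810, 121),  # June 2002 through May 2004
    "Fort Bragg": (874, 38),    # June 2003 through May 2004
}

for name, (cp_exams, dual_exams) in installations.items():
    share = 100 * dual_exams / cp_exams
    print(f"{name}: {share:.0f}% received both exams")
# Fort Gordon: 15% received both exams
# Fort Bragg: 4% received both exams
```

The small shares are consistent with the report's conclusion that potential resource savings from consolidating the two exams are likely modest.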
The officials explained that VA’s C&P exams obtain more information than needed to meet the Air Force’s health assessment requirement and that using VA’s exam as a single separation exam would not be an efficient use of resources. The officials said that it would take military medical personnel too much time to review the VA C&P exams to identify the information the Air Force required. Similarly, officials at other Air Force installations we visited—Hurlburt Field, Langley Air Force Base, and Eglin Air Force Base—agreed that they would not benefit from a single separation exam program. However, we did find one Air Force installation—MacDill Air Force Base—where a single separation exam program was operational, demonstrating the feasibility of Air Force installations participating in single separation exam programs. Some military officials told us that they use their installations’ resources for priorities other than establishing single separation exam programs. Although the 1998 MOU encouraged the establishment of these programs for servicemembers leaving the military and filing VA disability claims, some local military officials told us that their installations did not currently have these programs because they decided to use available resources to support other efforts, such as conducting wartime training and ensuring that active duty servicemembers are healthy enough to perform their duties. For example, when we visited Fort Bragg, we learned that the commander had initially agreed to provide space at his installation for a single separation exam program. However, the same space was committed to more than one function, and when the final allocation decision was made, other mission needs took priority. 
In addition, Nebraska VA officials told us that an existing single separation exam program was eliminated at Offutt Air Force Base because military medical personnel assigned to help VA physicians administer the exams were needed to focus on the health of active duty servicemembers at the installation. In addition, military officials explained that administering single separation exams that include VA’s C&P protocols is more time-intensive for their staff and can involve more testing than the military’s separation exams. As a result, military officials are reluctant to assign resources, including facilities and staff, to this effort. Further, military officials explained that expending time and resources to train military physicians to administer single separation exams is not worthwhile because these physicians periodically rotate to other locations to fulfill their active duty responsibilities, so other military physicians would have to be trained as replacements. Because single separation exam programs require coordination between personnel from both VA and the military services, staff changes or turnover can make it difficult to maintain existing programs. For example, during our visit to the Army’s Fort Lee, we found that the installation’s single separation exam program had stopped operating because of staff turnover. When the program was in operation, a sequential approach was used in which Army personnel conducted the initial part of the exams, which included medical history and diagnostic testing, and then shared servicemembers’ medical records with VA personnel at the VA hospital, where the single separation exams were completed. According to VA and Army officials, after the Army personnel changed, the installation no longer provided VA with the medical records. Further, VA officials told us that maintaining joint VA and DOD programs—such as single separation exam programs—is challenged by the fact that military staff, including commanders, frequently rotate. 
According to VA officials, some commanders do not want to continue agreements made by their predecessors, so single separation exam programs must be renegotiated when the commands change. However, VA officials told us that the new draft MOA should help alleviate this challenge to program establishment because it states that local agreements between military medical facilities and VA regional offices will continue to be honored when leadership on either side changes. Since 1998, VA and DOD’s military services have attempted to establish single separation exam programs in order to prevent duplication and streamline the process for servicemembers who are leaving the military and intend to file a disability claim with VA. However, according to VA, fewer than 30 out of 139 military installations with BDD programs had single separation exam programs as of May 2004. To encourage more widespread program establishment, the departments have drafted a new national MOA with the goal of having programs in place at all BDD sites by December 31, 2004. Expanding the single separation exam program to all BDD sites will allow more servicemembers to benefit from its convenience. Yet, given the seemingly low rate of program implementation since 1998 and the challenges we identified in establishing and maintaining the program, it is unlikely that the programs will be established at about 100 more sites less than 2 months after the MOA becomes effective. Consequently, both departments will need to monitor program implementation to ensure that the new MOA is put into practice—especially since local agreements for single separation exam programs have not always resulted in the establishment and operation of such programs. 
To determine where single separation exam programs are established and operating, we recommend that the Secretary of VA and the Secretary of Defense develop systems to monitor and track the progress of VA regional offices and military installations in implementing these programs at BDD sites. We requested comments on a draft of this report from VA and DOD. Both agencies provided written comments that are reprinted in appendices IV and V. VA and DOD concurred with the report’s findings and recommendation. DOD also provided technical comments that we incorporated where appropriate. In commenting on this draft, VA stated that it has actions underway or planned that meet the intent of our recommendation. First, it has established an inspection process of BDD sites to determine compliance with procedures. In addition, VA noted that it has worked with DOD to revise the MOA for single separation exam programs and that it has instructed its regional offices to begin working with military treatment facilities to implement its provisions. Finally, VA said that VA’s and DOD’s joint strategic plan for fiscal year 2005 will include substantive performance measures to monitor the process of moving from active duty to veteran status through a streamlined benefits delivery process. In its written comments, DOD recognized the importance of a shared DOD and VA separation process and its benefits to servicemembers and noted that both departments are working on an MOA to further encourage single separation exams. DOD also stated that the capability to monitor and track the progress of single separation exams has been hampered by the lack of a shared VA and DOD information technology system. However, DOD reported that VA is developing automated reporting tools and will be conducting on-site visits to BDD sites, and VA and DOD will share information gathered from this system and the site visits. 
We are sending copies of this report to the Secretary of Defense, the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7119. Other contacts and staff acknowledgments are listed in appendix VI. To identify efforts by the Department of Veterans Affairs (VA) and the military services to establish single separation exam programs for servicemembers who plan to file VA disability claims, we reviewed pertinent legislation and obtained VA’s requirements for compensation and pension (C&P) exams. We also obtained service-specific requirements for periodic physical exams and health assessments and evaluations, especially those requirements pertaining to separating and retiring servicemembers. We obtained and reviewed relevant documentation about both departments’ efforts to establish single separation exam programs. We also interviewed officials from the office of the Assistant Secretary of Defense for Health Affairs, the military services’ Surgeons General, and VA. In addition, we obtained VA’s data on the number of disability claims and the cost data associated with conducting military physical exams and VA C&P exams. Based on our review of these data and subsequent discussions with agency officials, we determined that these data were sufficiently reliable for the purposes of this report. We obtained a list of 28 military installations that VA officials had identified as having single separation exam programs through a survey of their Benefits Delivery at Discharge (BDD) sites. We used this list to select 8 installations to learn how their programs operated. 
We did not verify whether the remaining 20 installations had single separation exam programs because such verification would have required a full evaluation of actual program operations at these locations. We also did not verify the number of installations with BDD sites or the numbers of single separation exams VA reported for these military installations. We selected installations that represented each of VA’s reported approaches for operating the single exam program—VA physicians conducting the exam at military installations, VA physicians conducting the exam at VA medical centers, Department of Defense (DOD) physicians conducting the exam, VA and DOD using a sequential approach for the exam, and VA’s civilian contractors delivering the exam. The installations we selected represented each of the four branches of the military service—Army, Navy, Air Force, and Marines—and all but one had more than 500 servicemembers leave in fiscal year 2003. We obtained the separation data from the Defense Manpower Data Center’s (DMDC) Active Duty Military Personnel file on the number of servicemembers who left the military from various separation locations during fiscal year 2003. To assess the reliability of these data, we conducted logic tests to identify inconsistencies, reviewed existing information about the data and the system that produced them, and interviewed an agency official who was knowledgeable about the data. We determined the data to be sufficiently reliable for the purposes of this report. From VA’s list we visited seven military installations—Marine Corps Base Camp Lejeune, North Carolina; Fort Eustis, Virginia; Fort Lee, Virginia; Fort Stewart, Georgia; Little Rock Air Force Base, Arkansas; Naval Station Mayport, Florida; and Pope Air Force Base, North Carolina. We also conducted telephone interviews with medical command and VA officials associated with Fort Drum, New York. 
Further, we conducted a telephone interview with military and VA officials from MacDill Air Force Base, Florida, which has a single separation exam program but was not on VA’s list. At the installations we visited or contacted, we spoke with medical command officials and with VA officials responsible for the single separation exam program to discuss the different types of local agreements and procedures used for delivering single separation exams. We also reviewed the draft memorandum of agreement (MOA) related to single separation exam programs and interviewed officials from VA, the Office of the Assistant Secretary of Defense for Health Affairs, and the services’ Surgeons General to obtain information on VA and DOD officials’ efforts to draft and implement this MOA. To obtain information on the challenges associated with establishing single separation exam programs, we identified and visited military installations that did not have single separation exam programs. We used DMDC’s separation data for fiscal year 2003 to identify installations representing each of the military services—Army, Navy, Air Force, and Marines—that had more than 500 separations and were not reported by VA as having a single separation exam program. We also visited installations that were located in the same VA regions as installations we visited that VA had reported as having single separation exam programs. The seven military installations we visited were Marine Corps Air Station Cherry Point, North Carolina; Eglin Air Force Base, Florida; Fort Bragg, North Carolina; Fort Gordon, Georgia; Hurlburt Field, Florida; Langley Air Force Base, Virginia; and Naval Station Norfolk, Virginia. At these installations, we interviewed medical command officials and VA officials to learn whether single separation exam programs had been considered and what the challenges were to establishing them. 
For the two Army installations included in these seven selected installations—Fort Bragg, North Carolina, and Fort Gordon, Georgia—we obtained both the separation exam data and C&P exam data for each installation to determine how many separating servicemembers from each installation received both an Army separation exam and a VA C&P exam. We chose Army installations for this analysis because duplicate service and C&P exams were more likely to occur due to the Army’s requirement that retirees receive a physical exam. After our review of the documentation and subsequent discussions with agency officials, we concluded that these data were sufficiently reliable for the purposes of this report. We also reviewed DOD’s separation exam data and discussed it with an agency official. Based on this information, we concluded that these data were sufficiently reliable for the purposes of this report, although they may understate the number of separation exams because some may have been identified more generally as physical exams. To obtain additional information on the challenges to establishing single separation exam programs, we called or visited VA regional offices in 16 locations—Arkansas, California (three regions), Georgia, Florida, Kentucky, Nebraska, New York, North Carolina, Oklahoma, South Carolina, Texas (two regions), Virginia, and Washington—and talked with officials responsible for initiating and implementing these programs. We selected six of these regional offices because they were already involved in establishing single separation exam programs at the eight military installations we selected from VA’s list. We asked these officials about the challenges they encountered when trying to establish these programs at other installations in their regions. We also interviewed officials from the three VA regional offices involved in the pilot program for single separation exams. 
We talked with officials from seven additional regional offices that had responsibility for military installations with more than 500 separations during fiscal year 2003 to determine how they established programs in their regions and problems they encountered when programs could not be established. We performed our work from January 2004 through November 2004 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the VA November 1, 2004, letter.

1. We used VA’s May 2004 updated list to select our sites, and we found that it contained information that was both incomplete and inaccurate. The list included installations where we did not find single separation exam programs. It also omitted one installation where we found a single separation exam program.

2. We agree that individual servicemembers will benefit from single separation exam programs and have added information to the body of the report to reflect this.

3. We modified this statement as follows: “In general, VA’s C&P exam is more comprehensive and detailed than the military services’ separation exams, as military service exams are intended to document continued fitness for duty, whereas the purpose of the VA C&P exam is to document disability or loss of function regardless of its impact on fitness for duty.”

4. Although VA believed the C&P exam was being used for separation purposes at Pope Air Force Base, it was not. As we reported, VA and DOD had not signed an MOU for a single separation exam program at this installation, and the Air Force was clear that it was not using the C&P exam for separation purposes.

5. While Camp Lejeune’s Hadnot Branch Clinic may currently be conducting single separation exams, at the time of our visit in June 2004, the physician at the Hadnot Clinic told us he was not using VA’s C&P exams for servicemembers’ separation exams. In September 2004, we confirmed this information with the clinic physician. 
In addition to those named above, key contributors to this report were Krister Friday, Cywandra King, Raj Premakumar, Allan Richardson, and Julianna Williams.
Servicemembers who leave the military and file disability claims with the Department of Veterans Affairs (VA) may be subject to potentially duplicative physical exams in order to meet requirements of both the Department of Defense's (DOD) military services and VA. To streamline the process for these servicemembers, the military services and VA have attempted to coordinate their physical exam requirements by developing a single separation exam program. In 1998, VA and DOD signed a memorandum of understanding (MOU) instructing local units to establish single separation exam programs. This report examines (1) VA's and the military services' efforts to establish single separation exam programs, and (2) the challenges to establishing single separation exam programs. To obtain this information, GAO interviewed VA and military service officials about establishing the program; evaluated existing programs at selected military installations; and visited selected installations that did not have programs. Since 1998, VA and the military services have collaborated to establish single separation exam programs. However, while we were able to verify that the program was being delivered at some military installations, DOD, its military services, and VA either could not provide information on program locations or provided us with inaccurate information. As of May 2004, VA reported that 28 military installations had single separation exam programs that used one of five basic approaches to deliver an exam that met both VA's and the military services' requirements. However, when we evaluated 8 of the 28 installations, we found that 4 of the installations did not actually have programs in place. Nonetheless, VA and DOD leadership continue to encourage the establishment of single separation exam programs and have recently drafted a new memorandum of agreement (MOA) that is intended to replace the 1998 MOU. 
Like the original MOU, the draft MOA delegates responsibility for establishing single separation exam programs to local VA and military installations, depending on available resources. However, the draft MOA also contains a specific implementation goal that selected military installations should have single separation exam programs in place by December 31, 2004. This would require implementation at 139 installations--an ambitious plan given the seemingly low rate of program implementation since 1998 and the lack of accurate information on existing programs. Several challenges impede the establishment of single separation exam programs. The predominant challenge is that the military services may not benefit from a program designed to eliminate the need for two separate physical exams because they usually do not require that servicemembers receive a separation exam. As of August 2004, only the Army had a general separation exam requirement for retiring servicemembers. The other military services primarily require separation exams when the servicemember's last physical exam or medical assessment received during active duty is no longer considered current. In fiscal year 2003, only an estimated 13 percent of servicemembers who left the military received a separation exam. Consequently, the military services may not realize resource savings by eliminating or sharing responsibility for this exam. According to some military officials, another challenge to establishing single separation exam programs is that resources, such as facility space and medical personnel, are needed for other priorities, such as ensuring that active duty servicemembers are healthy enough to perform their duties. Additionally, because single separation exam programs require coordination between personnel from both VA and the military services, military staff changes, including those due to routine rotations, can make it difficult to maintain existing programs.
ICE is the largest investigative arm of DHS. ICE is composed of four offices: (1) Investigations, (2) Intelligence, (3) Detention and Removal Operations (DRO), and (4) the Federal Protective Service. As of September 2005, OI had more than 5,600 special agents; about 94 percent of these are assigned to 26 major field offices, headed by Special Agents-in-Charge (SAC), and OI’s foreign attaché offices. These offices and their subordinate units were created using the immigration and customs staff and locations in existence at the time ICE was formed. At ICE headquarters, OI is divided into five divisions as shown in figure 1. Three of the five divisions—National Security, Finance and Trade, and Smuggling and Public Safety—were created to incorporate the core missions and functions of legacy immigration and customs investigations. These divisions and the units within them are to provide a functional line of communication from the Director of OI to the groups in the SAC offices that conduct investigations. Divisions and units within OI headquarters also develop and manage special programs that are implemented in multiple field offices. For example, Project Shield America is a National Security Division program where OI conducts outreach to private sector companies to prevent the illegal export of sensitive U.S. munitions and strategic technology. The Cornerstone program in the Finance and Trade Division is a similar outreach program to the financial industry. Operation Community Shield is a national law enforcement initiative that is designed to bring all of ICE’s immigration and customs-related law enforcement powers to bear in the fight against violent street gangs. The Investigative Services Division provides direct forensic, undercover, and other operational support to OI investigations carried out by the three core divisions, and the Mission Support Division provides policy guidance and services to facilitate executive oversight. 
(Figure 1 division descriptions include: oversees programs designed to identify, disrupt, and dismantle organizations; works with the financial community to identify and eliminate potential vulnerabilities in the nation’s financial infrastructure.) The headquarters and field organizational structures adopted by OI reflect the legacy functions of the customs and immigration services—e.g., drug investigations, human smuggling, and commercial fraud—and include activities to prevent terrorism within this structure. In April 2005, ICE completed an interim strategic plan that established as its mission to prevent terrorist attacks within the United States and reduce the vulnerability of the United States to terrorism while ensuring all of its mandated trade, immigration, and federal protective functions are not diminished. According to ICE officials, the national security objectives are not accomplished through any particular type or category of investigation. Instead, these objectives are addressed by examining investigations on a case-by-case basis and determining the relationship of any single case to national security. For example, although OI has the authority to investigate any employer that might have violated laws that regulate alien employment eligibility, OI instructs investigators to focus on employers at critical infrastructure sites. When ICE was created, it retained responsibility for enforcing the customs and immigration laws that were the purview of its legacy agencies. These include criminal statutes addressing the illegal import and export of drugs, weapons, child pornography, stolen antiquities, and other contraband, as well as alien smuggling, human trafficking, and the international laundering and smuggling of criminal proceeds. 
OI also is responsible for legacy customs enforcement of certain intellectual property and trade-related commercial fraud statutes and for legacy immigration enforcement of laws prohibiting document fraud, benefit fraud, illegal entry into the United States or violations of the terms and conditions of entry, and employment without authorization. OI's field structure was created by merging the existing Customs and INS field offices, located primarily in cities near major ports of entry. In addition, ICE relied on the strategic priorities of the legacy agencies—for example, high-volume smuggling corridors, proximity to state and federal prisons, and significant money laundering infrastructure—to determine the composition and locations of SAC offices. OI also continues to perform some long-standing functions of the legacy agencies, which drive some of the types of investigative activities it conducts. For example, OI has continued the legacy Customs practice of responding to violations concerning seized drugs or merchandise or detained persons uncovered at ports of entry by Customs and Border Protection (CBP) inspectors. U.S. Customs had historically been involved in helping to implement the President's National Drug Control Strategy, and consistent with this involvement, DHS now receives funding specifically to support activities related to the strategy. A senior OI official said OI will continue to be responsible for a significant level of drug investigations because there simply is no other agency available to conduct the large number of border-related drug investigations that U.S. Customs historically performed and that are now carried out by OI. Another carryover function is the legacy INS practice of identifying aliens incarcerated in prisons and jails who are eligible for removal from the United States.
Between 10 and 15 percent of investigative hours were classified by OI as having a direct nexus to national security. Although there is no firm standard for how OI should distribute its investigative resources, ICE's interim strategic goals and objectives place a strong emphasis on national security-related activities. According to OI, the majority of the national security-related investigative hours were charged in a few case categories related to munitions control, illegal exports, compliance enforcement of visa violations, and terrorism. Investigative hours in the case types that consumed roughly half of OI resources—drugs, financial crimes, and general alien violations—were rarely classified as having a direct nexus to national security. In its fiscal year 2007 budget justification, DHS requested funds to increase the level of resources dedicated to visa compliance enforcement by more than 40 percent through the addition of over 50 special agent and support staff positions dedicated to these types of investigations. Roughly half of OI investigative resources during fiscal year 2004 and the first half of fiscal year 2005 were used for cases related to drugs, financial crimes, and general alien violations. The resource use in the other case categories pertains to investigations of a variety of customs and immigration violations, including commercial fraud, general smuggling, human smuggling and trafficking, identity fraud, document fraud, and worksite enforcement. None of the investigative categories that apply to these violations individually accounted for more than 8 percent of investigative resource use during the period under study; in most instances these other case categories accounted for 5 percent or less of resource use.
Moreover, with regard to general alien investigations, the equivalent of about 400 OI investigators performed, as a central part of their daily duties, functions that are noninvestigative in nature (i.e., not consistent with the position description of a criminal investigator as defined by the Office of Personnel Management). According to OI officials, some of these noninvestigative activities were formerly performed by legacy INS investigators and include identifying incarcerated criminal aliens who are eligible for removal, an ICE responsibility, and responding to state and local police agencies that have apprehended illegal aliens. According to ICE's interim strategic plan, ICE plans to shift this duty to DRO. A DRO official told us DRO planned to take over this role from OI incrementally, first assuming responsibility for this activity in several major metropolitan areas in 2005 and 2006. OI investigators also perform worksite enforcement, which, according to the OI Deputy Assistant Director responsible for this function, includes activities that might be more economically performed by noninvestigatory staff. This function—verifying that employees at critical and noncritical worksites are eligible to work in the United States—was described by OI officials as a compliance function that is not clearly aligned with the criminal investigator job description. Since the late 1990s, the level of investigative resources that legacy INS and then ICE dedicated to this function has decreased. Since the terrorist attacks of September 11, 2001, INS and ICE have concentrated worksite investigative resources at critical infrastructure facilities. In its fiscal year 2007 budget justification, DHS requested funds to support the addition of 206 positions—171 of which are special agents—to conduct worksite enforcement.
If these resources are approved and used for worksite enforcement, they would increase OI's worksite enforcement effort significantly compared with what was done in fiscal year 2005. The fiscal year 2006 Department of Homeland Security Appropriations Conference Report directs ICE to submit a plan for the expanded use of immigration enforcement agents to focus on civil and administrative violations, raising the possibility that additional noninvestigative duties may be shifted away from OI investigators, making them available for criminal investigations. To ensure that its resources contribute to preventing the exploitation of systemic vulnerabilities in customs and immigration systems, OI makes most investigative resource use decisions in its major field offices, based on the judgment of the agents in charge, with priority on investigating national security-related cases that arise. We found no evidence that OI has failed to investigate any national security-related lead that came to its attention. However, applying a risk management approach to proactively determine what types of customs and immigration violations represent the greatest risks for exploitation by terrorists and other criminals could provide OI with greater assurance that it is focusing most intensely on preventing the violations with the greatest potential for harm while striking an appropriate balance among its various objectives. According to the Standards for Internal Control in the Federal Government, one of the foundational components of a good control environment is risk assessment—including the identification of risks, estimation of their significance and the likelihood of their occurrence, and decisions about how to respond to them. OI has taken some initial steps to introduce principles of risk management into its operations—for example, encouraging its field managers to think about violations in terms of vulnerabilities to the customs and immigration systems.
In addition, OI classifies each investigation using the numeric designations 1, 2, and 3, with class 1 indicating the highest relative importance within that category of investigation. However, OI has not conducted a comprehensive risk assessment of the customs and immigration systems to determine the greatest risks for exploitation, nor has it analyzed these data to evaluate alternative investigative strategies and make risk-based resource allocation decisions. Such a system could provide OI with greater assurance that it is striking an appropriate balance among its various objectives while focusing most intensely on preventing the violations with the greatest potential for harm. Application of a risk management approach by OI involves a risk assessment that would provide information in three areas: (1) threat—what strategic intelligence and experience suggest about how customs and immigration systems might be exploited by terrorists and other criminals; (2) vulnerability—the ways that customs and immigration systems are open to exploitation and the kinds of protections that are built into these systems; and (3) consequence—the potential results of exploitation of these systems, including the most dire prospects. For example, ICE's strategic goal to prevent the unlawful movement of people, money, and materials across U.S. borders includes as one of its strategies giving highest priority to closing those vulnerabilities that pose the greatest threat to national security. However, OI has not performed a risk assessment to determine which vulnerabilities pose the greatest threat so that it can direct resources to the investigations that best address them. Figure 2 demonstrates how the risk assessment and investigators' judgment would combine to inform case selection and resource allocation. ICE has begun to incorporate elements of risk management into its resource allocation decision making.
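The three-part risk assessment described above—threat, vulnerability, and consequence—can be illustrated with a minimal scoring sketch. Everything here is hypothetical: the case category names, the 1-to-5 scores, and the multiplicative combination are illustrative assumptions, not a method prescribed by ICE, OI, or GAO.

```python
# Hypothetical sketch: ranking investigative case categories by a
# composite risk score. The scores (1-5) and the multiplicative
# combination are invented for illustration only.

case_categories = {
    # category: (threat, vulnerability, consequence)
    "illegal exports":  (5, 3, 5),
    "visa violations":  (4, 4, 4),
    "drug smuggling":   (3, 3, 3),
    "commercial fraud": (2, 3, 2),
}

def risk_score(threat, vulnerability, consequence):
    """Composite risk as the product of the three assessment areas."""
    return threat * vulnerability * consequence

# Rank categories from highest to lowest composite risk.
ranked = sorted(case_categories.items(),
                key=lambda kv: risk_score(*kv[1]),
                reverse=True)

for category, scores in ranked:
    print(f"{category}: {risk_score(*scores)}")
```

In practice an assessment like this would feed, alongside investigators' judgment, into decisions about which case categories and classifications receive resources; the point of the sketch is only that the three areas combine into a comparable measure across categories.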
OI has several ongoing programs within its National Security Division designed to identify and mitigate national security threats. One is Project Shield America, under which special agents conduct outreach to the export industry to educate businesses about U.S. export laws and to solicit their assistance in preventing the illegal foreign acquisition of their products. OI also uses the Threat Analysis Unit and Compliance Enforcement Unit within the National Security Division to screen nonimmigrant students, exchange students, and other visitors to identify potential national security threats. The value of risk management goes beyond these types of resource allocation decisions, however. Specifically, a more comprehensive risk management approach would enable OI to better ensure that its resources are effectively and efficiently applied to its national security and other missions by giving it a foundation for determining how resources might be best distributed within and across investigation types—for example, (1) how to best allocate its resources among case categories (e.g., visa violations, drug smuggling, and financial crimes); (2) the appropriate level of investment in national security-related investigations; and (3) the appropriate mix of case classifications within each category (i.e., the three-level classification of cases based on relative importance). Effective risk management also requires outcome-based performance measures and goals. We found that OI lacks outcome-based performance goals to monitor the full range of its efforts to prevent the exploitation of the systemic vulnerabilities that allow terrorists and other criminals to endanger the United States. Performance goals—consisting of a target (an acceptable level of performance) and a measure (a means to assess the performance level)—are an essential management tool in managing programs for results.
In addition, our Standards for Internal Control in the Federal Government and the Office of Management and Budget call for agencies to have performance measures and indicators that are linked to mission, goals, and objectives to allow comparisons among different sets of data (for example, desired performance against actual performance) so that corrective actions can be taken if necessary. Currently, OI relies primarily on statistics related to investigative resource use—such as arrests, seizures, and convictions—to monitor performance. In fact, ICE reports only one output performance measure for OI in the DHS Performance and Accountability Report: the percentage of investigations that result in an enforcement action (e.g., an arrest, conviction, or fine). This percentage provides only an indirect indicator of success in preventing the systemic vulnerabilities that allow terrorists and other criminals to endanger the United States. Among other things, it cannot reflect the successes of OI's programmatic activities that are designed to deter the exploitation of systemic vulnerabilities before a crime is committed—for example, the outcomes of actions taken to close or control identified vulnerabilities. Without outcome-based performance goals, it is difficult for OI to gauge the effectiveness of its operational activities and to use this information to assess what types of corrective actions might be required, such as changes to programs or work processes to better align activities with strategic objectives. Finally, OI does not have sufficient systems to help ensure ongoing monitoring and communication of vulnerabilities discovered during its investigations. Such controls could enhance OI's ability to take action to eliminate those vulnerabilities or to recommend mitigation practices to the entities that control the applicable customs or immigration system.
Standards for Internal Control in the Federal Government calls for agencies to establish monitoring and communication systems that assess the quality of performance over time and ensure that findings of deficiencies are corrected and result in improvements to the process. OI officials said they are trying to use Cornerstone—a program to identify and reduce systemic vulnerabilities in financial systems—as a model for creating such a feedback loop (see fig. 3). Cornerstone was created by ICE to encourage coordination with the financial industry. OI officials in headquarters and field offices conduct outreach to the private sector and partner with private industry as well as with state and other federal law enforcement and regulatory agencies. The private sector provides ICE with information regarding the vulnerabilities it has observed, and ICE uses this information to develop criminal investigations. ICE also disseminates information on vulnerabilities to financial sector stakeholders through the Cornerstone Report. When vulnerabilities are identified that cannot be addressed by the private sector alone, ICE officials told us, a joint law enforcement and regulatory approach is used to eliminate or minimize them. With the exception of the Cornerstone program for financial investigations, OI does not have a complete system in place to help ensure that information gained during the course of investigations feeds back into the operations of other DHS components, other federal agencies, state and local partners, and relevant private sector entities to proactively reduce the vulnerabilities that facilitate violations. OI has taken initial steps to apply parts of the Cornerstone approach to all its investigative areas. For example, Project Shield America uses the same outreach techniques with the export sector as Cornerstone does with the financial sector, but without the emphasis on changing policies and practices to reduce identified vulnerabilities.
However, OI officials told us that OI does not have a process to help ensure that action is taken across all SAC offices to mitigate the risks from vulnerabilities identified during the course of its investigations. A systemwide process for capturing this information and ensuring that OI takes appropriate action in response, extending beyond financial crimes, would better support OI's ability to reduce vulnerabilities in immigration and customs systems by allowing it to monitor the progress of efforts to reduce vulnerabilities and to identify those involved in these efforts. Such a process is especially important for OI because so many of its operations are collaborative, and the vulnerabilities identified through its investigations may require legal or policy changes that are controlled by external stakeholders. Although OI, as the primary investigative agency of the Department of Homeland Security, states that it places priority on national security, from a practical standpoint it is focused on enforcing all laws and regulations governing the customs and immigration systems. Before the creation of DHS, these efforts, carried out by legacy INS and the U.S. Customs Service, had a limited relation to national security—and even since becoming part of DHS, cases considered to be directly related to national security have demanded a relatively small portion of OI's resources. Particularly considering its wide-ranging mission, a more comprehensive risk management approach could provide OI with better information to evaluate its alternatives and balance its resource allocations most effectively across the broad array of violations it is responsible for investigating.
Although OI has applied some of the principles of risk management to its operations, a comprehensive risk management approach would provide a stronger evidence-based foundation to help ensure that its resource allocation best supports its ability to prevent exploitation of the systemic vulnerabilities with the most potential to endanger the United States. Specifically, a more comprehensive risk management approach would enable OI to better ensure that its resources are effectively and efficiently applied to its national security and other missions by giving it a foundation for determining how resources might be best distributed within and across investigation types—for example, (1) how to best allocate its resources among case categories (e.g., visa violations, drug smuggling, and financial crimes), (2) the appropriate level of investment in national security-related investigations, and (3) the appropriate mix of case classifications within each category (i.e., the three-level classification of cases based on relative importance). Lacking OI-wide outcome-based performance goals to assess its ability to prevent the exploitation of systemic vulnerabilities in customs and immigration systems that allow terrorists and other criminals to endanger the United States makes it difficult for OI to evaluate the results of its efforts in light of that objective. In addition, this lack may promote a tendency for OI to stay in the functional mindset of its legacy agencies. In particular, using data such as the number of arrests, fines, drug and other seizures, prosecutions, and convictions gives OI some ability to assess the outputs of its activities. However, relying primarily on this type of performance data may make it more difficult for OI to determine whether it should alter its investigative focus, because favorable outputs (e.g., high numbers of arrests) tend to reinforce the current focus whether or not it is helping accomplish the ICE mission.
Without outcome-based performance goals that are tied to ICE's mission and objectives, the agency will lack a sufficient basis for assessing which alignment of resources might offer the greatest contribution to this broad mission. Developing measures that can meaningfully gauge performance related to an expansive deterrence mission like ICE's is not an easy task. However, armed with information about the relative risks to the customs and immigration systems, OI could be in a better position to measure its performance and make resource use decisions based on the potential to mitigate the most crucial identified risks. Finally, a critical part of the ICE mission is to reduce the vulnerability of the United States to terrorism. OI's Cornerstone program and efforts to extend this approach to other investigative areas are intended to reduce vulnerabilities by feeding lessons learned from criminal investigations back into the organization's systems and practices. However, these efforts do not include sufficient monitoring and communication systems to ensure that information is systematically fed back and that it consistently results in corrective actions. A feedback process with clearly established lines of reporting and authority and documented protocols would help ensure that vulnerabilities OI uncovers during its investigations result in mitigation measures, or in recommendations for such measures to the entities responsible for the applicable system, and would thereby enhance OI's ability to reduce vulnerabilities in customs and immigration systems.
To put OI in a better position to allocate its investigative resources in a manner that maximizes their contribution to the achievement of ICE's mission, we recommended that the Secretary of Homeland Security direct the Assistant Secretary of ICE to take the following three actions: (1) conduct comprehensive threat, vulnerability, and consequence risk assessments of the customs and immigration systems to identify the types of violations with the highest probability of occurrence and most significant consequences, in order to guide resource allocation for OI national programmatic activity and to expand the information on which SACs base their decisions to open new cases; (2) on the basis of the results of the risk assessment, develop outcome-based performance goals (measures and targets) that reflect the contribution of various investigative activities to ICE's mission and objectives, and develop a reliable method for tracking national security-related activity and classification criteria for the case management system that express the contributions of each investigation; and (3) develop an OI-wide system to monitor and communicate the more significant vulnerabilities in customs and immigration systems that are identified during the course of OI investigations; this process should include a method to mitigate each vulnerability internally or to ensure that the vulnerability and associated mitigation recommendations are communicated to external stakeholders with responsibility for the applicable system. In response to our first recommendation, DHS agreed that risk management is a valuable tool for establishing priorities in a multiple-threat environment and said ICE intends to take a broader, component-wide approach to assessing risk. DHS agreed that ICE Office of Investigations resource decisions should be based on priorities derived from a strategic-planning process in which directors and unit managers from all ICE OI program areas, including mission support, participate.
DHS said priorities set forth in the strategic plan should be reviewed annually, revised as necessary, and communicated to each SAC. While DHS agreed with our second recommendation, it said that ICE needs to maintain the flexibility to develop performance goals that reflect its mission and may not necessarily be measurable in an outcome-based manner. DHS said the Office of Management and Budget has acknowledged that for certain activities (e.g., law enforcement) "outcome-oriented" performance measures may be difficult to identify and performance may be tracked using a variety of output as well as qualitative measures. DHS said each division within OI uses standard law enforcement statistics covering all of its program units that can be shared, understood, and compared over the years, including arrests, indictments, and convictions, broken out by category. We agree that developing outcome-based performance measures for law enforcement activities can be difficult and that some output measures can be beneficial. However, we continue to believe that, where possible, OI should seek to develop outcome-based performance measures that would better demonstrate the value of its efforts. OI needs to allocate resources to the types of investigations that have the best chance of mitigating potential vulnerabilities of the customs and immigration systems to terrorism. With regard to our third recommendation, DHS said that OI headquarters program managers regularly communicate with the SAC offices to obtain feedback on significant cases and identified vulnerabilities. This information is documented in reports transmitted twice a day to both the OI and DHS leadership. A weekly report also summarizes the significant cases of the week. DHS said that OI has established designated liaisons to both U.S. Citizenship and Immigration Services and CBP, and these liaisons communicate specific vulnerabilities and threats.
While these efforts are useful, our recommendation envisions a more comprehensive strategy to identify and mitigate vulnerabilities in customs and immigration systems and processes. We are encouraged that OI intends to continue to expand such outreach and partnership efforts. In implementing our recommendation, we believe that OI should obtain and use feedback from all relevant governmental and nongovernmental organizations in its efforts to mitigate potential vulnerabilities. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Richard Stana at 202-512-8777. Other key contributors to this statement were Michael Dino and Tony DeFrank. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Immigration and Customs Enforcement's (ICE) mission is to prevent terrorist attacks within the United States and reduce the vulnerability of the United States to terrorism while ensuring its mandated customs, immigration, and federal protective enforcement functions are not diminished. The ICE Office of Investigations (OI) supports that mission by investigating customs and immigration violations. This testimony addresses the following key questions that were answered in GAO-06-48SU, a restricted report issued with the same title: (1) What structure and activities has OI adopted to address its mission? (2) In fiscal year 2004 and the first half of fiscal year 2005, how did OI use its investigative resources to achieve its goals? (3) How does OI ensure that its resource use contributes to its ability to prevent the exploitation of systemic vulnerabilities in customs and immigration systems? OI's organizational structure and investigative activities reflect those of its legacy agencies--the U.S. Customs Service and the Immigration and Naturalization Service--and include activities to prevent terrorism. OI retained responsibility for enforcing customs and immigration laws, and its field structure was created by relying on the strategic priorities of its legacy agencies to determine the composition and locations of field offices. Senior OI officials said that OI seeks to accomplish its homeland security mission by focusing on cases that seem to have a connection to national security. Data from ICE's case management system indicate that its investigative activities generally relate to legacy missions, with about half of OI resources during fiscal year 2004 and the first half of fiscal year 2005 used for cases related to drugs, financial crimes, and general alien investigations--investigations unlikely to contain a nexus to national security. Overall, between 10 and 15 percent of investigative resources were used for investigations considered to have a link to national security.
OI's current method of tracking these cases captures data about the cases where a nexus to national security is assumed due to the nature of the violation, primarily investigations of munitions control, illegal exports, visa violations, and terrorism. Additionally, the equivalent of about 400 of its 5,600 special agents worked full time to identify incarcerated aliens who were eligible for removal from the United States, a function that does not require the skills and training of criminal investigators. ICE plans to free investigators for more appropriate duties by shifting these functions to other ICE units and to study whether other functions could be shifted to employees in a noninvestigatory job series. To make resource use decisions in pursuit of OI's goal to prevent the exploitation of systemic vulnerabilities in customs and immigration systems, OI primarily relies on the judgment of staff in its major field offices, in addition to national programs developed in headquarters that are implemented in multiple field offices. Although GAO found no evidence that OI has failed to investigate any national security-related lead that came to its attention, applying a risk management approach to determine what types of customs and immigration violations represent the greatest risks for exploitation by terrorists and other criminals could provide OI with greater assurance that it is focusing on preventing violations with the greatest potential for harm, while striking a balance among its various objectives. OI has taken some initial steps to introduce principles of risk management into its operations, but has not conducted a comprehensive risk assessment of the customs and immigration systems to determine the greatest risks for exploitation, nor has OI analyzed all relevant data to inform the evaluation of alternatives and allow risk-based resource allocation decisions. 
OI also lacks outcome-based performance goals that relate to its objective of preventing the exploitation of these systemic vulnerabilities. Finally, OI does not have sufficient systems to help ensure ongoing monitoring and communication of vulnerabilities discovered during its investigations.
|
The Congress passed PRWORA in 1996, making sweeping changes to national welfare policy. The act replaced the Aid to Families with Dependent Children (AFDC) program with TANF block grants, a fixed federal funding stream that provides states with a total of $16.5 billion per year over 5 years and allows states the flexibility to design their own programs and strategies for promoting work and self-sufficiency. TANF imposes strong work requirements on recipients and limits to 60 the number of months that families can receive federally funded TANF benefits. The number of families receiving cash assistance has declined dramatically in recent years. More than 5 million families received cash assistance in 1994, but as the economy improved and TANF work enforcement gathered steam, fewer families received assistance. Caseloads have fallen dramatically since the act went into effect, from 4.4 million families in August 1996 to 2.2 million families in June 2000. Caseload declines slowed toward the end of 1999 and in a few states caseloads rose slightly, but the most recent data available from HHS indicate that, nationally, caseloads continue to decline. Although there are no supporting data, many assume that as caseloads have fallen, the composition of the caseload has changed. Specifically, some have speculated that those TANF recipients who could easily find and keep jobs have left the rolls, and that hard-to-employ recipients—those with characteristics that interfere with employment—comprise an increasing share of the remaining cash assistance recipients. As a result, there is some concern that state programs that may have been effective at moving easier-to-employ recipients into the workforce may not meet the needs of those remaining on the rolls. Under PRWORA, TANF recipients face stronger work requirements than welfare recipients faced in the past.
Unless a state opts out, nonexempt adult recipients who are not working must participate in community service employment 2 months after they start receiving benefits. Adults are required to participate in work activities within 2 years after they start receiving assistance under the block grant. If recipients do not comply with the work activity requirement, states must impose sanctions that reduce cash assistance and may opt to terminate the grant entirely. PRWORA allows states to exempt some TANF recipients from the work activity requirement while they continue to receive benefits. States may exempt parents with children under age 1 from work requirements and may disregard them in calculating participation rates. States may not penalize parents with children under age 6 for not working if child care is not available. Recipients who are exempted by the state from work requirements are still subject to the federal 60-month limit on receipt of benefits and may be included in a 20-percent hardship exemption from the time limits. All states allow recipients to meet their work activity requirement through paid employment. Many states have an earned income disregard that allows recipients to earn some amount without losing cash assistance, and many states raised the amount of this disregard after implementation of TANF to give recipients additional incentives to work. In states with a higher earnings disregard, more employed persons may still be receiving TANF benefits—and thus remain part of the caseload—than in states with lower earnings disregards, where working recipients are removed from the rolls at lower earnings levels. To avoid financial penalties, states must ensure that at least a specified minimum percentage of adult recipients are participating each year in work or work-related activities that count for federal participation rate purposes.
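The caseload effect of the earned income disregard described above can be sketched numerically. This is a minimal illustration, not an actual state formula: the maximum grant amount, the disregard values, and the dollar-for-dollar benefit reduction above the disregard are hypothetical assumptions, and real state rules vary considerably.

```python
# Hypothetical sketch of how an earned income disregard affects whether a
# working recipient remains on the TANF caseload. The $500 maximum grant,
# the disregard amounts, and the dollar-for-dollar reduction of the grant
# by countable earnings are invented for illustration; real state
# formulas differ.

def monthly_benefit(earnings, max_grant=500, disregard=200):
    """Benefit after subtracting countable earnings (earnings above the
    disregard) from the maximum grant; never below zero."""
    countable = max(0, earnings - disregard)
    return max(0, max_grant - countable)

# The same recipient, earning $600 a month, under two disregard levels:
low_disregard = monthly_benefit(600, disregard=100)
high_disregard = monthly_benefit(600, disregard=300)
print(low_disregard)   # benefit exhausted: the recipient leaves the rolls
print(high_disregard)  # positive benefit: the recipient stays on the caseload
```

Under these assumed numbers, the same earnings leave the recipient off the rolls in the low-disregard state but still counted in the caseload in the high-disregard state, which is the comparison the passage above describes.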
These types of activities are not necessarily identical to the activities the state allows recipients to engage in to maintain eligibility for benefits. As specified in PRWORA, work activities that count for federal participation rate purposes include employment (subsidized or unsubsidized), work experience programs, on-the-job training, and community service. Providing child care for TANF recipients engaged in work or work-preparation activities also counts for federal participation rate purposes. Job search counts as a work activity for up to 6 weeks (or 12 weeks in high-unemployment areas). Certain types of education and training that are directly related to employment also count, as does attendance at secondary school or in a course of study leading to a certificate of general equivalence for a recipient who has not completed secondary school or received such a certificate. The required number of hours of work participation for TANF recipients and the percentage of a state’s caseload that must participate to meet mandated participation rate requirements were designed to increase over time (see table 1). Under PRWORA, families with an adult TANF recipient cannot receive federally funded assistance for more than 60 months. However, the law allows states to impose a shorter time limit or to provide assistance beyond 60 months using state funds. The law also allows states to exempt families with adult TANF recipients from the 60-month limit on federal assistance based on hardship or if the family includes a victim of domestic violence. If a state determines that a family meets one of these criteria, it can opt to continue the family’s federally funded cash assistance beyond 60 months. However, a state can exempt no more than 20 percent of its TANF caseload at any point in time from the federal 60-month limit. 
Federal laws and regulations give states flexibility in determining the types of programs they offer to TANF recipients and how TANF block grant and state funds can be used. States also have the flexibility to set a wide range of program rules such as eligibility criteria, earned income disregard amounts, and benefit levels. Most programmatic decisions are also left to states. For example, states can choose to administer the TANF program themselves or devolve responsibility for many of the program management decisions to county or local TANF offices. States can deliver employment-related services to TANF recipients through their workforce development or employment service agencies or through their TANF agencies. With few restrictions, states can also determine the typical pathway for most new applicants. For example, in designing their TANF programs, states have the flexibility to determine whether new applicants initially will be referred to education, training, and work-preparation activities, or whether they first will be encouraged to enter the workforce as quickly as possible. Many states have chosen the latter approach, often referred to as Work First, in part because of the emphasis PRWORA places on work. Under a Work First approach, recipients are encouraged to find jobs as quickly as possible and most recipients are placed in job search—an activity that involves looking for a job—rather than in education or other work preparation activities. Only recipients who fail to find jobs, or who find jobs only to lose them immediately, are referred to education, training, or other types of activities. States also have a great deal of flexibility in determining exactly what types of education, training, or other work-preparation activities, as well as what types of support services, they provide with their TANF funds. In addition to TANF funds, states can also access U.S. 
Department of Labor Welfare-to-Work (WtW) grants to operate community service or work experience programs, create jobs through wage subsidies, provide on-the-job training, or provide employment placement and job readiness services. WtW funds may also be used for a wide range of support services, including education and training, child care, short-term housing, and transportation assistance for those placed in work activities. Although PRWORA's emphasis on devolution of program authority to states and other levels of government calls for state and local governments to take responsibility for program results and outcomes, PRWORA gives the federal government some program oversight responsibilities. Under the law, HHS is responsible for administering TANF funding, setting reporting requirements for states, and reviewing state TANF plans. HHS is also responsible for conducting research on the benefits and effects of the TANF program, and receives funding for welfare reform and social services research and evaluation studies. In keeping with this, HHS' stated goals for research supported with these funds are to gain knowledge about, and thereby improve, welfare policy and practice and to ensure that knowledge gained is widely disseminated in formats accessible to policymakers and program administrators at all levels. Section 413 of the Social Security Act specifically directs HHS to develop “. . . innovative methods of disseminating information on any research, evaluations, . . . including the facilitation of the sharing of information and best practices among States and localities through the use of computers and other technologies.” HHS also has created several mechanisms to educate states about the broad latitude granted them by PRWORA and its implementing regulations for assisting TANF recipients. 
HHS can issue a variety of publications and communiqués and sponsor national and regional conferences that can include discussions of the law and regulations, possible sources and uses of state and federal funds, and creative state strategies. The Welfare Peer Technical Assistance Network, funded by HHS, was created to provide a variety of services and products to states, counties, localities, and community-based organizations that work with TANF recipients. Even though an increasing percentage of TANF recipients nationwide are combining welfare and work, most recipients are not engaged in work or work activities as defined by PRWORA. At least in part, this may be because many current recipients have characteristics that make it difficult for them to work, according to data from national surveys and several studies, as well as from officials in the six states we visited. Characteristics that impede employment are fairly common among TANF recipients, although there are few data available to assess whether recipients with such characteristics represent a greater share of the caseload than previously. To avoid financial penalties, states must ensure each year that at least a specified minimum percentage of adults receiving TANF are participating in work or work-related activities that count for federal participation rate purposes. So far, all states have been able to meet the required minimum work participation rate for their TANF caseload in general, in part because states receive credit for caseload declines, which reduces the required participation rate they must achieve. In fiscal year 1999, 42 percent of all TANF recipients were engaged in unsubsidized employment or participated for at least some hours in other work activities, such as job search, that count for federal participation rate purposes. However, not all participated for enough hours to have that activity count toward their state's participation rate. 
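The caseload reduction credit mentioned above can be illustrated with a simplified calculation. This sketch reduces the statutory rate by one percentage point for each percentage point of caseload decline since the base year; the actual credit also excludes declines attributable to eligibility changes.

```python
def effective_required_rate(statutory_rate, caseload_base, caseload_now):
    """Simplified caseload reduction credit: the statutory work
    participation rate is reduced by the percentage-point decline
    in the caseload since the base year, floored at zero.
    (Illustrative model, not the full regulatory calculation.)"""
    decline = max(0.0, (caseload_base - caseload_now) / caseload_base * 100)
    return max(0.0, statutory_rate - decline)

# A state whose caseload fell 35 percent (e.g., 100,000 to 65,000
# families) against a 35 percent statutory rate would owe an
# effective required rate of zero:
effective_required_rate(35, 100_000, 65_000)  # 0.0
```

This interaction is one reason all states have been able to meet the minimum rates despite modest participation levels: large caseload declines have driven many states' effective required rates well below the statutory figures.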
The proportion of TANF recipients nationwide who were engaged in unsubsidized employment increased during the past few years. According to our analysis of HHS data, the percentage of recipients who were engaged in unsubsidized employment increased from 17 percent in fiscal year 1997 to 25 percent (or 400,000 recipients) in fiscal year 1999. In each of the states that provided us with data on their caseload characteristics, fewer than half of the recipients who received TANF assistance during March 2000 were employed at that time. The percentage of the caseload that was employed in each of these states in March 2000 ranged from just under 40 percent in Connecticut, Washington, and Michigan; to 13 to 15 percent in Florida, Wisconsin, Oregon, and New York; to 6 percent in Maryland. In the last few years, the percentage of TANF recipients participating in work activities other than unsubsidized employment that count for federal participation rate purposes has been quite small. Between fiscal years 1997 and 1999, no more than 5 percent of the caseload each year participated in any one of the following activities: subsidized work experience, job search, or education-related activities. Studies have shown that having certain characteristics, such as poor health or disability, no high school diploma, limited work experience, exposure to domestic violence, substance abuse, and limited English proficiency, makes finding and keeping a job more difficult. Based on data from its 1997 NSAF, the Urban Institute concluded that the greater the number of these characteristics a TANF recipient has, the less likely that recipient is to be engaged in work or work activities. The survey found that 88 percent of recipients who had none of these characteristics were working or engaging in work-related activities, compared to 59 percent of recipients with one of these characteristics and 27 percent of recipients with three or more (see fig. 1). 
Officials in all six of the states we visited agreed that recipients with one or more work-impeding characteristics find it hardest to successfully enter the workforce—and are often referred to as hard-to-employ recipients. However, states have found that while having these characteristics makes employment difficult, it does not make employment impossible. Some recipients who have characteristics that make it difficult to work do, in fact, find jobs. Studies have found that a considerable percentage of TANF recipients have characteristics that make it difficult for them to work. Table 2 identifies the range of estimates a number of studies provide on the prevalence of some of these characteristics in the welfare population. For example, estimates from the studies reviewed of the proportion of the welfare caseload with health problems or disability range from 20 to 40 percent, and the proportion of the caseload with no high school diploma from 30 to 45 percent. Information from the states we visited is consistent with the studies’ data. Officials in the states we visited indicated that many recipients have poor mental or physical health, have substance abuse problems, or were victims of domestic violence. Some officials noted that the actual extent of these characteristics can be hard to determine because most states and localities rely on recipients to disclose this information about themselves to their case managers, which they are often reluctant to do. State and local officials in the six states we visited shared the opinion that a larger proportion of the current caseload has difficulty obtaining employment in comparison to past caseloads, but none had data to demonstrate this. The small amount of data available to compare TANF caseloads over time does not show statistically significant changes in the characteristics of welfare recipients since welfare reform; however, these data do not include measures of many of the characteristics that impede employment. 
Officials in the states we visited reported that while caseloads fell rapidly between 1995 and 1999, the declines slowed between 1999 and 2000; they attributed this slowing to the changing composition of the caseload. They reported that new recipients who are the most job-ready leave the welfare rolls relatively quickly, leaving behind recipients who have greater difficulty obtaining and retaining jobs. Some officials not only believed that the caseload had become harder to employ, but also speculated that hard-to-employ TANF recipients are more visible now because, in the past, there was no incentive to determine whether recipients had characteristics that made them difficult to employ. Among the states we visited, only Washington had collected statewide data to determine whether its caseload had become harder to employ, and they are of limited use. The state measured changes between August 1997 and February 2000 in the percentages of recipients who had each of four characteristics it found to limit a recipient's employment: less than a high school education, limited English-speaking ability, a young child, and no recent work experience. While Washington found that the percentage of the caseload with the first three of these characteristics increased slightly over this period, the proportion of the caseload with no recent work experience and with two or more of these characteristics decreased. Data on key characteristics such as poor health, mental illness, substance abuse, exposure to domestic violence, poor basic work skills, or other characteristics that make it difficult to work were not available, so results from this study are limited. There are no national data available to track changes in many of the characteristics found to impede employment over time among welfare recipients. 
The CPS does measure characteristics such as age, race, marital status, citizenship status, school attendance, and educational achievement of welfare recipients nationwide, as well as their receipt of disability payments, but our analyses of these data show no significant changes since welfare reform. (See app. II for a list of these studies.) Every state we visited implemented a Work First approach that emphasizes employment over training and education to help TANF recipients obtain jobs; however, to varying degrees, all states have modified or enhanced their approach to better serve recipients for whom the Work First approach is not successful because they have characteristics that may impede employment. The states we visited differ markedly in their approach to identifying recipients who have these characteristics so that they can either be exempted from work requirements or provided with targeted programs and services that would help them obtain employment. Some states and localities require TANF recipients to look for a job and only offer enhanced services to those who are unsuccessful, while others begin by screening and assessing new applicants to identify those who have characteristics that might impede their ability to get a job. The strategies states use to assist those recipients identified as hard-to-employ also vary. Some of the states we visited have focused their efforts on improving and expanding case management, while others have programs and services targeted specifically to prepare hard-to-employ recipients for work. All six of the states we visited also refer recipients to programs run by non-TANF agencies and organizations that help recipients deal with specific problems such as substance abuse and mental illness that may affect their ability to get and keep a job. Every state we visited implemented a TANF program that can be characterized as Work First and, as a result, their TANF programs share a few common elements. 
All of the programs seek to move people from welfare into unsubsidized jobs as quickly as possible. Officials expressed the belief that the best way to succeed in the labor market is to join it, and the best setting in which to develop successful work habits and skills is on the job. PRWORA requires that all TANF recipients determined ready to work by the state either work or participate in a work activity as a condition for receiving benefits. Most recipients are referred directly to job search—an activity that involves looking for a job. Employment is presented as the goal of the TANF program for all recipients, and job search, rather than activities such as education and training, is considered the most expedient strategy for helping recipients, in general, obtain jobs. As their experience working with hard-to-employ recipients has grown, however, the states and localities we visited have concluded that the Work First approach is not effective in helping certain types of TANF recipients get and retain jobs. States have found that recipients who have certain characteristics that interfere with employment need more time and additional supports and services to adequately prepare for work. In response, all of the states we visited said that they had made changes to their TANF programs, but they differed markedly in the ways in which they had modified or enhanced their programs to better accommodate the needs of hard-to-employ recipients. One of the key issues confronted by states and localities we visited in implementing strategies to meet the needs of hard-to-employ recipients is identifying this population. The states and localities we visited use two distinct approaches. Some, such as Michigan and Butte County, California, require most recipients to look for employment, or “test the job market,” immediately after applying for assistance, and do only minimal initial screening to determine whether applicants should be exempt from work requirements. 
Only recipients who are unsuccessful in finding a job within a certain time period are reassessed and either exempted or referred to targeted services and programs to address work-impeding characteristics. Other states, including Connecticut and Maryland, reported that they screen and assess all new applicants. Based on this screening and assessment, some recipients are exempted from work requirements. Those who are not exempt but who have characteristics that impede employment can be referred for targeted services to prepare them for work. According to officials in the states we visited, this latter group represents a sizable share of their total TANF caseload. No one approach has been proven more effective than another for moving recipients who have characteristics that might impede employment into jobs. Some of the states and localities we visited rely primarily on the job market to identify recipients who have characteristics that impede employment. Officials in these states pointed out that many such recipients nevertheless have successfully found and kept jobs through the usual Work First process. They stated that all recipients should have the opportunity to test the labor market before being referred for additional services. According to officials in these states, this approach has two distinct advantages: first, by allowing recipients to test the job market, the state does not prejudge or label recipients as hard-to-employ when they may, in fact, be able to obtain jobs; and second, this strategy sends a clear message that TANF is temporary and that employment is the immediate goal. The states and localities we visited that use the job market to identify hard-to-employ recipients do some very limited up-front screening to identify recipients who clearly meet the exemption criteria, but send most new applicants directly to job search after they apply for TANF assistance. 
Case managers in Michigan, for example, ask a few questions during the application process to identify recipients who are clearly exempt from work requirements, such as those who are victims of domestic violence, but do not use a standardized questionnaire to screen new applicants. Recipients are not further assessed or referred to job preparation activities other than job search unless they fail to find a job within 30 days. Similarly, in Butte County, California, recipients must look for a job for 4 weeks before they can be referred to job preparation activities beyond those that are part of the usual Work First process in that county. Some of the states and localities we visited have developed strategies designed to identify hard-to-employ recipients soon after they apply for benefits so that they can be referred to appropriate programs before they attempt to find jobs. Officials from states and localities that use this approach argue that by identifying these recipients early, agencies can more appropriately focus resources and time on activities and services that hard-to-employ recipients need in order to become employed. Strategies ranged from conducting an in-depth assessment of every new applicant to developing a series of increasingly detailed assessments for recipients who cannot find employment quickly. Some of these programs are described below: Each local TANF agency in Maryland has developed an assessment instrument to identify hard-to-employ recipients soon after they apply for benefits. In Frederick County, a team of three trained clinicians—including a case manager, a child support enforcement worker, and a social worker—conducts a thorough assessment of each recipient soon after she submits an application. The Washington TANF agency has developed a computerized questionnaire for screening all new applicants. 
Applicants are asked a series of general questions on more than a dozen topics ranging from child care and transportation to legal issues, domestic violence, and substance abuse. A response that indicates a possible problem in any area prompts the caseworker to ask a series of more detailed questions on that topic. Recipients found to be hard-to-employ are referred directly to programs and services that address their special needs or, if they are found to have what the state deems a major issue—such as family violence, some specific health problems, homelessness, substance abuse, or pregnancy—they are referred to an on-site social worker for a more in-depth assessment and referral to appropriate services. All of the states we visited have changed the way they manage TANF cases in order to better help recipients obtain and keep jobs. Unlike typical case management prior to welfare reform, which consisted primarily of determining eligibility, case management under TANF is an ongoing and multifaceted process. Staff interact with recipients, determine needs, establish goals, address characteristics that impede employment, and monitor compliance with program requirements. The states we visited took various approaches and emphases in structuring their case management process, not only in response to the changes in welfare policy and goals, but also to meet the needs of hard-to-employ TANF recipients. Some examples follow: Washington has implemented a statewide case management process called case staffing. Case staffing consists of holding periodic meetings that involve every member of the staff who has any interaction with a recipient, including representatives from contractors and other agencies that provide services to TANF recipients. Staff members consider each recipient's history, current activities, and employment plan, and whether past activities have yielded desired results. 
The group then makes recommendations as to how the case manager should proceed with the recipient, such as referrals to other programs or alternative job-preparation activities. In addition to its general case management services, Connecticut uses intensive case management as a primary strategy for moving hard-to-employ recipients into the workforce. The state also provides intensive case management to recipients who are at risk of having a sanction imposed for not complying with program rules, or who have reached their lifetime limit on the TANF program. Intensive case management services can include employee assistance programs for employers of TANF recipients. In Grand Rapids, Michigan, the local TANF agency has stationed two case managers at a large company that employs TANF recipients to help hard-to-employ recipients retain their jobs. These on-site case managers serve as a resource for both employees and the employer, helping employees cope with crises that might otherwise cause them to lose their jobs, and intervening on behalf of the employer at the first sign of trouble. The company's retention rate for current and former TANF recipients was 81 percent, as compared to only 33 percent for their non-TANF employees. Company officials directly attributed the higher retention rates to on-site case management and cooperation from the local TANF agency. The states we visited have faced challenges in altering their case management to meet the needs of hard-to-employ recipients. Officials and advocates for welfare recipients in Maryland told us that case managers, who for years served primarily as eligibility specialists, have had difficulty changing their focus to employment preparation and planning. This is especially true for those who continue to have responsibility for determining eligibility for food stamps and other means-tested programs. 
The difficulties case managers have encountered in learning new responsibilities have been exacerbated in areas where caseload reductions have been accompanied by disproportionate reductions in staff. Recipient-to-case-manager ratios vary by state and locality. For example, in one locality in Florida, case managers had responsibility for as many as 400 cases, while case managers in one Maryland county had responsibility for an average of 15 cases. State officials reported that in areas where case managers have large caseloads, they do not have the time to provide hard-to-employ recipients with the extensive monitoring and referrals to additional programs and services they need. In addition to assessment and case management, several of the states and localities we visited have developed programs targeted specifically to prepare hard-to-employ recipients to enter the workforce. States that have developed targeted programs have emphasized short-term interventions with a strong employment focus that often involve several different activities to address specific problems a recipient may face. Some of these targeted programs are designed to provide hard-to-employ recipients with the skills they need to cope with crises and succeed at the workplace on a day-to-day basis. Other targeted programs offer recipients hands-on work experience in a structured, highly supervised, supportive environment. Some of the most innovative programs developed by TANF agencies in the states we visited were designed to help hard-to-employ recipients learn to cope with the multiple characteristics that make employment difficult. State officials reported that in many cases, the difficulties hard-to-employ recipients have in finding and keeping jobs stem from their inability to deal with these characteristics on an ongoing basis, rather than from simply having the characteristics themselves. 
Officials in every state we visited told us that a recipient with low educational attainment can often obtain an unskilled job, particularly in a strong economy, but if she lacks the skills required to deal with situations such as a chronic health problem, a child with behavior problems, or a breakdown in child care or transportation arrangements, she is likely to miss work and lose the job. Similarly, one official stated that a recipient may have the skills and abilities needed to function in a particular job, but if she does not know how to dress appropriately, conduct herself in an interview, or fill out a job application properly, she will not be able to get that job. In the six states we visited we encountered several examples of programs designed to provide recipients with the life skills needed to be successful in the workforce, often referred to as soft-skills training: In Baltimore County, Maryland, a private firm called Workforce Solutions is under contract to run a 12-week program for TANF recipients designed to address specific characteristics that may impede employment—such as low basic reading and math skills—while also providing participants with ways to cope with personality issues such as antisocial behavior and inappropriate responses to authority. In Broward County, Florida, a soft-skills program for long-term TANF recipients uses role-playing, simulated work activities, team-building exercises, and other techniques to help recipients change their attitude and feelings about work. The program focuses on helping recipients develop positive attitudes, coping skills, critical thinking, workplace ethics, and confidence. In Butte County, California, the TANF agency has coordinated with a nonprofit organization that manages a boutique that provides makeovers and professional clothing for interviewing and working. Recipients are also provided with necessities such as hosiery, undergarments, and shoes at no cost. 
A few of the states and localities we visited relied heavily on work experience activities to prepare hard-to-employ recipients to join the workforce. Intensive, highly supervised, hands-on work experience allows recipients to experience a workplace environment firsthand. However, most localities we visited rarely, if ever, referred recipients to these types of programs, in part because they felt that such programs were unnecessary in a strong economy with ample job opportunities. While a few of the work experience programs in the states we visited were very specialized and targeted to certain subgroups of hard-to-employ recipients such as immigrants with limited English skills, most served a broader range of hard-to-employ recipients. TANF worksite activities differ in who is required to participate and in how much the activities mirror unsubsidized employment. Some examples of work experience programs follow: Community Rehabilitation Industries offers a targeted work experience program to refugees in Los Angeles, California. The program is designed to provide hands-on experience at a manufacturing business and a range of supplemental services, including English as a Second Language (ESL) classes, sobriety support, parenting skills instruction, and basic skills remediation. These programs are provided at the worksite and participants are paid to attend. Washington has a statewide internship and training program, called Community Jobs (CJ), that provides hard-to-employ recipients who have various work-impeding characteristics with 20 hours of paid work experience per week for up to 9 months. The program provides intensive case management to help participants cope with the demands of their work experience positions. The program also requires participants to enroll in a complementary work activity, such as basic education or substance abuse or mental health treatment, for an additional 20 hours per week. 
Program officials report that the number of CJ slots is insufficient to meet demand, primarily because the program does not have the funding to support more slots. All of the states we visited relied on non-TANF agencies and organizations to provide certain services, including those that address the characteristics that, according to state officials, impeded employment for recipients. Officials in every state we visited cited domestic violence, substance abuse, mental health problems, limited education, poor health, and having a disability as factors that affected a recipient's ability to get and keep a job. Most states relied primarily on non-TANF government agencies and other community organizations with expertise and experience in providing services to address these problems rather than developing new programs specifically for TANF recipients. States used TANF as well as a variety of other funding sources to pay for these services, including funding from other federal, state, and private programs. Some examples include the following: Domestic violence: The policy in every state we visited is to refer victims of domestic violence to emergency housing and other services, including counseling and legal assistance, although in some locations officials reported that the availability of such services was limited. In most cases, these services are provided by nonprofit organizations, public health agencies, and privately funded legal assistance programs. Low educational attainment/limited English proficiency: All of the states we visited had policies to refer recipients to adult basic education, ESL courses, and high school equivalency programs offered by local school districts, community adult education programs, colleges and universities, or nonprofit organizations. 
Poor health/disability: Most of the states we visited had policies to refer recipients with physical or mental impairments that might make them eligible for the Supplemental Security Income (SSI) program to assistance in applying for SSI benefits. In Maryland, the TANF agency contracts with another agency to provide services through its Disability Entitlement Advocacy Program to help disabled recipients of cash assistance apply for, qualify for, and receive SSI. In addition, all states referred disabled recipients to vocational rehabilitation services.

In the states we visited, decisions about collecting and analyzing data on caseload characteristics, setting time limits shorter than 60 months, and limiting work and work-preparation activities to those that count toward federal work participation requirements have increased the challenge of moving hard-to-employ TANF recipients into the workforce. Few states compile and analyze data on the characteristics that make recipients hard to employ, making it difficult for them to estimate the number of recipients likely to reach their 60-month limit on federal benefits and to develop strategies to help those recipients become employed before their federally funded benefits run out. In addition, as a group, hard-to-employ recipients may need more time to be integrated into the workforce; time limits shorter than 60 months may decrease the likelihood that they will acquire the skills they need for work before their assistance ends. Finally, some states tend to place recipients only in work and work-preparation activities that count when calculating federal work participation rates. This may prevent hard-to-employ recipients from participating in other activities that may better prepare them for employment.
Because states have not compiled statewide data on the characteristics of their TANF recipients and do not have the systems to analyze the data that localities collect, the states face challenges in developing programs tailored to the needs of their hard-to-employ recipients and in ensuring that all hard-to-employ recipients have access to such programs. Although ACF has annually sponsored conferences on evaluating welfare reform since 1998, at the time of our site visits, none of the six states we visited had sufficient data to allow them to identify the share of their caseload that is hard to employ. Of the nine states we asked for data on the number of adult TANF recipients with learning disabilities, substance abuse issues, exposure to domestic violence, other mental or psychological conditions, limited English proficiency, physical impairment, poor physical health, developmental disabilities, and criminal history—characteristics that impede employment—only two provided statewide data on some of these characteristics. Most states reported that these data were not available. States could not provide these data in part because they are not required to collect them and because identifying such characteristics is inherently difficult. In addition, widely accepted screening and assessment tools that can be used to determine the presence of these characteristics are lacking. State officials also indicated that they lack the computer systems necessary to aggregate and analyze such data for managing their caseloads as a whole. PRWORA requires states to collect and report specific caseload information, such as marital status, citizenship, and ages of family members, to HHS on a quarterly basis, but states also have the flexibility to collect any additional data they deem necessary to manage their TANF programs.
Only two of the states we visited—Connecticut and Washington—had begun to use this flexibility to systematically collect statewide data that would enable them to identify recipients with characteristics that make them hard to employ. As we noted earlier, Washington and Connecticut each developed a standardized screening instrument to identify applicants with characteristics—such as low literacy, limited English proficiency, and learning disabilities, as well as mental health, substance abuse, domestic violence, and physical health problems—that may interfere with employment. However, they have been collecting these data only since July and August 2000, respectively. Other states do some screening of applicants, but screening tools may vary by locality within the state, as in California, Florida, and Maryland, or may consist of just a few basic questions on the TANF application, as in Michigan. Even where states identified hard-to-employ recipients, they did not necessarily have this information in databases that allowed states and localities to use it to manage their caseloads. Some of the reasons states gave for not identifying applicants with characteristics that could interfere with employment included the lack of widely accepted screening and assessment tools, difficulties in eliciting information from recipients, and a lack of skills among case managers in conducting screenings and assessments. The few states and localities that had questionnaires available to screen all new applicants, including Washington, Connecticut, and counties in Maryland and California, had developed the screening tools themselves, in part because they were unable to identify existing tools. Even among those states and localities that used screening and assessment tools, however, case managers faced myriad obstacles in identifying recipients who have characteristics that impede employment.
For example, state and local officials reported that recipients are sometimes reluctant to reveal certain characteristics, such as substance abuse or domestic violence, because they fear losing custody of their children or triggering reprisals from the abuser. Cultural issues and privacy concerns can also prevent recipients from sharing information with case managers. Also, some recipients may be unaware that they have specific problems or may be in denial regarding the problems. Furthermore, some case managers may hesitate to ask about recipients' physical limitations for fear of violating the Americans with Disabilities Act. Case managers may also have concerns about privacy and confidentiality and about the impact of labeling recipients as hard-to-employ before they have had a chance to prove themselves as job-ready. Such difficulties are especially challenging for case managers not trained in screening and assessing TANF recipients. Few of the states we visited provided training to case managers in how to use screening tools or identify recipients who have characteristics that could impede employment. HHS and other federal agencies have initiatives under way to develop new screening and assessment tools or evaluate available ones. ACF at HHS contracted with the Urban Institute to conduct a study of existing screening and assessment tools. The final report for this study is scheduled to be completed in 2001, but a preliminary report confirms that there are few existing screening tools for TANF programs to draw upon. HHS is also collaborating with the Departments of Labor and Education to develop a publication on screening tools to identify persons with disabilities. HHS also funded publication of guides, published in July 2000, on screening TANF recipients for mental illness and substance abuse.
However, HHS has done little so far to encourage state efforts to compile data that identify hard-to-employ recipients or that can be used to estimate the number of TANF recipients who will reach their 60-month limit before joining the workforce. Officials in the states we visited reported that states and localities lack not only data but also computer systems sophisticated enough to process statewide data on caseload characteristics. These findings are consistent with our recent study on TANF information systems, which reported that 10 of 15 localities surveyed indicated that their automated systems provide about half or less of the information needed for case management. Eight of 15 states and only 6 of 15 localities reported that their systems provided all or most of the data needed for service planning. The lack of comprehensive caseload data and adequate computer systems has made it difficult for states to make plans for recipients likely to reach their time limits. Not all of the states we visited have conducted analyses to determine how many recipients would likely reach their time limits, the characteristics of those most likely to do so, or the impact of various strategies for dealing with those who have reached their time limits. HHS is doing little to encourage states to collect and analyze data on the characteristics of TANF recipients likely to reach their time limits. PRWORA provides states with several strategies for dealing with recipients who reach their time limits. States are able to exempt recipients from the 60-month limit on federal assistance in cases that involve hardship or domestic violence. It is up to each state to define hardship, and states can extend this exemption to no more than 20 percent of their TANF caseload. All the states we visited plan to exercise this option, but none had decided what criteria (other than domestic violence) they would use to determine who would qualify for this exemption.
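The interaction between a shrinking caseload and the fixed 20-percent exemption cap can be sketched in a few lines. All figures below are hypothetical illustrations, not actual state caseload data, and the function name is ours.

```python
# Sketch of why a 20-percent hardship exemption cap tightens as a
# caseload falls. All figures are hypothetical, not state data.

EXEMPTION_CAP = 0.20  # PRWORA: at most 20 percent of the caseload


def exemption_shortfall(caseload, reaching_limit):
    """Number of hardship cases that exceed the state's exemption capacity."""
    capacity = int(caseload * EXEMPTION_CAP)
    return max(0, reaching_limit - capacity)


# As the caseload declines, exemption capacity falls even as the number
# of families reaching the 60-month limit grows.
for caseload, reaching in [(50_000, 4_000), (30_000, 6_500), (20_000, 7_000)]:
    print(caseload, exemption_shortfall(caseload, reaching))
```

Under these illustrative figures, a state whose caseload falls from 50,000 to 20,000 goes from having spare exemption capacity to a shortfall of 3,000 families, even though its hardship policy never changed.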
However, based on their analysis of statewide data, both Washington and Maryland have concluded that, at some point, this exemption will not allow them to extend benefits to all qualified recipients who reach their 60-month limit. This is because, as the caseload continues to fall, the number the state can provide with extended benefits (20 percent of the caseload) will also fall. Meanwhile, each month more recipients will reach their time limits. Analyses conducted in Washington and Maryland indicate that, at some point, even families facing severe hardship will have their federal benefits terminated because they cannot be served under the current 20-percent hardship exemption policy. Officials in all of the states we visited told us that they had not yet estimated the cost of other strategies, such as continuing benefits to all recipients who reach their time limits using state funds, or providing federal assistance to such families through subsidized employment programs. Inadequate management information systems have also made it difficult for states to develop mechanisms for holding local public and private service providers accountable for delivering needed services to hard-to-employ recipients. Advocates for welfare recipients in five of the states we visited noted that hard-to-employ recipients with similar characteristics could receive vastly different services depending on their case manager, local TANF office, or program provider. Advocates in Maryland and Florida reported that recipients in some localities have access to a wide range of programs and services tailored to meet their needs, whereas such specialized programs are not provided in other localities. For example, Baltimore County, Maryland, contracts with a number of service providers to deliver specialized work preparation programs, while recipients in the city of Baltimore have little access to such programs.
In Maryland, as in other states, few mechanisms are in place to ensure that recipients throughout the state are treated uniformly or to ensure that recipients receive the assistance they need to get jobs. Officials in all six of the states we visited said that they expect at least some of their hard-to-employ recipients to reach their time limits before they are able to develop the skills to enter sustained employment, particularly those who have multiple characteristics that make employment difficult. States that have adopted time limits shorter than 60 months have already seen some recipients reach their time limits without being prepared for long-term employment. Connecticut, the state with the shortest time limit of the six states we visited—21 months—has found that hard-to-employ recipients cannot address all of the characteristics that impede employment by the time their benefits are terminated. The state has developed an alternative safety net program to provide services and support to recipients who are not employed when they reach the time limit. In addition, a large share of those who are still receiving TANF benefits at 21 months have their benefits extended. In April 2000, 39 percent of Connecticut's caseload had had their time limits extended. At least one state, Florida, has adjusted its time limits to account for the challenges faced by hard-to-employ recipients. In that state, most recipients face a time limit of 24 months (out of any given 60-month period), but recipients who lack a high school diploma or have significant skill deficits have a time limit of 36 months (out of any 72-month period). Florida has a lifetime limit on benefits of 48 months, but will allow an additional 12 months of benefits in cases of hardship.
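Florida's tiered limits, as summarized above, amount to a small set of rules that can be made concrete in code. The sketch below is an illustrative encoding of the rules as described here, not Florida's actual eligibility system; the function and parameter names are ours.

```python
# Illustrative encoding of Florida's time-limit rules as summarized above:
# most recipients may receive 24 months of benefits in any 60-month period;
# recipients with significant skill deficits may receive 36 in any 72-month
# period; the lifetime cap is 48 months, with 12 more available in hardship
# cases. A sketch only -- not Florida's actual eligibility system.

def may_receive_benefits(months_in_window, lifetime_months,
                         skill_deficit=False, hardship=False):
    """Return True if another month of benefits is allowed under the rules."""
    periodic_cap = 36 if skill_deficit else 24
    lifetime_cap = 48 + (12 if hardship else 0)
    return months_in_window < periodic_cap and lifetime_months < lifetime_cap


print(may_receive_benefits(24, 30))                      # periodic cap reached
print(may_receive_benefits(24, 30, skill_deficit=True))  # higher cap applies
print(may_receive_benefits(10, 48, hardship=True))       # hardship extension
```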
Although the types of work activities that count when calculating states' work participation rates are specified in PRWORA, states have the flexibility to define allowable activities for the purpose of determining continued eligibility for TANF assistance. Consequently, the activities that count for federal participation rate purposes and those that allow TANF recipients to maintain their eligibility for assistance need not be the same. While some states we visited allow TANF recipients to maintain their eligibility for assistance by participating in activities that do not count toward the participation rate, others allow only the work activities that count. Michigan and Florida, for example, limit the work activities that enable TANF recipients to continue to receive assistance strictly to those that count toward meeting work participation rate requirements. In these states, only TANF recipients who were exempted from participating in a work activity were referred to programs that are not considered work activities in calculating the federal participation rate. In contrast, other states we visited allow hard-to-employ recipients to maintain their eligibility for benefits by participating in programs and services that do not count for federal work participation rate purposes, such as soft-skills training and substance abuse treatment. However, officials in these states expressed concern that they might have difficulty continuing to meet work participation rate targets if their caseloads come to consist increasingly of hard-to-employ recipients and as caseload reduction slows. In California and Maryland, for example, some counties offered soft-skills training to help TANF recipients obtain the basic time management, budgeting, and social skills necessary to maintain steady employment, activities that do not count for federal participation rate purposes.
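The tradeoff these officials describe follows from how the federal rate is computed: a family kept eligible through a non-countable activity still appears in the rate's denominator but not in its numerator. The sketch below uses hypothetical counts and a deliberately simplified formula, omitting real-world adjustments such as caseload reduction credits.

```python
# Simplified sketch of the work participation rate tradeoff. A family in a
# non-countable activity (e.g., soft-skills training or substance abuse
# treatment) keeps its benefits but is excluded from the rate's numerator.
# Counts are hypothetical; the formula omits adjustments such as caseload
# reduction credits.

def participation_rate(countable, noncountable, other):
    """Share of required families engaged in federally countable activities."""
    total = countable + noncountable + other
    return countable / total if total else 0.0


# 500 of 1,000 required families in countable activities:
print(participation_rate(500, 0, 500))
# Shifting 200 of them into needed but non-countable services lowers the rate:
print(participation_rate(300, 200, 500))
```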
Connecticut considers substance abuse treatment, domestic violence counseling, and some adult education programs as work activities. However, Connecticut officials stated that if the proportion of their caseload needing these services increases, they may have more difficulty meeting federally mandated participation rates, since these activities are not counted when calculating the rates. HHS has several mechanisms to educate states about the broad latitude granted them in PRWORA. However, HHS has provided little guidance to states on the broad discretion they have to allow hard-to-employ TANF recipients to engage in work-preparation activities that, although not counted for federal participation rate purposes, may be best suited to recipients’ needs. Welfare reform led to major changes in state welfare policy and programs. Only now have states had enough experience with their TANF programs to begin to understand how well these programs are meeting the needs of TANF recipients, particularly those with characteristics that suggest they might be hard to employ. States have found that, while some recipients with these characteristics are able to successfully enter the workforce, many need considerable time and support in order to become work-ready, including services and work-preparation activities that address their specific needs. As a result, some states have begun to implement, or are considering adopting, strategies specifically designed to help hard-to-employ recipients join the workforce. To be successful in moving hard-to-employ TANF recipients into the workforce within their 60-month time limit for federal benefits, states must develop programs and provide work and work-preparation activities tailored to the needs of their hard-to-employ recipients, and they must ensure that recipients with characteristics that impede employment have access to programs and activities that meet their needs.
Some states believe they would be better able to accomplish this if they (1) had caseload data on the number and characteristics of hard-to-employ TANF recipients, particularly those who will reach their 60-month limit before they are able to work; and (2) used a range of work and work-preparation activities that meet the needs of hard-to-employ recipients, including activities that extend beyond those that meet federal work participation requirements. None of the states we visited, however, have systematically compiled this type of statewide caseload data, and some states are reluctant to provide TANF recipients with many of the types of work-preparation activities that do not count when calculating work participation rates. In addition, estimates of the number and characteristics of TANF recipients likely to reach their 60-month time limit before they can become employed would allow states to determine which recipients could qualify for a hardship exemption, what services and supports will be needed by those who do not, and whether states will provide these services. However, not all of the states we visited have collected or analyzed data on the time it takes recipients to become job-ready in order to estimate the number of TANF recipients likely to exceed their 60-month time limit on benefits. HHS is supporting initiatives that will help states identify hard-to-employ recipients, but so far it has done little to help states systematically analyze these data so that they can be used to estimate the number of TANF recipients who will reach their 60-month limit before becoming employed. In addition, although HHS has several efforts under way to help states use the flexibility allowed under PRWORA, these efforts have not sufficiently resolved the confusion some states have expressed about how to use this flexibility to best serve the needs of their hard-to-employ recipients.
Our work revealed some instances in which officials were unclear about how much discretion they have under PRWORA to provide work-preparation activities that do not count toward federal participation rates, even if these services were needed by hard-to-employ recipients. To help ensure that the states provide hard-to-employ TANF recipients with the services and support they need in order to become employed, and are able to manage TANF caseloads with substantial numbers of hard-to-employ recipients, we recommend that the Secretary of Health and Human Services and Assistant Secretary of ACF take the following actions:

promote research and provide guidance that would encourage and enable states to estimate the number and characteristics of hard-to-employ TANF recipients, and identify recipients who will reach their 60-month limit on benefits before they are able to work, and

expand the scope of guidance to states to help them use the flexibility PRWORA affords to provide appropriate work-preparation activities to hard-to-employ TANF recipients within the current TANF rules governing work participation rates and federally countable work activities.

We provided a draft of this report to HHS for its review. A copy of HHS' response is in appendix III. We also incorporated technical comments we received from HHS, where appropriate. HHS took issue with both of our recommendations. Concerning our first recommendation—that HHS promote research and provide guidance that would encourage and enable states to estimate the number and characteristics of hard-to-employ TANF recipients, and identify recipients who will reach their 60-month limit on benefits before they are able to work—HHS said that this approach overemphasizes the importance of identifying hard-to-employ recipients through measurable characteristics.
HHS stated that “research suggests that while measurable characteristics are helpful in making such predictions, they are imperfect predictors, since many people with presumed barriers to employment nevertheless work.” Yet HHS appears to support the use of measurable characteristics through its ongoing activities to improve tools and methods to identify these characteristics. Moreover, researchers, state officials, and national experts report that these data are key to ensuring that appropriate services are provided to hard-to-employ recipients. We therefore continue to believe that HHS should promote research and provide guidance that would encourage and enable states to estimate the number and characteristics of hard-to-employ TANF recipients, and identify recipients who will reach their 60-month limit on benefits before they are able to work. With regard to our second recommendation—that HHS expand the scope of guidance to states to help them use the flexibility PRWORA affords to provide appropriate work-preparation activities to hard-to-employ TANF recipients within the current TANF rules governing work participation rates and federally countable work activities—HHS stated that it is already providing such guidance to the states and listed a number of initiatives in addition to those mentioned in this report. We agree with HHS that several of these initiatives appear to have the potential to better inform states of their flexibility under PRWORA. Notwithstanding these efforts, however, during our site visits we discovered that some states and localities did not understand the full range of flexibility they have under the law, which indicates to us that this information is not being thoroughly disseminated to states.
As a result, we continue to recommend that HHS expand the scope of its guidance to states to include helping them use their flexibility to provide appropriate work-preparation activities to hard-to-employ recipients within the current TANF rules. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Honorable Tommy G. Thompson, Secretary of Health and Human Services; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. If you have any questions concerning this report, please contact me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix IV. We designed our study to provide information on (1) the participation of Temporary Assistance for Needy Families (TANF) recipients in work and work activities, the characteristics of TANF recipients, and how those characteristics have changed over time; (2) the strategies states are using to help hard-to-employ TANF recipients get and keep jobs; and (3) the challenges states face in planning and implementing programs for hard-to-employ recipients. In doing our work, we analyzed national-level survey and administrative data, collected supplemental data on caseload characteristics from nine states, and conducted site visits at the state and local levels in six of these states. We also reviewed the results of several existing studies conducted in individual states and localities and conducted interviews with numerous experts on welfare reform and with advocates of welfare recipients. We provided a draft of this report to officials in the Department of Health and Human Services (HHS) and the six states we visited for their review. We conducted our work from November 1999 to January 2001 in accordance with generally accepted government auditing standards.
To obtain information on the characteristics and composition of the adults in TANF families, we analyzed data from the 1996, 1998, and 2000 Current Population Survey Annual Demographic Surveys (March CPS Supplement). In consultation with Census officials and welfare reform experts, we determined that this was the only source of national data on TANF recipients currently available that included comparable information from the time of welfare reform to the present. Other potential data sources we pursued included the Panel Study of Income Dynamics (PSID), the Survey of Income and Program Participation (SIPP), the National Longitudinal Survey of Youth (NLSY), the National Survey of America’s Families (NSAF), and the Survey of Program Dynamics (SPD). Our statistical analyses of the CPS data included frequency tables, cross-tabulations, and chi-square tests of significance. All of the relationships we report from the CPS data were significant at the p < .05 level. We also obtained state administrative data on TANF recipients’ participation in federally allowable work activities from ACF. To supplement the national data on recipient characteristics, we requested data on TANF caseload characteristics since 1995 from nine states—California, Connecticut, Florida, Maryland, Michigan, New York, Oregon, Washington, and Wisconsin—that were home to nearly half of the nation's TANF families in 2000. Because no state was able to provide all of the information we requested, we could conduct only limited analyses with these data. To obtain further information on the characteristics of TANF recipients and on state strategies for helping hard-to-employ recipients, we reviewed numerous studies that contained this information. To identify the relevant studies, we searched several on-line bibliographic databases.
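The chi-square testing we describe for the CPS cross-tabulations can be illustrated with a small example. The 2x2 table below uses invented counts, not CPS figures, and the helper function is ours.

```python
# Illustrative Pearson chi-square test of independence on a 2x2
# cross-tabulation, the kind of significance test applied to the CPS data.
# All counts are invented for illustration.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat


# Rows: characteristic present / absent; columns: 1996 / 2000 (invented).
table = [[320, 410],
         [680, 390]]
stat = chi_square_2x2(table)
# With 1 degree of freedom, the critical value at p = .05 is 3.841.
print(round(stat, 2), stat > 3.841)
```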
We requested information from individuals on Internet mailing lists administered by the Institute for Women’s Policy Research and the Association for Public Policy Analysis and Management. We also consulted with experts on welfare issues, including officials at the Department of Health and Human Services and members of GAO’s Welfare Reform Advisory Committee, to identify other studies we should consider. Data from surveys of TANF recipients were incorporated into our report only if the survey had obtained data on at least 70 percent of the sample of families for which it sought such data, or if a nonresponse analysis of the data showed that there were no important differences between families represented in the data and those missing. Appendix II contains a list of these studies. Except for this assessment, we did not independently verify the data included in the studies. To obtain information about each assignment objective, we interviewed officials in state welfare departments in six of the nine states from which we collected data, as well as officials in at least two local sites in each of these states. Our site visit states were California, Connecticut, Florida, Maryland, Michigan, and Washington. In selecting the six states for our in-depth fieldwork, we sought to include states that represented a variety of approaches to moving hard-to-employ recipients into the workforce and that varied in terms of region, population size, degree of caseload decline, and time of initial welfare reform, and in whether the TANF program was administered at the state or county level. The local sites were chosen by the state-level officials. The TANF families in our six site visit states made up 33 percent of the national TANF caseload in June 2000.
During our site visits, we spoke with state and local TANF program administrators, data officers, program analysts, case managers and supervisors, child support officers, Welfare-to-Work liaisons, and private contractors; and with representatives of partnering agencies such as public health departments, departments of labor, departments of vocational rehabilitation, community colleges, and others. The state and local interviews were administered using a semistructured interview guide that we developed through a review of relevant literature and discussions with recognized experts on welfare reform. Some limitations exist in any methodology that gathers information about programs undergoing rapid change, such as welfare reform. Results presented in our report represent only the conditions present in the states and localities we visited at the time of our site visits, between March and July 2000. We cannot comment on any changes that may have occurred after our fieldwork. Furthermore, our fieldwork focused on in-depth analysis of a few selected states and localities. We cannot generalize our findings beyond the six states we visited and their localities. In developing the scope and methodology we would use to address our assignment objectives, we consulted with welfare reform experts from HHS, the Department of Labor, the Bureau of the Census, the Urban Institute, Mathematica, the National Governors’ Association, the American Public Human Services Association, and GAO’s Welfare Reform Advisory Committee. Following our site visits, we conducted phone interviews with advocates for welfare recipients nationally and in each of the six states in an effort to ensure that our understanding of and reporting on the states’ caseloads and strategies were accurate and objective. Born, Catherine, and others. Life On Welfare: Who Gets Assistance 18 Months Into Reform? Baltimore, Md.: University of Maryland School of Social Work, Nov. 1998. 
Building Bridges: States Respond to Substance Abuse and Welfare Reform. Published by the National Center on Addiction and Substance Abuse at Columbia University (CASA) and the American Public Human Services Association (APHSA), Aug. 1999. Cancian, Maria, and others. Before and After TANF: The Economic Well-Being of Women Leaving Welfare. Madison, Wisc.: Institute for Research on Poverty, University of Wisconsin-Madison, Dec. 1999. Danziger, Sandra, and others. Barriers to the Employment of Welfare Recipients. Ann Arbor, Mich.: University of Michigan, Poverty Research Center and Training Center, Apr. 1999. Department of Mental Health Law and Policy, University of South Florida. Leaving the Welfare Rolls: The Health and Mental Health Issues of Current and Former Welfare Recipients. Tampa, Fla.: Florida Agency for Healthcare Administration, undated. Domestic Violence: Prevalence and Implications for Employment Among Welfare Recipients. GAO/HEHS-99-12. Washington, D.C.: U.S. General Accounting Office, Nov. 1998. Fogarty, Debra, and Shon Kraley. A Study of Washington State TANF Leavers and TANF Recipients: Findings From Administrative Data and the Telephone Survey. Olympia, Wash.: Office of Planning and Research, Economic Services Administration, Department of Social and Health Services, Mar. 2000. Johnson, Amy, and Alicia Meckstroth. Ancillary Support Services to Support Welfare to Work. Mathematica Policy Research, Inc., under contract with the U.S. Department of Health and Human Services. Washington, D.C.: June 1998. http://aspe.hhs.gov/hsp/isp/ancillary/Summary.html. Kirby, Gretchen, and others. Integrating Alcohol and Drug Treatment into a Work-Oriented Welfare Program: Lessons From Oregon. Washington, D.C.: Mathematica Policy Research, Inc., June 1999. Loprest, Pamela, and Sheila Zedlewski. Current and Former Welfare Recipients: How Do They Differ? Washington, D.C.: The Urban Institute, Nov. 1999. Mathematica Policy Research, Inc.
How WFNJ Clients Are Faring Under Welfare Reform: An Early Look. Work First New Jersey Evaluation. State of New Jersey Department of Human Services, Oct. 1999. Risler, Ed, and others. The Remaining TANF Recipients: A Research-Based Profile. The Georgia Welfare Reform Research Project. Report to the Director of the Division of Family and Children Services. Atlanta, Ga.: Department of Human Resources, State of Georgia, Dec. 1999. Sweeney, Eileen. Recent Studies Indicate That Many Parents Who Are Current or Former Welfare Recipients Have Disabilities and Other Medical Conditions. Washington, D.C.: Center on Budget and Policy Priorities, Feb. 2000. Women on Welfare: A Study of the Florida Work and Gain Economic Self-Sufficiency Population. Tallahassee, Fla.: Florida Department of Children & Families, May 1999. Zedlewski, Sheila. Work-Related Activities and Limitations of Current Welfare Recipients. Washington, D.C.: The Urban Institute, July 1999. In addition to those named above, Sonya Harmeyer, Heather McCallum, and Catherine Pardee made significant contributions to this report. Jeff Appel, Jon Barker, Paula Bonin, Richard Burkard, Patrick DiBattista, Gale Harris, Art Kendall, Lise Levie, Ann McDermott, and Jim Wright also provided key technical assistance. American Public Human Services Association. TANF Client Assessments: Program Philosophies and Goals, Sequencing of Process, Uses of Information and State Changes or Modifications, Promising Practices, and Lessons Learned, Research Notes. Washington, D.C.: Sept. 2000. http://www.aphsa.org/opd/research/researchnotes0900.html. Born, Catherine, and others. Life On Welfare: Who Gets Assistance 18 Months Into Reform? Baltimore, Md.: University of Maryland School of Social Work, Nov. 1998. Brawley, Scott. TANF Client Assessment: State Uniformity, Types of Workers and Staff-Related Actions, Tools and Information Sources, and Information Sharing, Research Notes.
Washington, D.C.: American Public Human Services Association, Aug. 2000. http://www.aphsa.org/opd/research/brawley1final.htm. Building Bridges: States Respond to Substance Abuse and Welfare Reform. Published by the National Center on Addiction and Substance Abuse at Columbia University (CASA) and the American Public Human Services Association (APHSA), Aug. 1999. Cancian, Maria, and others. Before and After TANF: The Economic Well- Being of Women Leaving Welfare. Madison, Wisc.: Institute for Research on Poverty, University of Wisconsin − Madison, Dec. 1999. Danziger, Sandra, and others. Barriers to the Employment of Welfare Recipients. Ann Arbor, Mich.: Poverty Research Center and Training Center, University of Michigan, Apr. 1999. Department of Mental Health Law and Policy, University of South Florida. Leaving the Welfare Rolls: The Health and Mental Health Issues of Current and Former Welfare Recipients. Tampa, Fla.: Florida Agency for Healthcare Administration, undated. Derr, Michelle K., Heather Hill, and LaDonna Pavetti. Addressing Mental Health Problems Among TANF Recipients: A Guide for Program Administrators. Washington, D.C.: Mathematica Policy Research, Inc., for the U.S. Department of Health and Human Services, Administration for Children and Families, July 2000. Domestic Violence: Prevalence and Implications for Employment Among Welfare Recipients. GAO/HEHS-99-12. Washington, D.C.: U.S. General Accounting Office, Nov. 1998. Fogarty, Debra, and Shon Kraley. A Study of Washington State TANF Leavers and TANF Recipients: Findings From Administrative Data and the Telephone Survey. Olympia, Wash.: Office of Planning and Research, Economic Services Administration, Department of Social and Health Services, Mar. 2000. Johnson, Amy, and Alicia Meckstroth. Ancillary Support Services to Support Welfare to Work. Washington, D.C.: Mathematica Policy Research, Inc., for the U.S. Department of Health and Human Services, June 1998. 
http://aspe.hhs.gov/hsp/isp/ancillary/Summary.htm. Kirby, Gretchen, and Jacquelyn Andersen. Addressing Substance Abuse Problems Among TANF Recipients: A Guide for Program Administrators. Washington, D.C.: Mathematica Policy Research, Inc., for the U.S. Department of Health and Human Services, Administration for Children and Families, July 2000. Kirby, Gretchen, and others. Integrating Alcohol and Drug Treatment into a Work-Oriented Welfare Program: Lessons From Oregon. Washington, D.C.: Mathematica Policy Research, Inc., June 1999. Kramer, Fredrica D. “Seeing TANF from the Inside Out—Reconsidering the Program's Role in the Wake of Welfare Reform.” The Forum, Vol. 3, No. 2 (July 2000). New York, N.Y.: National Center for Children in Poverty, Columbia University. Kramer, Fredrica. “The Hard-to-Place: Understanding the Population and Strategies to Serve Them.” Issue Notes, Washington, D.C.: Welfare Information Network (March 1998). http://www.welfareinfo.org/hardto.htm. Loprest, Pamela, and Sheila Zedlewski. Current and Former Welfare Recipients: How Do They Differ? Washington, D.C.: The Urban Institute, Nov. 1999. Mathematica Policy Research, Inc. How WFNJ Clients Are Faring Under Welfare Reform: An Early Look. Work First New Jersey Evaluation. State of New Jersey Department of Human Services, Oct. 1999. Michalopoulos, Charles, Christine Schwartz, and Diana Adams-Ciardullo. What Works Best for Whom: Impacts of 20 Welfare-to-Work Programs by Subgroup. 
Washington, D.C.: Manpower Demonstration Research Corporation, Aug. 2000. Risler, Ed, and others. The Remaining TANF Recipients: A Research Based Profile. The Georgia Welfare Reform Research Project. Report to the Director of the Division of Family and Children Services. Atlanta, Ga.: Department of Human Resources, State of Georgia, Dec. 1999. Storen, Duke, and K.A. Dixon. State TANF Policy and Services to People With Disabilities. Prepared by the John J. Heldrich Center for Workforce Development at Rutgers, the State University of New Jersey, funded by the U. S. Department of Education, National Institute on Disability and Rehabilitation Research, Nov. 1999. http://www.comop.org/rrtc/rrtc/TANF.htm Sweeney, Eileen. Recent Studies Indicate That Many Parents Who Are Current or Former Welfare Recipients Have Disabilities and Other Medical Conditions. Washington, D.C.: Center on Budget and Policy Priorities, Feb. 2000. Temporary Assistance for Needy Families (TANF) Program: Third Annual Report to Congress. Washington, D.C.: U.S. Department of Health and Human Services, Administration for Children and Families, Aug. 2000. Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: U.S. General Accounting Office, Apr. 27, 2000. Welfare Reform: Work-Site-Based Activities Can Play an Important Role in TANF Programs. GAO/HEHS-00-122. Washington, D.C.: U.S. General Accounting Office, July 28, 2000. Women on Welfare: A Study of the Florida Work and Gain Economic Self- Sufficiency Population. Tallahassee, Fla.: Florida Department of Children & Families, May 1999. Zedlewski, Sheila, and Pamela Loprest. How Well Does TANF Fit the Needs of the Most Disadvantaged Families? Washington, D.C.: The Urban Institute, Dec. 29, 2000. Zedlewski, Sheila. Work-Related Activities and Limitations of Current Welfare Recipients. Washington, D.C.: The Urban Institute, July 1999. 
Welfare Reform: Work-Site-Based Activities Can Play an Important Role in TANF Programs (GAO/HEHS-00-122, July 28, 2000). Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort (GAO/HEHS-00-48, Apr. 27, 2000). Welfare Reform: State Sanction Policies and Number of Families Affected (GAO/HEHS-00-44, Mar. 31, 2000). Welfare Reform: Implementing DOT’s Access to Jobs Program in Its First Year (GAO/RCED-00-14, Nov. 26, 1999). Welfare Reform: Assessing the Effectiveness of Various Welfare-to-Work Approaches (GAO/HEHS-99-179, Sept. 7, 1999). Welfare Reform: Information on Former Recipients’ Status (GAO/HEHS-99-48, Apr. 28, 1999). Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients (GAO/HEHS-99-22, Feb. 26, 1999). Welfare Reform: Status of Awards and Selected States’ Use of Welfare-to-Work Grants (GAO/HEHS-99-40, Feb. 5, 1999). Welfare Reform: Child Support an Uncertain Income Supplement for Families Leaving Welfare (GAO/HEHS-98-168, Aug. 3, 1998). Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 18, 1998). Welfare Reform: HHS’ Progress in Implementing Its Responsibilities (GAO/HEHS-98-44, Feb. 2, 1998). Welfare Reform: States’ Efforts to Expand Child Care Programs (GAO/HEHS-98-27, Jan. 13, 1998). Welfare Reform: Three States’ Approaches Show Promise of Increasing Work Participation (GAO/HEHS-97-80, May 30, 1997).
Although some welfare recipients who might seem hard to employ are able to successfully enter the workforce, others have needed considerable time and support to become work-ready. As a result, some states have begun to implement or are considering strategies to help hard-to-employ recipients join the workforce. To succeed in moving hard-to-employ Temporary Assistance for Needy Families (TANF) recipients into the workforce within their 60-month time limit for federal benefits, states must develop programs and provide work and work-preparation activities tailored to the needs of their hard-to-employ recipients, and they must ensure that recipients with characteristics that impede employment have access to programs and activities that meet their needs. Some states believe that they would be better able to accomplish this if they (1) had caseload data on the number and characteristics of hard-to-employ TANF recipients, particularly those who will reach their 60-month limit before they are able to work, and (2) used a range of work and work-preparation activities that meet the needs of hard-to-employ recipients, including activities that extend beyond those that meet federal work participation requirements.
TSA is responsible for ensuring air carriers’ compliance with regulatory requirements, including requirements reflected in TSA security directives. Related to watch-list matching, TSA outlines air carrier requirements in the No Fly List Procedures security directive, requiring domestic air carriers to conduct checks of passenger information against the No Fly List to identify individuals who should be precluded from boarding flights, and the Selectee List Procedures security directive, directing domestic air carriers to conduct checks of passenger information against the Selectee List to identify individuals who should receive enhanced screening (e.g., additional physical screening or a hand-search of carry-on baggage) before proceeding through the security checkpoint. Since 2002, TSA has issued numerous revisions to the No Fly and Selectee list security directives to strengthen and clarify requirements, and has issued guidance to assist air carriers in implementing their watch-list-matching processes. TSA conducts inspections of air carriers throughout the year as part of regular inspection cycles based on annual inspection plans to determine the extent to which air carriers are complying with TSA security requirements. These inspections are based on inspection guidelines known as PARIS prompts, which address a broad range of regulatory requirements (including airport perimeter security and cargo security, as well as screening of employees, baggage, and passengers). With respect to watch-list matching, inspection guidelines instruct inspectors regarding the aspects of air carrier watch-list matching that should be tested, such as whether air carriers are comparing the names of all passengers against names on the most current No Fly and Selectee lists in accordance with the procedures outlined in TSA’s security directives. 
TSA conducts watch-list-related inspections at air carriers’ corporate security offices (where policies and procedures are established on how watch-list matching is to be performed) and at airports (where policies and procedures for responding to a potential match are implemented). TSA’s principal security inspectors are responsible for conducting inspections at domestic air carriers’ corporate headquarters. These inspectors assess air carriers’ compliance with security requirements and provide direct oversight of air carriers’ implementation of and compliance with TSA-approved security programs. Field inspectors—known as transportation security inspectors—conduct watch-list-related inspections at airports. They are responsible for a multitude of TSA-related activities, including conducting inspections and investigations of airports and air carriers, monitoring compliance with applicable civil aviation security policies and regulations, resolving routine situations that may be encountered during the assessment of airport security, participating in testing of security systems in connection with compliance inspections, identifying when enforcement actions should be initiated, and providing input on the type of action and level of penalty commensurate with the nature and severity of a violation that is ultimately recommended to TSA’s Office of Chief Counsel. To further enhance commercial aviation security and as required by the Intelligence Reform and Terrorism Prevention Act of 2004, TSA is developing an advanced passenger prescreening program known as Secure Flight to assume from air carriers the function of matching passenger information against government-supplied terrorist watch lists for domestic, and ultimately international, flights. 
Through assumption of the watch-list-matching function from the air carriers, Secure Flight is intended to ensure a higher level of consistency than current air carrier watch-list matching and also help remedy possible misidentifications if a passenger’s name is similar to one found on a watch list. According to TSA plans, Secure Flight’s benefits, once the program becomes operational, will include eliminating inconsistencies in current air carrier watch-list matching; decreasing the risk of unauthorized disclosure of sensitive watch-list information; reducing the number of individuals who are misidentified as being on the No Fly or Selectee lists; and integrating the redress process so that individuals are less likely to be improperly or unfairly delayed or prohibited from boarding an aircraft. TSA expects to begin assuming the watch-list-matching function from air carriers for domestic flights in January 2009, and to assume this function from U.S. Customs and Border Protection for flights departing from and to the United States by fiscal year 2010. Since the terrorist attacks of September 11, 2001, TSA has imposed, through security directives, requirements for watch-list matching, which include identifying passengers with names similar to those on the No Fly and Selectee lists—a process TSA refers to as similar-name matching. Identifying passengers with names similar to those on the No Fly and Selectee lists is a critical component of watch-list matching because individuals may travel using abbreviated name forms or other variations of their names. Therefore, searching for only an exact match of the passenger’s name may not result in identifying all watch-listed individuals. Before undertaking revisions of the relevant security directives in 2008, TSA expected air carriers to conduct similar-name matching, but TSA’s security directives did not specify how many and what types of such name variations air carriers should compare. 
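To make the distinction concrete, similar-name matching can be sketched as expanding each watch-list name into a set of variants before comparison. The following is a minimal, hypothetical illustration only: the variant rules shown here (dropped middle names, abbreviated first names) are invented for the example and are not the specific rules required by TSA's security directives, which are not public.

```python
# Hypothetical sketch of exact- vs. similar-name matching.
# The variant rules are illustrative only, not TSA's actual requirements.

def normalize(name):
    """Lowercase and collapse whitespace so comparisons ignore case and spacing."""
    return " ".join(name.lower().split())

def name_variants(name):
    """Expand a watch-list name into simple variants: the full name,
    first + last with middle names dropped, and an abbreviated first name."""
    parts = normalize(name).split()
    variants = {" ".join(parts)}
    if len(parts) >= 2:
        first, last = parts[0], parts[-1]
        variants.add(first + " " + last)        # middle names dropped
        variants.add(first[0] + " " + last)     # first name abbreviated
        variants.add(first[0] + ". " + last)    # abbreviated with a period
    return variants

def exact_match(passenger, watch_list):
    """True only if the passenger name equals a watch-list name verbatim."""
    p = normalize(passenger)
    return any(p == normalize(w) for w in watch_list)

def similar_match(passenger, watch_list):
    """True if the passenger name equals any variant of a watch-list name."""
    p = normalize(passenger)
    return any(p in name_variants(w) for w in watch_list)

watch = ["John Quincy Public"]
exact_match("J. Public", watch)    # False: not a verbatim match
similar_match("J. Public", watch)  # True: matches an abbreviated-name variant
```

A carrier performing only the equivalent of `exact_match` would board a traveler reserved as "J. Public" without further scrutiny even though the name is a plausible variant of a listed name; closing that gap is what the baseline capability in the revised directives is meant to accomplish.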
Consequently, the 14 air carriers we interviewed reported implementing varied approaches to similar-name matching. Some carriers reported comparing more name variations than others, and not every air carrier reported conducting similar-name comparisons. Air carriers that conduct only exact-name comparisons and carriers that conduct relatively limited similar-name comparisons are less effective in identifying watch-listed individuals who travel under name variations. Also, due to inconsistent air carrier processes, a passenger could be identified as a match to a watch-list record by one carrier and not by another, which results in uneven effectiveness of watch-list matching. Moreover, there have been incidents, based on information provided by TSA’s Office of Intelligence, of air carriers failing to identify potential matches by not successfully conducting similar-name matching. Generally, TSA had been aware that air carriers were not using equivalent processes to compare passenger names with names on the No Fly and Selectee lists. However, in early 2008 the significance of such differences was crystallized during the course of our review and following TSA’s special emphasis inspection of air carriers’ watch-list-matching capability. On the basis of these inspection results, in April 2008, TSA issued a revised security directive governing the use of the No Fly List to establish a baseline capability for similar-name matching to which all air carriers must conform. Also, TSA announced that it planned to similarly revise the Selectee List security directive to require the new baseline capability. According to TSA officials, the new baseline capability is intended to improve the effectiveness of watch-list matching, particularly for those air carriers that had been using less-thorough approaches for identifying similar-name matches and those air carriers that did not conduct any similar-name comparisons. 
However, because the baseline capability requires that air carriers compare only the types of name variations specified in the security directive, TSA officials noted that the new baseline established in the No Fly List security directive is not intended to address all possible types of name variations and related security vulnerabilities. Agency officials explained that based on their analysis of the No Fly and Selectee lists and interviews with intelligence community officials, the newly established baseline covers the types of name variations air carriers are most likely to encounter. TSA officials further stated that these revised requirements were a good interim solution because, among other reasons, they will strengthen security while not requiring air carriers to invest in significant modifications to their watch-list-matching processes, given TSA’s expected implementation of Secure Flight beginning in 2009. If implemented as intended, Secure Flight is expected to better enable the use of passenger names and other identifying information to more accurately match passengers to the subjects of watch-list records. Until 2008, TSA had conducted limited testing of air carriers’ similar-name-matching capability, although the agency had undertaken various efforts to assess domestic air carriers’ compliance with watch-list matching requirements in the No Fly and Selectee list security directives. These efforts included a special emphasis assessment conducted in 2005 and regular inspections conducted in conjunction with annual inspection cycles. However, the 2005 special emphasis assessment focused on air carriers’ capability to prescreen passengers for exact-name matches with the No Fly List, but did not address the air carriers’ capability to conduct similar-name comparisons. 
Regarding inspections conducted as part of regular inspection cycles, TSA’s guidance establishes that regulatory requirements encompassing critical layers of security need intensive oversight, and that testing is the preferred method for validating compliance. However, before being revised in 2008, TSA’s inspection guidelines for watch-list-related inspections were broadly stated and did not specifically direct inspectors to test air carriers’ similar-name-matching capability. Moreover, TSA’s guidance provided no baseline criteria or standards regarding the number or types of such variations that must be assessed. Thus, although some TSA inspectors tested air carriers’ effectiveness in conducting similar-name matching, the inspectors did so at their own discretion and without specific evaluation criteria. In response to our inquiry, six of TSA’s nine principal security inspectors told us that their assessments during annual inspection cycles have not included examining air carriers’ capability to conduct certain basic types of similar-name comparisons. Also, in reviewing documentation of the results of the most recent inspection cycle (fiscal year 2007), we found that available records in TSA’s database made references to name-matching tests in only 6 of the 36 watch-list-related inspections that principal security inspectors conducted, and in only 55 of the 1,109 inspections that transportation security inspectors conducted. Without baseline criteria or standards for air carriers to follow in conducting similar-name comparisons, TSA has not had a uniform basis for assessing compliance. Further, without routinely and uniformly testing how effectively air carriers are conducting similar-name matching, TSA may not have had an accurate understanding of the quality of air carriers’ watch-list-matching processes. 
However, TSA began taking corrective actions during the course of our review and after it found deficiencies in the capability of air carriers to conduct similar-name matching during the January 2008 special emphasis inspection. More specifically, following the January 2008 inspection, TSA officials reported that TSA began working with individual air carriers to address identified deficiencies. Also, officials reported that, following the issuance of TSA’s revised No Fly List security directive in April 2008, the agency had plans to assess air carriers’ progress in meeting the baseline capability specified in the new security directive after 30 days, and that the agency’s internal guidance for inspectors would be revised to help ensure compliance by air carriers with requirements in the new security directive. Further, in September 2008, TSA updated us on the status of its efforts with watch-list matching. Specifically, TSA provided us with the results of a May 2008 special emphasis assessment of seven air carriers’ compliance with the revised No Fly List security directive. Although the details of this special emphasis assessment are classified, TSA generally characterized the results as positive. Also, the TSA noted that it plans to work with individual air carriers, as applicable, to analyze specific failures, improve system performance, and conduct follow-up testing as needed. Further, officials noted that the agency’s internal handbook, which provides guidance to transportation security inspectors on how to inspect air carriers’ compliance with requirements, including watch-list-matching requirements, was being revised and was expected to be released later this year. Officials stated that the new inspection guidance would be used in conjunction with TSA’s nationwide regulatory activities plan for fiscal year 2009. 
However, while these actions and plans are positive developments, it is too early to determine the extent to which TSA will assess air carriers’ compliance with watch-list-matching requirements based on the new security directives since these efforts are still underway and have not been completed. Over the last 4 years, we have reported that the Secure Flight program (and its predecessor known as the Computer Assisted Passenger Prescreening System II or CAPPS II) had not met key milestones or finalized its goals, objectives, and requirements, and faced significant development and implementation challenges. Acknowledging the challenges it faced with the program, in February 2006, TSA suspended the development of Secure Flight and initiated a reassessment, or rebaselining, of the program, which was completed in January 2007. In February 2008, we reported that TSA had made substantial progress in instilling more discipline and rigor into Secure Flight’s development and implementation, including preparing key systems development documentation and strengthening privacy protections. However, we reported that challenges remain that may hinder the program’s progress moving forward. Specifically, TSA had not (1) developed program cost and schedule estimates consistent with best practices, (2) fully implemented its risk management plan, (3) planned for system end-to-end testing in test plans, and (4) ensured that information-security requirements are fully implemented. If these challenges are not addressed effectively, the risk of the program not being completed on schedule and within estimated costs is increased, and the chances of it performing as intended are diminished. To address these challenges, we made several recommendations to DHS and TSA to incorporate best practices in Secure Flight’s cost and schedule estimates and to fully implement the program’s risk-management, testing, and information-security requirements. 
DHS and TSA officials generally agreed to implement the recommendations and reported that they are making progress doing so. According to TSA officials, the “initial cutover” or assumption of the watch-list matching function from one or more air carriers for domestic flights is scheduled to begin in January 2009. However, as of July 2008, TSA had not developed detailed plans or time frames for assuming watch-list matching from all air carriers for domestic flights. We will continue to evaluate TSA’s efforts to develop and implement Secure Flight and its progress in addressing our prior recommendations as part of our ongoing review. Until the Secure Flight program is implemented, TSA’s oversight of air carriers’ compliance with watch-list-matching requirements remains an important responsibility. In this regard, TSA’s April 2008 revision of the No Fly List security directive—and a similar revision planned for the Selectee List security directive—are significant developments. The April 2008 revision establishes a baseline name-matching capability applicable to all domestic air carriers. Effective implementation of the baseline capability should strengthen watch-list-matching processes, especially for those air carriers that had been using less-thorough approaches for identifying similar-name matches. Concurrently, revised internal guidance for TSA’s inspectors can help ensure that compliance inspections of air carriers are conducted using the standards specified within the security directives as evaluation criteria. At the time of our review, TSA was in the initial stage of revising the internal guidance for inspectors. As a result, it is too early to determine the extent to which updated guidance for principal security inspectors and transportation security inspectors will strengthen oversight of air carriers’ compliance with the security directive requirements. 
Going forward, TSA officials acknowledge that the baseline capability specified in the revised No Fly List security directive and the similar revision planned for the Selectee List security directive—while improvements—do not address all vulnerabilities identified by TSA and do not provide the level of risk mitigation that is expected to be achieved from Secure Flight. Thus, TSA officials recognize the importance of—and the challenges to—ensuring continued progress in developing and deploying the Secure Flight program as soon as possible. Madam Chairwoman, this concludes my statement. I would be pleased to answer any questions that you or other members have at this time. For questions regarding this testimony, please contact Cathleen A. Berrick, Director, Homeland Security and Justice Issues, at (202) 512-3404 or [email protected]. Other key contributors to this statement were Mona Blake, Danny R. Burton, Ryan Consaul, R. Eric Erdman, Michele C. Fejfar, Richard B. Hung, Thomas F. Lombardi, Sara Margraf, Victoria E. Miller, Maria Soriano, and Margaret Vo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Domestic air carriers are responsible for checking passenger names against terrorist watch-list records to identify persons who should be denied boarding (the No Fly List) or who should undergo additional security scrutiny (the Selectee List). The Transportation Security Administration (TSA) is to assume this function through its Secure Flight program. However, due to program delays, air carriers retain this role. This testimony discusses (1) TSA's requirements for domestic air carriers to conduct watch-list matching, (2) the extent to which TSA has assessed compliance with watch-list matching requirements, and (3) TSA's progress in developing Secure Flight. This statement is based on GAO's report on air carrier watch-list matching (GAO-08-992) being released today and GAO's previous and ongoing reviews of Secure Flight. In conducting this work, GAO reviewed TSA security directives and TSA inspections guidance and results, and interviewed officials from 14 of 95 domestic air carriers. TSA's requirements for domestic air carriers to conduct watch-list matching include a requirement to identify passengers whose names are either identical or similar to those on the No Fly and Selectee lists. Similar-name matching is important because individuals on the watch list may try to avoid detection by making travel reservations using name variations. According to TSA, there have been incidents of air carriers failing to identify potential matches by not successfully conducting similar-name matching. However, until revisions were initiated in April 2008, TSA's security directives did not specify what types of similar-name variations were to be considered. Thus, in interviews with 14 air carriers, GAO found inconsistent approaches to conducting similar-name matching, and not every air carrier reported conducting similar-name comparisons. In January 2008, TSA conducted an evaluation of air carriers and found deficiencies in their capability to conduct similar-name matching. 
Thus, in April 2008, TSA revised the No Fly List security directive to specify a baseline capability for conducting watch-list matching and reported that it planned to similarly revise the Selectee List security directive. While recognizing that the new baseline capability will not address all vulnerabilities, TSA emphasized that establishing the baseline capability should improve air carriers' performance of watch-list matching and is a good interim solution pending the implementation of Secure Flight. TSA has undertaken various efforts to assess domestic air carriers' compliance with watch-list matching requirements; however, until 2008, TSA had conducted limited testing of air carriers' similar-name-matching capability. In 2005, for instance, TSA evaluated the capability of air carriers to identify names that were identical--but not similar--to those in terrorist watch-list records. Also, TSA's internal guidance did not specifically direct inspectors to test air carriers' similar-name-matching capability, nor did the guidance specify the number or types of name variations to be assessed. Records in TSA's database for regular inspections conducted during 2007 made reference to name-match testing in only 61 of the 1,145 watch-list-related inspections that GAO reviewed. During the course of GAO's review, and prompted by findings of the evaluation conducted in January 2008, TSA reported that its guidance for inspectors would be revised to help ensure air carriers' compliance with security directives. Although TSA has plans to strengthen its oversight efforts, it is too early to determine the extent to which TSA will provide oversight of air carriers' compliance with the revised security directives. In February 2008, GAO reported that TSA has made progress in developing Secure Flight but that challenges remained, including the need to more effectively manage risk and develop more robust cost and schedule estimates (GAO-08-456T). 
If these challenges are not addressed effectively, the risk of the program not being completed on schedule and within estimated costs is increased, and the chances of it performing as intended are diminished. TSA plans to begin assuming watch-list matching from air carriers in January 2009.
|
As we reported in GAO-08-1036SP and GAO-13-519SP, managers' use of performance information has not changed significantly over time government-wide. These survey results are consistent with trends identified in other federal employee surveys government-wide. For example, the Office of Personnel Management (OPM) surveys federal workers with the Federal Employee Viewpoint Survey (FEVS). FEVS is a tool that measures employees' perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. OPM creates an index using a smaller subset of FEVS survey responses that are related to agencies' results-oriented performance culture. OPM also creates additional indices using different subsets of FEVS survey questions related to: (1) leadership and knowledge management; (2) talent management; and (3) job satisfaction. On the results-oriented performance culture index, 27 of the 37 agencies OPM surveyed experienced a decline between 2008 and 2013. Only seven agencies improved during this time period—OPM, the U.S. Departments of Education and Transportation, the Federal Communications Commission, the National Labor Relations Board, the Railroad Retirement Board, and the Broadcasting Board of Governors. The Office of Management and Budget and the Performance Improvement Council (PIC) work with federal agencies to improve performance across the federal government. Among the PIC's responsibilities is the charge to facilitate the exchange of useful performance improvement practices and to work among the federal agencies to resolve government-wide or crosscutting performance issues. Few federal agencies showed improvement in managers' use of performance information for decision making between 2007 and 2013, as measured by our use index. Specifically, our analysis of the average use index score at each agency found that most agencies showed no statistically significant change in use during this period. 
Only two agencies—OPM and the Department of Labor—experienced a statistically significant improvement in managers' use of performance information. During the same time period, four agencies—the Departments of Energy and Veterans Affairs (VA), the National Aeronautics and Space Administration, and the Nuclear Regulatory Commission—experienced a statistically significant decline in managers' use of performance information as measured by our index. See table 1 below for agency scores on the use of performance information index. In addition, figure 4 illustrates that SES managers used performance information, as measured by our index, more than non-SES managers both government-wide and within each agency. SES managers government-wide and at nine agencies scored statistically significantly higher than the non-SES managers at those agencies. As shown in figure 4 below, DHS and VA had the largest gaps in use of performance information between their SES and non-SES managers. In one agency—the National Science Foundation—the trend was reversed, with non-SES managers reporting more favorably than SES managers; however, this difference was not statistically significant. Using the data from our 2013 survey of federal managers, we found that specific practices identified in our previous work as enhancing or facilitating the use of performance information for decision making were significantly related to the use of performance information as measured by our use index. Figure 5 shows the questions that we tested based on each of the practices. We have highlighted those questions and responses that we found to have a statistically significant and positive relationship with the use of performance information index. The average use of performance information index for agencies increased when managers reported that their agencies engaged to a greater extent in these practices as reflected in the survey questions. 
For example, in 2013, OPM managers responded more favorably than the government-wide average on several of the survey questions related to these practices. OPM was one of the two agencies that experienced an increase in use of performance information from 2007 to 2013, as measured by our index. Leading practices state that aligning an agency's goals, objectives, and measures increases the usefulness of the performance information collected to decision makers at each level, and reinforces the connection between strategic goals and the day-to-day activities of managers and staff. In analyzing the 2013 survey results, we found that managers' responses to a related survey question were significantly related to the use of performance information, controlling for other factors. Specifically, increases in the extent to which individuals agreed that managers aligned performance measures with agency-wide goals and objectives were associated with increases on the five-point scale we used for our use index. Government-wide, an estimated 46 percent of managers at federal agencies reported that managers at their levels took steps to align program performance measures with agency-wide goals and objectives. The Social Security Administration (SSA) and OPM led the 24 agencies, with approximately 65 percent of managers reporting that they aligned program performance measures with agency-wide goals and objectives. DHS trailed the other agencies, with only 34 percent of its managers reporting similarly. Leading practices state that to facilitate the use of performance information, agencies should ensure that information meets various users' needs for completeness, accuracy, consistency, timeliness, validity, and ease of use. When analyzing the results of our 2013 survey, we found that managers' responses to the statement, "I have sufficient information on the validity of the performance data I use to make decisions," related to their use of performance information. 
Specifically, individuals who rated their agencies as providing a higher extent of sufficient information on the validity of performance data for decision making tended to rate their agencies higher on the performance use scale than individuals who rated their agencies lower, controlling for other factors. Having sufficient information on the validity of performance data for decision making had the largest potential effect of the questions included in our model and was the strongest predictor in our regression analysis. Government-wide, the percentage of managers responding favorably about having sufficient information on the validity of performance data was particularly low, at about 36 percent. The National Aeronautics and Space Administration (NASA) and OPM led the agencies, with more than 50 percent of managers from NASA and OPM responding that they have sufficient information about the validity of performance data for decision making (58 percent and 54 percent, respectively). The U.S. Department of Agriculture (USDA) and DHS trailed the other agencies, with less than 30 percent of their managers responding similarly (28 percent and 21 percent, respectively). Leading practices state that building the capacity for managers to use performance information is critical to using performance information in a meaningful fashion, and that inadequate staff expertise, among other factors, can hinder agencies from using performance information. When we analyzed the results of our 2013 survey, we found that managers who said that their agencies had provided training that would help them to use performance information to make decisions rated their agencies more positively on our use index. Compared to managers who said their agencies had not trained them on using performance information in decision making, those who said their agencies did rated them higher on the use scale, controlling for other factors. 
Government-wide, an estimated 44 percent of the managers who responded to our survey reported that their agencies have provided training that would help them to use performance information in decision making. The U.S. Agency for International Development (USAID) led the agencies in this area, with 62 percent of USAID managers responding that their agencies had provided training in the last 3 years that would help them use performance information in decision making. The U.S. Department of the Treasury (Treasury), DHS, the Nuclear Regulatory Commission (NRC), and the Environmental Protection Agency (EPA) trailed the other agencies, with less than 35 percent of their managers responding that they had received such training in the last 3 years (Treasury and DHS with 34 percent, NRC with 33 percent, and EPA with 32 percent). Other types of training did not appear to be positively related to use of performance information. Specifically, training on developing performance measures was significantly—but negatively—related to use of performance information. Training on (1) setting program performance goals; (2) assessing the quality of performance data; and (3) linking program performance to agency strategic plans was not found to relate to managers' use of performance information after controlling for other factors. Leading practices state that the demonstrated commitment of leadership and management to achieving results and using performance information can encourage others to use performance information in their decision making. When we analyzed the results of our 2013 survey, we found that managers' responses to the statement, "My agency's top leadership demonstrates a strong commitment to achieving results," were significantly and positively related to the use of performance information. 
Specifically, on average, increases in a manager's rating of the strength of their agency's top leadership's commitment to achieving results were associated with increased ratings of their agencies on the use scale, controlling for other factors. Government-wide, the percentage of federal managers responding favorably about their agencies' top leadership demonstrating a strong commitment to achieving results was an estimated 60 percent. Managers from NRC (78 percent) and SSA (74 percent) had significantly higher scores on this question than the government-wide average, while managers from DHS (44 percent) and USDA (42 percent) had lower scores than the government-wide average. Leading practices state that communicating performance information frequently and effectively throughout an agency can help managers to inform staff and other stakeholders of their commitment to achieve agency goals and to keep these goals in mind as they pursue their day-to-day activities. When analyzing the results of our 2013 survey, we found that two related questions were significantly and positively related to an agency's use of performance information: "Agency managers/supervisors at my level effectively communicate performance information routinely," and "Employees in my agency receive positive recognition for helping the agency accomplish its strategic goals." Specifically, those who reported favorably that agency managers/supervisors at their levels effectively communicated performance information routinely tended to rate their agencies somewhat higher on the use index, controlling for other factors. Similarly, those who reported favorably that employees in their agency receive positive recognition for helping the agency accomplish its strategic goals rated their agencies somewhat higher on the use scale, controlling for other factors. 
An estimated 41 percent of managers government-wide who responded to our survey reported that agency managers/supervisors at their level effectively communicated performance information routinely. About 60 percent of managers at the Small Business Administration, the Department of Labor, and OPM responded positively when asked about effectively communicating performance information routinely (62 percent, 61 percent, and 60 percent, respectively). DHS trailed the other agencies, with only 34 percent of its managers reporting similarly. Government-wide, an estimated 42 percent of the managers responded favorably when asked about employees in their respective agencies receiving positive recognition for helping the agencies accomplish their strategic goals. While the managers at NRC and the U.S. Department of Commerce scored at or above 50 percent when asked about positive recognition (58 percent and 50 percent, respectively), DHS trailed federal agencies with only 34 percent of its managers reporting similarly. Our analyses of agency-level results from our periodic surveys of federal managers in 2007 and 2013 reinforce that several leading practices and related survey questions significantly influenced agencies' use of performance information for management decision making. However, our surveys show that such use generally has not improved over time. This information can be helpful to the Office of Management and Budget (OMB) and the Performance Improvement Council as they work with federal agencies to identify and implement stronger performance management practices to help improve agency use of performance information. Moreover, the use of performance information will remain a challenge unless agencies can narrow the gap in use between Senior Executive Service (SES) and non-SES managers. We provided a draft of this report to the Director of OMB and to the 24 agencies that responded to our 2007 and 2013 federal managers surveys. 
On September 4, 2014, OMB staff provided us with oral comments and generally agreed with our report. OMB staff also stated that they would continue to work with agencies to address the use of performance information through agencies' annual strategic reviews of progress toward agencies' strategic objectives, which began in 2014. We also received comments from the U.S. Departments of Commerce (Commerce) and the Treasury (Treasury), the General Services Administration (GSA), and the National Aeronautics and Space Administration (NASA). On August 27, 2014, the liaison from NASA e-mailed us a summary of NASA officials' comments. On August 28, 2014, the liaison from GSA e-mailed us a summary of GSA officials' comments. On August 29, 2014, the liaisons from Commerce and Treasury e-mailed us summaries of their respective agency officials' comments. Commerce and GSA generally agreed with our report and provided technical comments, which we incorporated as appropriate. NASA and Treasury raised concerns about the findings and conclusions in our report, including the design of the surveys. We discuss their comments, which generally fell into the following four categories, and our evaluation of them below: NASA and Treasury raised concerns about the underlying methodology for the 2007 and 2013 federal managers surveys. They said that it did not adequately provide agency-wide perspectives that fully represented the agencies' use of performance information. Specifically, NASA and Treasury expressed concerns about the lack of demographic information about the survey respondents (e.g., survey respondents by agency component and geographic location). Treasury also expressed concern as to whether we had included senior leadership in our survey. 
To address this comment, we added some additional information to our report that discusses our survey design and administration, specifically that we did not collect demographic information beyond whether a federal manager was a member of the SES or not (non-SES). Moreover, our stratified random sample of federal managers ensured that we had a representative sample of federal managers both government-wide and within each of the 24 agencies we surveyed. It was not our objective to design the survey and draw a sample of managers that would allow us to report in a generalizable way at the geographic location or organizational level within an agency. Designing a sample to produce estimates at the geographic location and/or organizational level within an agency would result in a much larger sample than we drew from the populations of approximately 107,326 managers in our 2007 survey and approximately 148,300 managers in our 2013 survey. Nevertheless, as previously discussed, our sample was sufficient for the purposes of this report. NASA and Treasury also expressed concern that, despite all the efforts their respective agencies have undertaken to implement the GPRA Modernization Act of 2010, our draft report did not provide information on the root causes for the lack of progress in the use of performance information in their agencies. For example, NASA cited some of its agency initiatives, including the development of an automated performance management data repository to assist in the agency's decision-making process. Treasury cited its Quarterly Performance Review process as an example of the agency's commitment to using performance information in decision making. We recognize the activities that the agencies have underway to improve staff engagement on the use of performance information for decision making, and have previously reported on some of these initiatives. 
However, despite the efforts discussed above, our survey results showed that the use of performance information, as reported by managers at the agencies, did not improve within agencies between 2007 and 2013. Our report analyzed the results from specific questions in both the 2007 and 2013 surveys. We agree that our report does not provide information on the root causes for the trends we found in the use of performance information. However, the results of the regression analysis in this report point to some specific practices that can enhance the use of performance information, areas where federal agencies may want to focus further analysis and efforts. Both NASA and Treasury requested their respective agencies' 2007 and 2013 survey data sets to perform additional analyses that might provide further insights into root causes underlying the trends in the use of performance information within their agencies. Treasury also commented that the rankings we report based on the average scores on the 2013 use of performance information index might imply that agencies with a higher ranking are theoretically better at using performance information and, therefore, have superior performance management practices. Treasury also raised concerns about our use of the index to score agencies, asking whether it should view the higher-ranking agencies as examples of what agencies should do to improve the use of performance information. The differences in scores between agencies at the higher and lower ends of the use index are not large. However, we believe our methodology is useful for generally distinguishing between agencies' levels of use of performance information, and for assessing change in use of performance information over time. Nevertheless, we revised our report to focus on agencies' scores rather than on rank ordering. 
We also did additional statistical testing to determine whether or not the changes between the 2007 and 2013 use indexes were statistically different among agencies. As for the implication of the rankings for the quality of management practices in particular agencies, in 2007 we did employ a use index to identify agencies for further case study analysis. We selected an agency that had significantly improved on the use index, along with agencies that scored lower on the index, to assess whether there were any promising practices or challenges facing those agencies. NASA, Treasury, and Commerce all commented that it was difficult to tell how managers may have interpreted the term "performance information" when responding to our surveys. Treasury further commented that it was unclear what information managers were using to make management decisions if they were not using performance information. In both the 2007 and 2013 surveys, we defined the terms "performance information" and "performance measures" in the broadest sense. To clarify this point, we added the definition of performance information from the 2013 managers survey to the report. Moreover, as discussed above, additional agency analysis of the root causes underlying the use of performance information could provide some additional context about the types of information agencies are using for decision making. The following 20 agencies had no comments on the draft report: the U.S. Departments of Agriculture, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, and Veterans Affairs; the Environmental Protection Agency; the Nuclear Regulatory Commission; the Office of Personnel Management; the National Science Foundation; the Small Business Administration; the Social Security Administration; and the United States Agency for International Development. 
The written response from the Social Security Administration is reproduced in appendix II. We are sending copies of this report to the agencies that participated in our 2013 managers survey, the Director of OMB, as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In analyzing the results of our 2013 survey, we explored whether federal managers' responses to certain survey questions could help explain differences in how managers in agencies reported using performance information. To examine which factors related to agency use of performance information, as measured by the use of performance information index, we conducted regression analysis. The regression analysis allowed us to assess the unique association between our outcome variable—the performance information index—and a given predictor variable, while controlling for multiple other predictor variables. To create the use of performance information index, we identified survey questions that reflected managers' use of performance information for key management activities and decision making. The 2013 use of performance information index included most of the questions included in our 2007 index, plus additional questions from the 2013 managers survey that we determined reflected the concept of use of performance information (see figure 1 for the specific questions included in our index). In addition to the core set of items from the original index, we tested the impact of including and excluding several additional questions related to performance management use to ensure the cohesiveness and strength of our revised index. 
Our revised index is an average of the questions used for the index and runs from 1 to 5, where a 1 reflects that managers feel the agency engages "to no extent" and a 5 reflects that managers feel the agency engages "to a very great extent" in the use of performance information activities. We found the index met generally accepted standards for scale reliability. For more information on the original index we created for the 2007 federal managers survey, see GAO-08-1026T. To develop our regression model examining predictors of performance information use as measured by our index, we first identified a series of variables that were related to one of the five practices we have previously found to enhance or facilitate use of performance information. These practices include: aligning agencywide goals, objectives, and measures; improving the usefulness of performance information; developing the capacity to use performance information; demonstrating management commitment; and communicating performance information frequently and effectively. See figure 3 for the specific questions related to these five practices that we included in the regression. Although we identified other questions also related to the five elements of effective performance management, many of these questions were already accounted for in our use of performance information index, and we excluded them from consideration in our regression. Overall, our results demonstrate that some types of management practices and training are more positively correlated than others with manager perceptions of performance information use, as measured by the use index, even when controlling for other factors. Further, these results suggest that certain specific efforts to increase agency use of performance information—such as increasing the timeliness of performance information and providing information on the validity of performance measures—may have a higher return than others. 
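The index construction described above, averaging each manager's 1-to-5 responses across the index questions and checking scale reliability, can be sketched as follows. This is an illustrative sketch only, not GAO's actual procedure: the five-question data set is hypothetical, and Cronbach's alpha is used here as a common reliability statistic, although the report does not name the specific test it applied.

```python
import numpy as np

def use_index(responses):
    """Average a manager's 1-5 ratings across the index questions.

    Returns a score from 1 (engages "to no extent") to
    5 (engages "to a very great extent").
    """
    return sum(responses) / len(responses)

def cronbach_alpha(item_matrix):
    """Cronbach's alpha, a common scale-reliability statistic.

    item_matrix: one row per respondent, one column per question.
    """
    items = np.asarray(item_matrix, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-question variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical respondents rating five index questions on a 1-5 scale:
data = [[4, 5, 4, 4, 5],
        [2, 2, 3, 2, 2],
        [3, 4, 3, 4, 3]]
scores = [use_index(row) for row in data]  # one index score per manager
```

A value of alpha near or above the conventional 0.7 threshold would indicate that the questions cohere well enough to be averaged into a single scale.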
To execute our analysis, we began with a base model that treated differences in managers' views of agency performance information use as a function of the agency where they worked. We found that, despite statistically significant differences on average among managers at different agencies, a regression model based on agency alone had very poor predictive power (R-squared of .03). We next examined whether managers' responses to other items reflecting the practices of effective performance management related to their perceptions of agency use of performance information, independent of agency. We found that several items consistently predicted increases in individuals' ratings of their agencies' use of performance information, including whether managers align program performance measures with agency goals and objectives; having information on the validity of performance measures; and training on how to use performance information in decision making. We also tested this model controlling for whether a respondent was a member of the Senior Executive Service (SES), and found similar results. We also tested our model with a variable to control for agency size in five categories. We found that, relative to the largest agencies (100,000 or more employees), managers at smaller agencies tended to rate their agency's use of performance information slightly lower. The significance and magnitude of the other significant variables were similar whether we controlled for agency size or used intercepts to control for individual agencies. Our final model had an R-squared of .65, suggesting that the independent variables in the model predicted approximately 65 percent of the variance in the use index. Specific results are presented in table 2 below. Each coefficient reflects the average increase in the dependent variable, our five-point use scale, associated with a one-unit increase in the value of the independent variable. 
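The base-model comparison described above can be illustrated with a small simulation. This is synthetic data, not GAO's survey records, and the variable names and coefficients are hypothetical: when the use score depends on practice ratings rather than on agency membership, a model with agency indicators alone explains little of the variation, while adding the practice ratings raises R-squared substantially.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
agency = rng.integers(0, 3, size=n)        # three hypothetical agencies
alignment = rng.integers(1, 6, size=n)     # 1-5 Likert rating (hypothetical)
validity = rng.integers(1, 6, size=n)      # 1-5 Likert rating (hypothetical)
# Simulated use index driven by the practice ratings, not by agency:
use = 2 + 0.13 * alignment + 0.16 * validity + rng.normal(0, 0.3, size=n)

def r_squared(X, y):
    """Fit ordinary least squares (with intercept) and return R-squared."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Base model: agency indicator variables only (two dummies, one omitted).
agency_dummies = np.column_stack([(agency == a).astype(float) for a in (1, 2)])
base_r2 = r_squared(agency_dummies, use)
# Fuller model: agency dummies plus the practice ratings.
full_r2 = r_squared(np.column_stack([agency_dummies, alignment, validity]), use)
```

In this simulation `base_r2` stays near zero while `full_r2` is substantially higher, mirroring the pattern the report describes (R-squared rising from .03 for the agency-only model to .65 for the final model).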
Note that in our discussion, we highlight the maximum potential impact of each variable rather than the increase in the use score associated with each one-unit increase in an independent variable. As seen in table 2, at least one question related to each of the five practices to enhance agencies' use of performance information was significant. With respect to aligning agencywide goals, objectives, and measures, we found that each increase in the extent to which individuals felt that managers aligned performance measures with agencywide goals and objectives was associated with a .13 increase in their score on the use scale, or approximately a .52 increase on the 5-point use scale when comparing individuals in the lowest to the highest categories. In terms of improving the usefulness of performance information, we found that having information on the validity of performance data for decision making was the strongest predictor in our model. Compared to individuals who said that they did not have sufficient information on the validity of performance data for decision making, on average, individuals who said they had a very great extent of information rated their agencies approximately 0.64 points higher on the performance use scale, controlling for other factors. In contrast, the potential effect of the timeliness of information, while significant, had a smaller potential impact on managers' perceptions of their agency's use of performance information. 
On average, managers who responded "to a very great extent" on whether their agency's performance information was available in time to manage programs or projects rated their agency about .28 points higher on the performance use scale than those who responded "to no extent." In terms of developing agency capacity to use performance information, we found that one type of training was positively related to use of performance information, though other types of training were either not related or were negatively related, after controlling for other factors. Compared to managers who said their agencies had not provided training on how to use performance information in decision making, those who said their agencies did provide such training rated their agencies an average of .14 points higher on the use scale, controlling for other factors. The potential effect of this type of training was relatively small compared to the potential effect of some of the other predictors in our model. In contrast, training in developing performance measures was negatively associated with managers' perceptions of performance information use. With respect to demonstrating management commitment, managers who rated their agency's leadership highly in terms of demonstrating a strong commitment to achieving results tended to rate their agencies higher on performance information use, as measured by our use index. 
Each increase in the extent to which a manager felt their agency leadership was committed to results was associated with a .08 increase in the performance use index, or up to a .32 increase in the five-point performance use index when comparing managers who reported "no extent" of leadership commitment to those who reported "a very great extent." Two questions related to communicating performance information frequently and effectively were significantly and positively associated with managers' perceptions of an agency's use of performance information, controlling for other factors. Compared to those who rated their agencies the lowest in terms of whether managers and supervisors effectively communicated performance information routinely, those who rated their agencies most highly averaged .32 points higher on the five-point performance use index. Similarly, managers who reported that employees in their agency received "a very great extent" of positive recognition for helping the agency to accomplish strategic goals rated their agencies an average of .24 points higher on performance information use, as measured by our use index. We did not find a statistically significant relationship between the accessibility of performance information (to managers, employees, or the public) and managers' perceptions of use of performance information. To conduct our analysis, we used Stata software to generate regression estimates that incorporated variance calculations appropriate for the complex design of the survey data. To ensure that large amounts of missing data did not result from listwise deletion, we imputed values for individual questions if the individual was missing or indicated "no basis to judge" on three or fewer responses from the 23 variables initially tested in the regression, using the agency-level average to impute. Individuals missing data on more than 3 of the 23 potential variables were dropped from the analysis. 
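The imputation rule described above can be sketched as follows. This is an illustrative sketch only, not the report's actual Stata procedure; the five-question data set is hypothetical (the report's rule applies to 23 variables), and the function and variable names are invented for the example.

```python
def impute_or_drop(respondents, agency_means, max_missing=3):
    """Fill up to `max_missing` missing answers with the agency-level
    average for that question; drop respondents missing more than that
    (listwise deletion only for heavily incomplete responses).

    respondents: list of (agency, answers), where answers may contain None.
    agency_means: dict mapping agency -> list of per-question means.
    """
    kept = []
    for agency, answers in respondents:
        n_missing = sum(a is None for a in answers)
        if n_missing > max_missing:
            continue  # too many gaps: drop this respondent entirely
        filled = [agency_means[agency][i] if a is None else a
                  for i, a in enumerate(answers)]
        kept.append((agency, filled))
    return kept

# Hypothetical agency-level means and two respondents from agency "A":
means = {"A": [3.0, 4.0, 3.5, 2.5, 4.5]}
rows = [("A", [4, None, 3, 2, 5]),           # 1 missing answer: imputed
        ("A", [None, None, None, None, 1])]  # 4 missing answers: dropped
kept = impute_or_drop(rows, means)
```

Under this rule the first respondent is retained with the agency mean filled in for the unanswered question, while the second is excluded from the analysis.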
We conducted a variety of sensitivity checks to ensure that our results were robust across different specifications and assumptions. For the most part, we found generally similar patterns across models in terms of the magnitude and significance of different variables related to the elements of effective performance management. In general, our models assume that the relationship between the independent and dependent variables is linear, and that changes in the dependent variable associated with a change in the independent variable are similar across each ordinal category. Under this specification, the change in the use index associated with a shift from “to no extent” to “to a small extent” is assumed to be similar to the change associated with an increase from “to a great extent” to “a very great extent”. To determine whether the linear specification was appropriate, or consistent with the observed data, we tested versions of our models that treated independent variables with a Likert-scale response as categorical. We found our results to be robust across a variety of specifications, including those that relaxed the assumption of linearity for responses based on a five-point scale. In addition to the contact named above, Sarah Veale (Assistant Director), Margaret McKenna Adams, Tom Beall, Mallory Barg Bulman, Chad Clady, Karin Fangman, Cynthia Jackson, Janice Latimer, Donna Miller, Anna Maria Ortiz, Kathleen Padulchick, Mark Ramage, Joseph Santiago, Albert Sim, and Megan Taylor made key contributions to this report.
|
GAO has long reported that agencies are better equipped to address management and performance challenges when managers effectively use performance information for decision making. However, GAO's periodic surveys of federal managers indicate that use of performance information has not changed significantly. GAO was mandated to evaluate the implementation of the GPRA Modernization Act of 2010. GAO assessed agencies' use of performance information from responses to GAO's surveys of federal managers at 24 agencies. To address this objective, GAO created an index to measure agency use of performance information, derived from a set of questions from the most recent surveys in 2007 and 2013, and used statistical analysis to identify the practices most significantly related to the index. Agencies' reported use of performance information, as measured by GAO's use of performance information index, generally did not improve between 2007 and 2013. The index was derived from a set of survey questions in the 2007 and 2013 surveys that reflected the extent to which managers reported that their agencies used performance information for various management activities and decision making. GAO's analysis of the average index score among managers at each agency found that most agencies showed no statistically significant change in use during this period. As shown in the table below, only two agencies experienced a statistically significant improvement in the use of performance information. During the same time period, four agencies experienced a statistically significant decline in the use of performance information.
Legend: statistically significant decrease; statistically significant increase. GAO has previously found that there are five leading practices that can enhance or facilitate the use of performance information: (1) aligning agency-wide goals, objectives, and measures; (2) improving the usefulness of performance information; (3) developing agency capacity to use performance information; (4) demonstrating management commitment; and (5) communicating performance information frequently and effectively. GAO tested whether additional survey questions related to the five practices were significantly related to the use of performance information as measured by the index. GAO found that the average use of performance information index for agencies increased when managers reported their agencies engaged to a great extent in these practices as reflected in the survey questions. For example, the Office of Personnel Management (OPM) was one of the two agencies that experienced an increase in use of performance information from 2007 to 2013, as measured by the GAO index. In 2013, OPM managers responded more favorably than the government-wide average on several of the survey questions related to these practices. GAO is not making recommendations in this report. Office of Management and Budget staff generally agreed with the report. Four agencies (the Departments of Commerce and the Treasury, the General Services Administration (GSA), and the National Aeronautics and Space Administration (NASA)) provided comments, which are addressed in the report. Commerce and GSA agreed with the report. Treasury and NASA raised concerns about the findings and conclusions in this report, including the design of the surveys. GAO continues to believe its findings and conclusions are valid, as discussed in the report. Twenty other agencies did not have comments.
|
The purpose of Title III of the OAA is to help seniors maintain independence in their homes and communities by providing appropriate support services and promoting a continuum of care for the vulnerable elderly. The OAA laid the foundation for the current aging services network. This network comprises 56 state units on aging (SUA), 629 area agencies on aging (AAA), 244 tribal and Native American organizations, and 2 organizations serving Native Hawaiians, as well as nearly 20,000 local service provider organizations. These organizations are responsible for the planning, development, and coordination of a wide array of home and community-based services within each state under Title III of the OAA. This testimony focuses on three categories of services—those provided under parts B, C, and E of Title III of the OAA. Part B covers, among other things, supportive services and senior centers, including transportation, help with homemaker tasks and personal care, and adult day care. Part C covers nutrition services, including home-delivered and congregate meals. Part E authorizes the National Family Caregiver Support Program, which provides counseling, support groups, and relief from caregiver duties (respite services) for caregivers. (See table 1.) AoA at the Department of Health and Human Services provides grants to the states through the SUAs. Grant amounts are based on funding formulas weighted to reflect a state’s age 60 and over population, which is generally the group eligible for services. For example, in fiscal year 2009, the state of Florida received about $87 million in Title III dollars compared to the state of Montana, which received $6 million, because more seniors reside in Florida. SUAs then typically allocate funds to AAAs to directly provide services or to contract with local service providers. In a few states, the SUA directly allocates funds to local providers or provides services. (See fig. 1.)
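As a rough illustration of a population-weighted formula grant like the one described above, the sketch below allocates an appropriation in proportion to each state's share of the national age-60-and-over population. The function name, population figures, and the purely proportional rule are all assumptions for illustration; the actual statutory formulas are more complex.

```python
# Hypothetical sketch of a population-weighted allocation: each state's grant is
# proportional to its share of the age-60-and-over population. Figures invented.
def allocate_grants(appropriation, pop_60_plus):
    total = sum(pop_60_plus.values())
    return {state: appropriation * pop / total
            for state, pop in pop_60_plus.items()}

grants = allocate_grants(100_000_000, {"Florida": 4_000_000, "Montana": 220_000})
print(grants["Florida"], grants["Montana"])  # larger population -> larger grant
```

A purely proportional rule always exhausts the appropriation exactly, which is why the shares below sum to the full amount.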
A significant amount of program funding is also provided to state and local agencies by other sources, such as federal Medicare and Medicaid, states, private donations, and voluntary contributions from seniors for services they receive. According to a 2009 study published by the National Association of Area Agencies on Aging and Scripps Gerontology Center of Miami University, 99 percent of AAAs secure funds from additional sources, and the average AAA utilized funding from six sources to provide services in their communities. The amount secured by AAAs varies. OAA services are available to all people age 60 and older who need assistance. The law did not, however, establish an open-ended entitlement available to all seniors, nor was it intended to meet all of seniors’ needs. OAA requires providers to target, or place a priority on reaching, seniors with the greatest economic and social need, and defines them as individuals who have an income at or below the poverty level, or who are culturally, socially, or geographically isolated, face language barriers, or have physical and mental disabilities. Targeting these seniors who are most in need may include a local agency locating a congregate meal site in a low-income neighborhood or working collaboratively with organizations that represent minority seniors. In addition, some services are targeted to vulnerable groups by definition. Examples of these include the long-term care ombudsman program, family caregiver support services, and assisted transportation to those with limited mobility. OAA gives state and local agencies flexibility in determining which populations to target. The recent health care reform legislation—the Patient Protection and Affordable Care Act—contains new provisions for senior health care, including one removing barriers to home- and community-based services under Medicaid.
While these changes may shift the provision of some services for seniors from OAA to Medicaid, the extent of this shift is unknown; nevertheless, seniors will likely continue to look to OAA-funded providers for a range of assistance. Local agencies that responded to our survey identified home-delivered meals and transportation as frequently requested services in fiscal year 2009. These agencies also said they receive many requests for information and assistance services—help locating resources and programs—and for respite for caregivers. In preliminary responses to our survey, 49 of 61 local agencies said more seniors requested home-delivered meals than congregate meals. Forty-four of our 67 survey respondents thus far cited transportation and 43 cited information and assistance as the support services requested most frequently. One local official we spoke with in Wisconsin highlighted the importance of transportation services for his rural clients, while an agency official in Massachusetts said OAA transportation services can be important in urban settings because seniors often prefer them to mass transit options. In addition, 36 of the 63 local agencies that have responded to our survey and track such requests said respite services were most frequently requested by caregivers in fiscal year 2009. Respite care provides temporary caregiving for seniors so that a family member can take a break or engage in other activities. Some agencies responding to our survey said they are currently unable to meet all requests for services. Thirteen of 67 agencies said they are generally or very unable to serve all clients who request home-delivered meals; 15 of the 63 agencies that provide transportation services said they are generally or very unable to meet all transportation requests. Of the 64 agencies that provide respite care, 17 said they were generally or very unable to meet all requests.
State and local officials we spoke with also said requests for some OAA services are increasing. Specifically, officials at several local agencies we visited described increased requests for home-delivered meals, transportation, or home-based services. Officials attributed these increases to several factors. First, some agency officials said there are increasing numbers of Americans who are age 60 and older and eligible for services. According to U.S. Census data, more than 9 million more Americans were 60 years and older in 2009 than in 2000, and the Census Bureau projects that population group will continue to grow. Second, some agency officials told us requests for OAA services such as home-delivered meals and home-based care are increasing as more seniors stay in their homes longer rather than move to assisted living facilities or nursing homes. For example, state officials in Wisconsin said their client population is increasingly older and those who remain in their homes are less likely to go out, leading many to request home-delivered meals. Lastly, most agencies that responded to our survey said requests for services have increased since the economic downturn began. Forty-eight of 61 said they have received increased requests for home-delivered meals, 44 of 62 for support services such as transportation, and 40 of 61 agencies for caregiver services since the downturn began. Twenty-five of 60 agencies said they had increased requests for congregate meals, even as long-term trends show a decline in use of this service. A survey conducted by the National Association of State Units on Aging to determine the impact of the economic crisis on state-provided services also found requests for the types of services provided by OAA increased, particularly for home-delivered meals, transportation, and personal care.
Some researchers have concluded that older Americans have been hard hit by the economic recession for reasons such as depreciating home values and retirement accounts. These increasing economic challenges may lead to increased need for services like those provided by OAA programs. Given the number of agencies that cannot meet all requests for services and the increasing demand for certain services, agencies must make decisions about which applicants to serve. To reach and serve seniors with the greatest economic or social need, local agencies responding to our survey reported a range of strategies. Over 50 of 67 agencies said they advertise, conduct outreach, and coordinate with other local organizations to reach and provide services to seniors who are targeted by OAA: seniors who are low-income, minority, or live in rural areas. At least 47 of 67 said they use these approaches to reach seniors who speak limited English, another group targeted by OAA. Additionally, most local agencies reported screening potential clients to assess whether seniors requesting home-delivered meals or respite care had physical limitations that made these types of services particularly beneficial. For example, at one local agency where demand often exceeds supply, an official said preference may be given to those most at risk for hospitalization due to diagnosed malnutrition or chronic diseases managed through nutrition, such as diabetes. Most local agencies did not screen for congregate meals or transportation services. Some officials we spoke to said there are additional seniors who need services but do not contact OAA providers to request them. For example, one local official in Illinois said needs assessments and anecdotal information indicate a much greater need for services than requests to the agency indicate.
Similarly, researchers from one organization we spoke with surmised that if more seniors knew about the types of services available through Title III, the requests for such services would be greater. Local agencies have adopted a number of coping mechanisms to address seniors’ requests and decreased funding. Preliminary responses to our survey indicate agencies utilize the flexibility provided by the OAA to transfer funds among Title III programs to meet requests from seniors for services. Twenty-eight of 61 local agencies responding to our question said they transferred funds among programs in fiscal year 2009, most often moving funds from congregate meals, which are less requested, to home-delivered meals or other services. On a national level, nearly 20 percent of OAA funding for congregate meals in fiscal year 2008 was transferred out of the program by states and split almost evenly between home-delivered meals and support services, AoA data show. (See fig. 2.) As a result, support services and home-delivered meal programs experienced an 11 percent and 20 percent net increase, respectively, in Title III funds. On the state level, 34 states transferred funds from congregate meals to home-delivered meals in fiscal year 2008, according to AoA data. The ability to transfer funds offers states flexibility, yet some officials have questioned the need for meal funding to arrive in two streams. For example, Wisconsin state officials said maintaining separate funding for congregate and home-delivered meals creates a cumbersome process in which the state has to deal with multiple rules to allocate funds to services that are most needed. Similarly, Rhode Island state officials said they would like to see a single Title III, Part C, meal program because requests for congregate meals have decreased.
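The transfer arithmetic described above can be illustrated with invented figures. The dollar amounts below are hypothetical, chosen only so that a 20 percent transfer out of congregate meals, split evenly, yields net increases of roughly 20 and 11 percent for the receiving programs; they are not AoA's actual fiscal year 2008 totals.

```python
# Hypothetical program totals ($ millions); not actual AoA data.
congregate, home_delivered, support = 440.0, 220.0, 400.0

transfer = 0.20 * congregate           # ~20 percent moved out of congregate meals
to_home = to_support = transfer / 2    # split almost evenly between two programs

congregate -= transfer
home_delivered += to_home
support += to_support

print(f"home-delivered net change: {to_home / 220.0:+.0%}")       # +20%
print(f"support services net change: {to_support / 400.0:+.0%}")  # +11%
```

The same dollar transfer produces a larger percentage gain for home-delivered meals simply because that program starts from a smaller base.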
In addition, in fiscal year 2008, 32 states transferred funds from the congregate meal program to Title III, Part B, services such as personal care, homemaker assistance, and transportation services. Local officials in Wisconsin told us federal funding for Part B services is not sufficient to meet requests. In addition to receiving federal funding, the programs created by Title III of OAA receive funding from other sources as well. (See fig. 3.) OAA funds to states and local agencies increased in fiscal year 2009 by $97 million due to Recovery Act funding explicitly for meal programs. But many of the local agencies responding to our survey reported overall decreases in funding from fiscal year 2009 to fiscal year 2010. Forty-four of 64 local agencies said state funding—the second largest source of funding for these programs nationally—decreased for fiscal year 2010. This is consistent with information reported by the National Association of State Units on Aging (NASUA). NASUA found that most states reported state budget shortfalls in fiscal year 2010 and reduced budgets for aging services. Local agencies also use funds from local governments, voluntary client contributions, and private sources, and our preliminary survey results indicate these funds also declined in fiscal year 2010. Some local agencies responding to our survey reported reducing services as a result of funding cuts. Twelve of 64 local agencies said they reduced support services, an additional 12 of 63 reported reducing nutrition services, and 9 of 64 reported reducing caregiver services. To replace lost state and local monies and maintain service levels to seniors, just under half of those responding to our survey said they took some steps to reduce administrative and operations costs and used Recovery Act funds to fill budgeting gaps.
In our preliminary survey results, 27 of 65 agencies reported cutting administrative expenses, 22 of 54 reported cutting capital expenses, and 26 of 62 reported cutting operating expenditures in fiscal year 2010. Local agencies responding to our survey said they cut expenses in many ways such as by relocating to a smaller building with lower overhead costs, stretching meal service supplies, decreasing travel expenses, and limiting raises for employees. Additionally, 29 of 63 said they did not fill vacant positions. These preliminary survey data are consistent with what we heard from state officials on our site visits. State officials in Wisconsin, for example, told us that as a result of the state’s budget deficit, the agency was unable to fill vacant positions and had cut planning, administration, and monitoring activities in order to avoid cutting services to seniors. Illinois state officials told us the last budget cycle included a 10 percent decrease in state funds for aging services, and there were layoffs, required furlough days, and positions left vacant as a result. Some state and local agencies we visited also told us they adapt to limited funding or increased requests for services by providing less service to all rather than full service to only some. For example, a local official in Massachusetts said that some seniors are given fewer transit rides so others can be accommodated. A state official in Illinois said some local areas resolve the funding shortfalls by reducing the number of hours they provide respite services for each caregiver. Local agencies said they used Recovery Act funds to fill meal budget gaps or to expand existing nutrition programs or create new ones. Nationwide, the Recovery Act provided $65 million for congregate meals and $32 million for home-delivered meals, or about 13 percent of the total OAA allocation for meals in fiscal year 2009.
Unlike regular Title III meal funds, Recovery Act meal funds could not be transferred among programs. Thirty-nine of 61 local agencies said it was moderately to extremely challenging that Recovery Act funds could not be transferred among meal programs. OAA Title III programs are an invaluable support mechanism for many seniors, providing a varied network of care and services as they age. Seniors’ needs for the types of services provided through these programs will only increase over time since demographic studies show a larger proportion of Americans will be age 60 and older over the next few decades. Programs that allow seniors to remain in their own homes and communities afford seniors the independence and dignity they desire. As current fiscal stress and looming deficits continue to constrain available resources, it will be increasingly important for all elements of the home and community-based service network to focus services on those in greatest need. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you may have. To determine the Title III services requested most often, local agencies’ use of federal funds, and steps agencies take to deliver resources to those most in need, we conducted a Web-based survey of a random national sample of 125 Area Agencies on Aging (AAA). The survey included questions about: (1) utilization of OAA Title III services, (2) requests for OAA Title III services, (3) approaches for measuring unmet need to target resources to areas of greatest need, (4) use of OAA Title III funds, and (5) the economic climate and use of American Recovery and Reinvestment Act (Recovery Act) funds. We drew a simple random sample of 125 agencies from a pool of 638 agencies. This pool included all 629 area agencies on aging (AAA) that operate in the 50 states and District of Columbia, as well as nine State Units on Aging (SUA) in states that do not have AAAs.
We included these nine state agencies in our pool for sample selection because the SUA performs the function of AAAs in those states. We conducted four pretests to help ensure that survey questions were clear, terminology was used correctly, the information could be obtained, and the survey was unbiased. Agencies were selected for pretesting to ensure we had a group of agencies with varying operating structures, budget sizes, and geographic regions of the country. As a result of our pretests, we revised survey questions as appropriate. In June 2010, we notified the 125 AAAs that were selected to complete our survey and e-mailed a link to complete the Web survey to these agencies beginning July 1, 2010. The survey is ongoing, and the information included in this testimony presents preliminary results, based on the 67 responses (54 percent) we received as of July 30, 2010. Some individual questions have lower response rates. The practical difficulties of conducting any survey may introduce nonsampling errors. For example, difficulties in interpreting a particular question, sources of information available to respondents, or entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire to minimize such nonsampling error. Due to the preliminary nature of the results, the information presented in this testimony is not intended to be generalizable to all AAAs. We also reviewed relevant statutory provisions and used site visit interviews and Administration on Aging (AoA) State Program Report data to answer our two research questions. In March 2010, we visited Illinois, Massachusetts, Rhode Island, and Wisconsin. These states were selected due to varying sizes of the population age 60 and over and Title III expenditures.
Additionally, we considered geographic region, proximity to AoA regional support centers, and a desire to interview at least one state without AAAs (Rhode Island). We interviewed officials from the SUA, AAAs, and AoA regional support centers. We also analyzed AoA State Program Report data available on the agency’s Web site and at www.agidnet.org. We assessed the validity and reliability of these data by interviewing AoA officials, assessing officials’ responses to a set of standard data reliability questions, and reviewing internal documents used to edit and check data submitted by states. We determined the data were sufficiently reliable for purposes of this review. To determine steps agencies take to deliver resources to those most in need, we also analyzed the most recently available state aging plan for the 50 states and District of Columbia. Each state is required to submit a state aging plan to AoA for review and approval covering a two-, three-, or four-year period. The aging plan should include state long-term care reform efforts with an emphasis on home and community-based services, strategies the state employs to address the growing number of seniors, and the priorities, innovations, and progress the state seeks to achieve in addressing the challenges posed by an aging society. For further information regarding this testimony, please contact Kay Brown at (202) 512-7215 or e-mail [email protected]. Key contributors to this testimony were Kimberley M. Granger-Heath, Susan Aschoff, James Bennett, Ramona Burton, Andrea Dawson, Justin Fisher, Luann Moy, Barbara Steel-Lowney, and Craig Winslow. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Administered by the Administration on Aging (AoA) in the Department of Health and Human Services (HHS), Title III of the Older Americans Act (OAA) is intended to assist individuals age 60 and older by providing supportive services. Title III, Medicaid and Medicare, state, and other sources of funding provide for several types of services, including congregate and home-delivered meals, transportation, and support for caregivers. This testimony reports on ongoing GAO work in preparation for the reauthorization of the OAA and a full report to be issued by GAO in 2011. Based on preliminary findings, GAO describes (1) Title III services most requested by seniors and how state and local agencies reach those most in need, and (2) how agencies have coped with increasing requests in the current economic environment. To do this, GAO reviewed aging plans from the 50 states and District of Columbia; conducted site visits to 4 states; interviewed national, state, and local officials; and analyzed preliminary responses to a Web-based survey of 125 Local Area Agencies on Aging for fiscal year 2009. The survey data used in this document reflect a 54 percent response rate as of July 30, 2010. The survey is still in progress and our results are not generalizable at this time. GAO shared its findings with AoA and incorporated their comments as appropriate. Seniors frequently requested home-delivered meals and transportation services, and based on preliminary responses to GAO's survey and information from site visits, demand for some Title III services may be increasing. Some agencies said they were unable to meet all requests for services in fiscal year 2009. For example, 13 of 67 survey respondents said they were generally or very unable to serve all seniors who requested home-delivered meals, and 15 of 63 said they were generally or very unable to serve all who requested transportation assistance. 
Local officials cite seniors' desire to remain in their homes as they age and the economic downturn as possible reasons for increased requests. Given this demand, providers must make decisions about which applicants will receive services. OAA requires providers to target those with the greatest economic and social need—those who are low-income, minority, lacking proficiency in English, or rural residents—and local officials said they advertise, conduct outreach, and coordinate with other local organizations to identify and serve these groups. Additionally, most local agencies reported screening potential clients to assess level of need, for example, to determine those most at risk of hospitalization due to poor nutrition. In addition to these known service needs, an unknown number of other seniors may need services but not know to contact OAA providers, some officials told GAO. Local agencies that responded to GAO's survey reported using the flexibility afforded by the OAA to transfer funds among Title III programs to meet increased requests for specific services. Twenty-eight of 61 local agencies said they transferred funds in fiscal year 2009, most often moving funds from congregate meals to home-delivered meals or other services. Although the American Recovery and Reinvestment Act (Recovery Act) provided an additional $97 million specifically for meal programs, Title III programs are heavily reliant on state funds, and 44 of 64 local agencies responding to the survey said their state funding was reduced for fiscal year 2010. To cope with funding reductions, some reported cutting services to seniors. Twenty-seven of 65 local agencies said they cut administrative expenses in fiscal year 2010; others relocated offices or left agency positions vacant. Some state and local officials said they provided less service to individuals so that more could get some amount of assistance.
Some agencies said they used Recovery Act funds to replace lost state and local funding or created new programs, but the funding was restricted to meal services and was a relatively small percentage of total OAA allocations. The proportion of Americans age 60 and over will continue to grow over the coming decades, and demand for Title III services also will likely grow. Therefore it will be increasingly important for service providers to focus services on those most in need.
|
Under the federal Identity Theft Act, a criminal offense is committed if a person “knowingly transfers or uses, without lawful authority, a means of identification of another person with the intent to commit, or to aid or abet, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable State or local law …” The relevant section of this legislation is codified at 18 U.S.C. § 1028(a)(7) (“fraud and related activity in connection with identification documents and information”). According to an analysis of the new law by the United States Sentencing Commission: Before passage of the 1998 act, the unauthorized use or transfer of identity documents was illegal under title 18 of the U.S. Code, section 1028—which included subsections (a)(1) through (a)(6). The unauthorized use of credit cards, personal identification numbers, automated teller machine codes, and other electronic access devices was illegal under another section of the U.S. Code—that is, 18 U.S.C. § 1029 (“fraud and related activity in connection with access devices”). The addition of subsection (a)(7) to section 1028 expanded the definition of “means of identification” to include such information as SSNs and other government identification numbers, dates of birth, and unique biometric data (e.g., fingerprints), as well as electronic access devices and routing codes used in the financial and telecommunications sectors. Under the Identity Theft Act, the new definition of means of identification includes prior statutory definitions of “identification documents.” According to the United States Sentencing Commission, a key impact is to make the proscriptions of the new identity theft law applicable to a wide range of offense conduct, which can be independently prosecuted under numerous existing statutes.
That is, any unauthorized use of means of identification can now be charged either as a violation of the new law or in conjunction with other federal statutes. In further elaboration of the breadth of the definition of means of identification and its impact, the Sentencing Commission’s analysis noted the following: The new law covers offense conduct already covered by a multitude of other federal statutes. The unauthorized use of credit cards, for instance, is already prosecuted under 18 U.S.C. § 1029, but now also can be prosecuted under the newly enacted 18 U.S.C. § 1028(a)(7). Other examples of offense conduct include providing a false SSN or other identification number to obtain a tax refund and presenting false passports or immigration documents by using the names and addresses and photos of lawful residents or citizens to enter the United States. In total, according to the Sentencing Commission, the violation of some 180 federal criminal statutes can potentially fall within the ambit of 18 U.S.C. § 1028(a)(7). Regarding state statutes, at the time of our 1998 report, only a few states had specific laws to address identity theft. Now, as table 1 shows, 44 states have specific laws that address identity theft, and 5 other states have laws that cover activities included within the definition of identity theft. Almost one-half (22) of these 49 states enacted relevant laws in 1999. According to FTC’s analysis, identity theft can be a felony offense in 45 of the 49 states that have laws to address this crime. In the view of Justice Department Criminal Division officials, the enactment of state identity theft laws has multi-jurisdictional benefits to all levels of law enforcement—federal, state, and local. In explanation, Justice officials commented that the various state statutes, coupled with the federal statute, provide a broader framework for addressing identity theft, particularly when a multi-agency task force approach is used. 
The Justice officials noted, for instance, that it is very plausible for a task force to generate multiple cases, some of which can result in federal prosecutions and others in state or local prosecutions. Law enforcement agencies widely acknowledge that SSNs often are used as identifiers by thieves to obtain or “breed” other identification documentation. Through its fraud hotline, SSA/OIG annually receives thousands of allegations of fraud, waste, and abuse. Most of these allegations are classified by SSA/OIG as involving either (1) SSN misuse or (2) program fraud that may contain elements of SSN misuse. In these two categories, SSA/OIG received about 62,000 allegations in fiscal year 1999, about 83,000 allegations in fiscal year 2000, and about 104,000 allegations in fiscal year 2001. SSA/OIG officials explained these two categories of allegations as follows:

- Allegations of “SSN misuse” include, for example, incidents where a criminal uses the SSN of another individual to fraudulently obtain credit, establish utility services, or acquire goods. SSNs are also misused to violate immigration laws, flee the criminal justice system by assuming a new identity, or obtain personal information to stalk an individual. Generally, this category of allegations does not directly involve SSA program benefits.

- On the other hand, allegations of fraud in SSA programs for the aged, survivors, or disabled often entail some element of SSN misuse. For example, a criminal may use the victim’s SSN or other identifying information to obtain Social Security benefits. When hotline staff receive this type of allegation, it is to be classified under the appropriate category of program fraud.

In 1999, SSA/OIG analyzed a sample of SSN misuse allegations and determined that about 82 percent of such allegations related directly to identity theft. 
The analysis covered a statistical sample of 400 allegations from a universe of 16,375 allegations received by the fraud hotline from October 1997 through March 1999. The analysis did not cover the other category mentioned previously, that is, allegations of program-related fraud with SSN misuse potential. There are no comprehensive statistics on the number of investigations, convictions, or other law enforcement results under the Identity Theft Act. As noted in our March 2002 report, federal law enforcement agencies generally do not have information systems that facilitate specific tracking of identity theft cases. For example, while the amendments made by the Identity Theft Act are included as subsection (a)(7) of section 1028, Title 18 of the U.S. Code, EOUSA does not have comprehensive statistics on offenses charged specifically under that subsection. EOUSA officials explained that, except for certain firearms statutes, staff are required to record cases only to the U.S. Code section, not the subsection or the sub-subsection. Given the absence of comprehensive statistics, we obtained relevant anecdotes or examples of actual investigations and prosecutions under the federal statute. For instance, about 2 years after passage of the Identity Theft Act, a senior Department of Justice official testified at a May 2001 congressional hearing that U.S. Attorneys’ Offices throughout the nation were making substantial use of the new federal law that recognized identity theft as a separate crime. In testimony, the Justice official said that federal prosecutors had used the new statute—18 U.S.C. § 1028(a)(7)—in at least 92 cases to date. One example cited in the testimony involved a defendant who stole private bank account information about an insurance company’s policyholders and used that information to withdraw funds from the accounts of the policyholders and deposit approximately 4,300 counterfeit bank drafts totaling more than $764,000. 
The case was prosecuted in the Central District of California. The defendant pled guilty to identity theft and related charges and was sentenced to 27 months of imprisonment and 5 years of supervised release. Another case cited by the Justice official illustrates that identity theft crimes can have fact-pattern elements encompassing more than one jurisdiction. The case involved a California resident, who committed fraudulent acts in the state of Washington by, among other means, using a Massachusetts driver’s license bearing the name of an actual person not associated with the criminal activities. Also, this case further illustrates that identity theft is rarely a stand-alone crime; rather, it frequently is a component of one or more white-collar or financial crimes, such as bank fraud, credit card or access device fraud, or wire fraud. Pertinent details of this case, prosecuted in the Western District of Washington, are as follows:

- Over a period of time in 1999 and 2000, the defendant and other conspirators assumed the identities of third persons without their consent and authorization and fraudulently used the SSNs and names of actual persons. Also, the conspirators created false identity documents, such as state identification cards, driver’s licenses, and immigration cards.

- Using the identities and names of third persons, the conspirators opened banking and investment accounts at numerous locations and obtained credit cards. The defendant and other conspirators presented and deposited at least 12 counterfeit checks (valued in excess of $1 million) to various banks and investment companies in western Washington.

- Also, the conspirators purchased legitimate cashier’s checks, in nominal amounts, and then altered them to reflect substantially greater amounts. The conspirators presented or deposited at least five altered checks (worth almost $350,000) in the Seattle area. 
According to Justice, in July 2000, the defendant pled guilty to committing three felony counts of identity theft, conspiring to commit wire fraud involving attempted losses in excess of $1 million, and using an unauthorized credit card. During our current review, Justice Department Criminal Division officials told us that federal prosecutors consider the Identity Theft Act to be a very useful statute. The officials said, for instance, that prosecutors endorse the statute because it provides broad jurisdiction. Further, the Justice officials noted that the Identity Theft Act provides another tool for prosecutors to use, even though in many instances the defendants may be charged under other white-collar crime statutes. The officials explained that identity theft is rarely a stand-alone crime. Thus, cases involving identity theft or identity fraud may have charges under a variety of different statutes relating to these defendants’ other crimes, such as bank fraud, credit card fraud, or mail fraud. Appendix II summarizes selected federal cases prosecuted for such multiple charges, including charges of violations of 18 U.S.C. § 1028(a)(7). As with the federal Identity Theft Act, we found no centralized or comprehensive data on enforcement results under state identity theft statutes. However, officials in selected states provided us with examples of actual cases illustrating the use of such statutes. Also, officials in these states noted various challenges encountered in enforcing identity theft statutes—challenges involving topics such as the filing of police reports, the use of limited resources, and the resolution of jurisdictional issues. The crime of identity theft is not specifically recorded as an offense category in the FBI’s Uniform Crime Reporting (UCR) Program. 
Further, our inquiries with various national organizations—the National Association of Attorneys General, the National District Attorneys Association, and the International Association of Chiefs of Police—indicated that these entities do not have comprehensive data on arrests or convictions under state identity theft laws. In the absence of national data on enforcement of state identity theft laws, we contacted officials in 10 states—Arizona, California, Florida, Georgia, Illinois, Michigan, New Jersey, Pennsylvania, Texas, and Wisconsin. As table 2 shows, each of these 10 states has a specific statute that makes identity theft a crime and provides for imprisonment of convicted offenders. The length of imprisonment varies by state, ranging up to 30 years. As with the national organizations we contacted, state officials could not provide aggregate data on law enforcement results (e.g., total number of arrests, prosecutions, or convictions) under their respective state’s identity theft statute. However, the officials were able to provide us with examples of actual cases prosecuted under these statutes. The following sections discuss case examples for three states—California, Michigan, and Texas. Presented for illustration purposes only, these cases are not necessarily representative of identity theft crimes in these or other states. Also, as with federal cases, the state case examples indicate that identity theft can be a component of other crimes, such as check and credit card fraud, as well as computer-related crimes. 
Effective January 1, 1998, under section 530.5 of the California Penal Code, any person “who willfully obtains personal information … of another person without the authorization of that person, and uses that information for any unlawful purpose, including to obtain, or attempt to obtain credit, goods, services, or medical information in the name of the person without the consent of that person, is guilty of a public offense.” According to the officials we contacted in California, there is no centralized source of aggregate or statewide statistics regarding the number of investigations, arrests, or prosecutions under California’s identity theft statute. However, federal law enforcement officials told us that, relative to many other states, the prevalence of identity theft appears to be high in California. The federal officials also commented that new or different types of identity theft schemes often appear to originate on the west coast and then spread east. Regarding identity theft cases handled at the state level, in October 2001, one California deputy attorney general told us that she was handling four active cases, which she described as a “tiny drop in the bucket” relative to the crime’s prevalence. Further, she noted that the four active cases had one thing in common: the number of victims was “in the hundreds” or even “never ending.” Also, in October 2001, another California deputy attorney general told us that, at an identity theft conference hosted by the California attorney general in May 2001, two local law enforcement agencies reported thousands of active cases. Specifically, the Los Angeles County Sheriff’s Office reported 2,000 active cases, and the Los Angeles Police Department reported 5,000 active cases. More recently, in March 2002, we contacted the Los Angeles Police Department to obtain updated information. 
According to the detective supervisor of the Identity Theft and Credit Card Squad, over 8,000 cases of identity theft were reported to the department in calendar year 2001. He estimated that about 70 percent of these cases involved utility or cellular telephone fraud and the other 30 percent involved credit card fraud and check fraud. Further, the detective supervisor said that the department accepts reports of identity theft only if the victim is a resident of Los Angeles. Michigan’s identity theft statute—codified at Mich. Comp. Laws § 750.285—was adopted by the state legislature on December 7, 2000, and became effective April 1, 2001. This new law created a 5-year felony offense for identity theft, making it illegal for a person to obtain or attempt to obtain, without authorization, the “personal identity information” of another person with the intent to use that information unlawfully to (1) obtain financial credit, employment, or access to medical records or information contained in them; (2) purchase or otherwise obtain or lease any real or personal property; or (3) commit any illegal act. One state-level entity that handles investigations and prosecutions of identity theft is the High Tech Crime Unit of the Michigan Department of the Attorney General. This unit deals with computer crimes and crimes committed over the Internet—crimes in which identity theft is often an element. According to the Michigan assistant attorney general who serves as Chief of the High Tech Crime Unit, the state’s first criminal prosecution under the 5-year felony statute was initiated by the unit in August 2001. In this case, a woman was charged with stealing personal identity information from her former employer, using that information to apply over the Internet for several credit cards, and making unauthorized purchases (approximately $1,000) on such cards. The woman pled guilty and was sentenced to 1 year of probation and required to pay restitution. 
The Chief also said that, as of June 2002, three other cases were pending under Michigan’s identity theft statute. We also contacted the Office of the Prosecuting Attorney for Oakland County, Michigan. A deputy prosecutor told us that in the approximately 8 months since Michigan’s identity theft statute had been in effect—that is, from April 1, 2001, to the time of our inquiry in early December 2001—one case had been initiated in Oakland County under the statute. This official said that the case, which involved a defendant who had obtained the victim’s personal information and used it to apply for a credit card, was still ongoing in the county’s court system. Texas’ identity theft statute—codified at Texas Penal Code § 32.51—became effective September 1, 1999. Modeled after the federal Identity Theft Act, Texas’ law provides that a person commits the offense of identity theft if he or she “obtains, possesses, transfers, or uses identifying information of another person without the other person’s consent and with intent to harm or defraud another.” According to officials we contacted in Texas, there is no centralized source of aggregate or statewide statistics regarding the number of identity theft investigations, arrests, or prosecutions under Texas Penal Code § 32.51. In response to our inquiry, the Internet Bureau of the Texas Attorney General’s Office reported that it had opened 12 identity theft cases during the period September 2000 through August 2001. According to an Internet Bureau official, these cases had resulted in three arrests and indictments, as of November 2001. In one of these cases, a temporary employee of a technology company allegedly stole personal identifying information from the company’s employee database and provided the information to an accomplice, who used the information to apply for bank credit online and collect fees paid by the banks for each application. Reportedly, the scheme affected hundreds of employees. 
The Internet Bureau official told us that each application using a stolen identity was considered a separate violation and that two suspects had been criminally charged. We also contacted the Dallas County District Attorney’s Office. While the office did not have any readily available statistics on identity theft cases, an assistant district attorney said that the office had handled a variety of identity theft cases, involving check and credit card fraud, as well as fraudulent purchases of vehicles and the acquisition of utility services. The assistant district attorney noted that some of these crimes had been perpetrated by organized rings. One example cited involved a group of three individuals, who made approximately $750,000 in illegal transactions in less than 180 days by using identity fraud coupled with other traditional crimes such as credit card abuse, forgery of commercial instruments, and securing loans through deception. Many of the officials in the 10 states we contacted noted various challenges or obstacles to enforcing identity theft statutes. As discussed in the following sections, these challenges involved topics such as the filing of police reports, the use of limited resources, and the resolution of jurisdictional issues. Reports filed by identity theft victims with law enforcement agencies are an important first step in enabling investigation of such crime. Also, police reports can be useful to victims who need to provide documentation of the crime to creditors and debt collectors. However, FTC data show that 59 percent of the victims who contacted the FTC during a 12-month period (Nov. 1999 through Oct. 2000) had already contacted the police, but 35 percent of these victims reported that they could not get a police report. 
Partly because identity theft is still a non-traditional crime, some police departments are unaware of the importance of taking reports of identity theft, much less initiating investigations. A resolution adopted by the International Association of Chiefs of Police acknowledged this problem, noting that: “… reports of identity theft to local law enforcement agencies are often handled with the response ‘please contact your credit card company,’ and often no official report is created or maintained, causing great difficulty in accounting for and tracing these crimes, and leaving the public with the impression their local police department does not care…” According to FTC staff, even though the association’s resolution is not binding, it sends an important message to police around the country. Also, FTC staff indicated that they have reinforced the same message at numerous law enforcement conferences throughout the nation. FTC data show that 46 percent of the victims who contacted the FTC in calendar year 2001 reported that they had already contacted a police department, and 18 percent of these victims reported that they could not get a police report—a reduction of about half from the percentage of victims who reported being unable to get a police report in the November 1999 through October 2000 period. Despite this progress, the importance of police reports remains a topic for continuing focus. 
For example, in January 2002, a Florida study reported that some of the state’s law enforcement agencies “are reluctant to take identity theft complaints and do not generate reports in some cases.” Consequently, the study recommended that “all law enforcement agencies be required to generate a report on identity theft complaints regardless of their subsequent decision on whether or not they will investigate the case.” Also, during our review, a federal official told us that a continuing priority of the Attorney General’s Identity Theft Subcommittee is to help educate local police departments about the critical first step of taking reports from victims of identity theft crime. In this regard, the Secret Service is developing a police training video with the cooperation of the FTC, Department of Justice, and the International Association of Chiefs of Police, which is anticipated to be completed by September 30, 2002. Among other purposes, the training video is to emphasize the importance of police reports in identity theft cases. Officials in several of the 10 states included in our study told us that the level of resources being allocated to investigate and prosecute identity theft often is insufficient. This observation was voiced, for example, by a deputy district attorney in California (Los Angeles County), who told us that there are not enough investigators and prosecutors to handle the county’s identity theft cases. Similar comments were provided to us by a supervisor in the Consumer Fraud Division of the Illinois Cook County State’s Attorney’s Office, which reportedly is the second largest prosecutor’s office in the nation, with over 900 assistant state’s attorneys. 
In addition to noting that more prosecutors and support staff were needed to effectively combat identity theft, the supervisor commented that funds were needed for training local police agencies in how to handle the more complex cases involving multiple victims, multiple jurisdictions, and voluminous documents. Further, a chief deputy attorney in the Philadelphia District Attorney’s Office commented that, given competing priorities and other factors, there is little incentive for police departments in Pennsylvania to allocate resources for investigating identity theft cases. This official said that police departments are more inclined to use their limited resources for investigating violent crimes and drug offenses rather than handling complicated identity theft cases that, even if successfully prosecuted, often lead to relatively light sentences. In explanation, the chief deputy attorney noted the following:

- Identity theft cases require highly trained investigators, require longer-than-usual efforts, and often end without an arrest. Also, under the state’s identity theft statute, the first offense is a misdemeanor, although identity theft may be a “lesser included offense” with felony charges involving forgery and theft, given that the fact patterns of these crimes may overlap.

- Even when convictions are obtained, identity theft cases generally do not result in long sentences. For instance, to get a minimum prison term of 1 year for an economic crime in Pennsylvania, a defendant probably would have to steal approximately $100,000. In contrast, a felony drug conviction involving more than 2 grams of cocaine or heroin—an amount with a street value of about $200—carries a mandatory minimum sentence of 1 year of imprisonment.

Despite resource and other challenges, the chief deputy attorney said that the Philadelphia District Attorney’s Office does handle identity theft cases. 
He estimated, for instance, that the office investigated about 100 to 200 identity theft cases in calendar year 2000, and he said these cases represented a “small fraction” of the total number of reported cases in Philadelphia. According to many of the state and local officials we contacted, jurisdiction and venue problems are common in identity theft cases. The officials noted, for instance, that many identity theft cases present cross-jurisdictional issues, such as when a perpetrator steals personal information in one city and uses the information to conduct fraudulent activities in another city or another state. In this regard, an official in one state told us that law enforcement agencies sometimes tend to view identity theft as being “someone else’s problem.” That is, the police department in the victim’s area of residence refers the victim to the police department in another county or state where the perpetrator used the personal information—and, in turn, the remote police department refers the victim back to the area-of-residence police department. To help mitigate this type of problem, some of the states’ identity theft statutes have provisions that permit multiple counties to have jurisdiction. For example, Arizona’s identity theft statute has a provision that allows victims to file reports in any jurisdiction within the state where the theft or related activities arising from the theft occur. Thus, if a credit card is stolen in Phoenix and used in Tempe, the victim may file in either jurisdiction. Similarly, Florida modified its identity theft statute, effective July 1, 2001, to specify that the crime of identity theft can be investigated and prosecuted in the county in which the victim resides or where any element of the crime occurred. 
Also, during our study, a Wisconsin Department of Justice official told us that consideration was being given to amending Wisconsin’s identity theft law to permit prosecution of such crime in the jurisdiction of the victim’s residence, in addition to any jurisdiction where the stolen personal identity information was fraudulently used. Many federal, state, and local law enforcement agencies have roles in investigating and prosecuting identity theft. Federal agencies include, for example, the FBI, Secret Service, IRS (Criminal Investigation), Postal Inspection Service, and SSA/OIG, as well as U.S. Attorneys’ Offices. However, most identity theft crimes fall within the responsibility of local investigators and prosecutors—such as city police departments, county sheriffs’ offices, and county district attorney offices—although state-level agencies, such as state attorney general offices, also have a role. Generally, the prevalence of identity theft and the frequently multi- or cross-jurisdictional nature of such crime underscore the importance of having means for promoting cooperation or coordination among federal, state, and local law enforcement agencies. One such means is the establishment of law enforcement task forces with multi-agency participation. Other relevant means include a coordinating entity (the Attorney General’s Identity Theft Subcommittee) and an information-sharing database (accessible via the FTC’s Consumer Sentinel Network) established with federal leadership. However, as discussed in the following sections, there are opportunities for promoting greater awareness and use of the Consumer Sentinel Network. Task forces are perhaps the most commonly used means of promoting cooperation or coordination among law enforcement agencies to address identity theft cases involving multiple jurisdictions. 
A main advantage of task forces, according to Secret Service officials, is that the pooling of resources and expertise results in more thorough investigations and better continuity from inception of the investigations through prosecution. The officials also noted that improved interagency relationships result in the sharing of investigative leads, the bridging of jurisdictional boundaries, and the avoidance of duplicated efforts. Regarding the views of state officials, a California deputy attorney general, who was working on a task force that included federal and local law enforcement agencies, told us that this approach simplified all aspects of multi-jurisdictional issues, particularly given that each agency has its own “go to” person. Generally, task forces can have participating agencies from all levels of law enforcement—federal, state, and local—and may also have private sector representation. The following sections provide examples of task forces developed by federal (Secret Service) and state (California and Florida) leadership, respectively. The scope of our work did not include assessing the effectiveness of these task forces. At the time of our review, the Secret Service was the lead agency in 38 task forces across the country that were primarily targeting financial and electronic crimes—categories of crimes that frequently have identity theft-related elements. According to the Secret Service, electronic crimes task forces concentrate on crimes involving e-commerce, telecommunications fraud, and computer intrusions (hacking), as well as cases involving missing and exploited children. An identity theft-related example is an investigation initiated in December 2000 by the electronic crimes task force of the Secret Service’s New York Field Office. 
According to Secret Service testimony presented in May 2001 at a congressional hearing: The investigation, which was conducted jointly by the Secret Service and the New York Police Department, determined that the credit card accounts of many of the nation’s wealthiest chief executive officers, as well as many other citizens, had been compromised. Using the Internet and cellular telephones, the perpetrators obtained the victims’ credit card account numbers and then established fictitious addresses to conduct fraudulent transactions. Also, the perpetrators attempted to transfer approximately $22 million—from the legitimate brokerage and corporate accounts of the victims—into fraudulently established accounts for conversion to the perpetrators’ own use. Table 3 presents an example of another Secret Service electronic crimes task force, which was first developed in 1995 by the agency’s Washington (District of Columbia) Field Office and has subsequently grown to include a total of 32 participating law enforcement agencies and private sector entities. Secret Service officials said that the agency’s task forces generate cases that result in prosecutions in state and local courts as well as in federal courts. The officials estimated, for instance, that the majority (about 60 percent) of the Washington Field Office Task Force’s cases had been prosecuted in state courts. Further, regarding the operations of Secret Service task forces in general, the officials noted that, while the Secret Service may have overall administrative responsibility, the role of “quarterback” regarding the investigative agenda often is a shared role. In explanation, the officials said that the task forces do get involved in cases important to the needs of local communities. 
In the mid-1990s, the California Attorney General’s Office established five regional task forces in the state to facilitate multi-jurisdictional investigations and prosecutions of high-technology crimes, such as the theft of chips and other computer components. The five high-technology task forces also are to address identity theft/fraud and its related crimes. One of the five is the Sacramento Valley High-Technology Crime Task Force, which was reorganized in October 1999 as a separate division within the Sacramento County Sheriff’s Department. The task force includes participants from local, state, and federal agencies in the 34 counties of the eastern judicial district of the state of California. As of calendar year 2001, a total of 32 agencies or entities were represented, as table 4 shows. According to its annual report for calendar year 2001, the Sacramento Valley High-Technology Crimes Task Force investigated 153 cases involving identity theft. Examples of these cases included the following:

- Detectives were called to the Sacramento International Airport to investigate a suspect who used stolen credit card information to purchase tickets for two other suspects. The investigation revealed 24 other victims whose credit cards had been stolen by one of the suspects from his place of employment.

- A suspect attempted to purchase items at a store using a manufactured fraudulent check. After being arrested, the suspect identified herself using another person’s identity and was booked into jail using that name. However, an investigation determined the suspect’s true identity and that she had written at least seven other fraudulent checks in the Sacramento area.

- A suspect used a victim’s identity to open an account at a jewelry store and charge several items. Also, the suspect opened several other accounts in the victim’s name and made purchases (some over the Internet) using these accounts. 
Further, the investigation found numerous names, credit information, SSNs, and driver’s licenses—and documents with Internet Web sites, passwords, and personal identification numbers—indicating that the suspect had opened accounts using the personal information of the victims. Identity theft-related enforcement efforts in Florida are being led by the Florida Attorney General’s Office of Statewide Prosecution and the Florida Department of Law Enforcement. In 2001, these agencies partnered to create a statewide task force initiative to target perpetrators of identity fraud. The initiative—called Operation LEGIT (law enforcement getting identity thieves)—has special agents and other personnel assigned from various regional offices of the Florida Department of Law Enforcement. Other task force participants can include local and federal law enforcement agencies, as indicated in the following examples of cases: For more than 12 years, a Florida suspect assumed and lived under the identity of a California victim, who had lost his wallet (with his driver’s license and other personal identification information) while vacationing in Daytona Beach in 1987. Since that time, the suspect had purchased and sold homes, opened bank accounts, obtained credit, established utility and phone service, and been arrested on at least three separate occasions. Based on a Florida warrant, the victim was wrongly arrested in California and held in jail for more than a week. Also, the victim has had civil judgments levied against him. The investigation that led to the suspect’s arrest was initiated in May 2001 and was conducted by the Hernando County (Florida) Sheriff’s Office, the Florida Department of Law Enforcement, the Office of Statewide Prosecution, and SSA/OIG. In July 2001, six suspects were charged with racketeering and multiple counts of identity theft that affected victims throughout Florida. 
The ringleader orchestrated the scheme from a Florida prison (Gulf County Correctional Facility), where he was serving a 9-year sentence for his involvement in a similar investigation that concluded in 1998, with victims throughout Florida and Georgia. Using the inmate telephone system and the U.S. mail service, the ringleader obtained account and identity information of unsuspecting consumers. Accomplices used the compromised identities to commit credit card fraud, purchase vehicles, open fraudulent checking accounts, and apply for instant loans at furniture stores and other businesses across Florida. The organized scheme netted the ring more than $200,000 in stolen property. This case was investigated by the Florida Department of Law Enforcement, the Office of Statewide Prosecution, and SSA/OIG.

In October 2001, six suspects were arrested for fraudulently obtaining nearly $300,000 in merchandise, after assuming the identities of 18 individuals from around the country. An employee of a children’s clinic in Orlando obtained the SSNs and other identifying information of the 18 individuals, who had participated in a medical study concerning cystic fibrosis and whose children suffer from the disease. The employee passed the information to another person, who created false birth certificates and other documents that were used to obtain identity cards in the names of the victims through offices of the Florida Department of Motor Vehicles. The suspects used the false identities to obtain instant credit at electronic and furniture stores in Orange and Seminole Counties in Florida. The suspects purchased big-screen televisions, computers, and other high-cost items until the victims’ credit lines were exhausted. The purchased items were later sold on the streets of Orlando (Florida) and Chicago (Illinois) for half their retail value, with the proceeds divided by the suspects. 
The investigation was conducted by the Orlando Police Department, the Florida Department of Law Enforcement, and the Office of Statewide Prosecution.

In February 2002, a former resident of Daytona Beach was charged with obtaining personal identifying information (names, addresses, and SSNs) on various individuals and using the information to fraudulently purchase more than $35,000 worth of merchandise throughout east-central Florida. The suspect obtained the information from a Web site used legitimately by a variety of businesses and individuals for the purpose of finding and tracking others. As of February 2002, the then-ongoing investigation by the Florida Department of Law Enforcement revealed that the suspect had compromised the identities of victims in 12 states.

In early 1999, following passage of the federal Identity Theft Act in 1998, the U.S. Attorney General’s Council on White Collar Crime established the Subcommittee on Identity Theft to foster coordination of investigative and prosecutorial strategies and promote consumer education programs. Subcommittee leadership is vested in the Fraud Section of the Department of Justice’s Criminal Division, and membership includes various federal law enforcement and regulatory agencies, as well as state and local representation through the International Association of Chiefs of Police, the National Association of Attorneys General, and the National District Attorneys Association. Appendix III lists the membership of the subcommittee. In response to our inquiries, the Chairman of the subcommittee said that, although there is no written charter or mission statement, the role and activities of the subcommittee are substantially as follows: Initially, to promote awareness and use of the federal Identity Theft Act, the subcommittee prepared guidance memorandums for field distribution to law enforcement and regulatory agencies. 
Also, the subcommittee helped to plan or support various identity theft-related educational presentations and workshops, with participants from the public and private sectors. Because so much of identity theft is a local matter, it was imperative that the subcommittee’s membership include state and local representatives. Participation by the International Association of Chiefs of Police gives the subcommittee a channel to thousands of local law enforcement entities. A continuing priority of the subcommittee is to help educate local police departments about the critical first step of taking reports from victims of identity theft crime. Furthermore, the subcommittee continually promotes the availability of FTC’s Consumer Sentinel Network as a tool for federal, state, and local law enforcement agencies to use. The subcommittee Chairman also noted that, since the terrorist incidents of September 11, 2001, there has been more of a focus on prevention. For example, the American Association of Motor Vehicle Administrators attended a recent subcommittee meeting to discuss ways to protect against counterfeit or fake driver’s licenses. To obtain a broader understanding of the subcommittee’s role, as well as ways to potentially enhance that role, we contacted the designated individuals who, respectively, represented six member organizations— FBI, National District Attorneys Association, Postal Inspection Service, Secret Service, Sentencing Commission, and SSA/OIG. Generally, the representatives commented that the subcommittee has been helpful in combating identity theft and has been functioning well, particularly considering the fact that membership is a collateral duty for each representative. One member—representing the National District Attorneys Association—suggested that the subcommittee’s role could be enhanced by having a formal charter or mission statement detailing each participant’s role. 
However, the FBI and Secret Service representatives said that the informality of the subcommittee promotes member participation and also commented that additional directives could be counterproductive.

Since its establishment in 1999, FTC’s Identity Theft Data Clearinghouse has been used for reporting statistical and demographic information about victims and perpetrators. The value of the Clearinghouse database as a law enforcement tool has grown steadily but has not yet reached its full potential. In conducting investigations, for example, relatively few law enforcement agencies have used FTC’s Consumer Sentinel Network, which provides computer access to the Clearinghouse database. Further, centralized analysis of database information to generate investigative leads and referrals has been limited. Law enforcement’s limited use of the Consumer Sentinel Network and the Clearinghouse database may be due to various reasons, including the relatively short operating history of the database. To promote greater awareness and use of the Network and the Clearinghouse database, FTC and Secret Service outreach efforts include conducting regional law enforcement training seminars and developing a training video for distribution to local law enforcement agencies across the nation.

The federal Identity Theft Act of 1998 required FTC to “log and acknowledge the receipt of complaints by individuals who certify that they have a reasonable belief” that one or more of their means of identification have been assumed, stolen, or otherwise unlawfully acquired. In response to this requirement, in November 1999, FTC established the Identity Theft Data Clearinghouse to gather information from any consumer who wishes to file a complaint or pose an inquiry concerning identity theft. Consumers can call a toll-free telephone number (1-877-ID-THEFT) to report identity theft. 
Information from complainants is accumulated in a central database (the Identity Theft Data Clearinghouse) for use as an aid in law enforcement and prevention of identity theft. From its establishment in November 1999 through September 2001, the Clearinghouse received a total of 94,100 complaints from identity theft victims. This total includes 16,784 complaints transferred to the FTC from the SSA/OIG. In the first month of operation, the Clearinghouse answered an average of 445 calls per week. By March 2001, the average number of calls had increased to over 2,000 per week. In December 2001, the weekly average was about 3,000 answered calls.

“The Clearinghouse database has been in operation for more than two years. … While not comprehensive, information from the database can reveal information about the nature of identity theft activity. For example, the data show that California has the greatest overall number of victims in the FTC’s database, followed by New York, Texas, Florida, and Illinois. On a per capita basis, per 100,000 citizens, the District of Columbia ranks first, followed by California, Nevada, Maryland and New York. The cities with the highest numbers of victims reporting to the database are New York, Chicago, Los Angeles, Houston, and Miami.

“Eighty-eight percent of victims reporting to the FTC provide their age. The largest number of these victims (28%) were in their thirties. The next largest group includes consumers from age eighteen to twenty-nine (26%), followed by consumers in their forties (22%). Consumers in their fifties comprised 13%, and those age 60 and over comprised 9%. Minors under 18 years of age comprised 2% of victims. …

“Thirty-five percent of the victims had not yet notified any credit bureau at the time they contacted the FTC; 46% had not yet notified any of the financial institutions involved. Fifty-four percent of the victims had not yet notified their local police department of the identity theft. 
By advising the callers to take these critical steps, we enable many victims to get through the recovery process more efficiently and effectively.” In addition to providing a basis for reporting statistical and demographic information about identity theft victims and perpetrators, another primary purpose of the Clearinghouse database is to support law enforcement. Since May 2001, one Secret Service special agent, working with an FTC attorney, an investigator, and a paralegal, has been involved in centrally analyzing Clearinghouse data to generate investigative leads and referrals. Specifically, according to FTC staff: The team uses intelligence software to analyze Clearinghouse data to generate investigative leads. These leads are then further developed using criminal investigative resources provided by the Secret Service and research and analytical tools provided by the FTC. When the case leads have been comprehensively developed, they are referred to federal, state, or local law enforcement officers in the field. These officers participate in financial, high-tech, or economic crimes task forces and are well equipped to handle the cases. The pace of developing and sending out investigative leads has picked up since FTC and the Secret Service jointly initiated their efforts in May 2001. For instance, 10 investigative referrals were made to regional law enforcement during the last 6 months of calendar year 2001, whereas 19 referrals were made in the first 5 months of 2002. One of the 29 referrals involved 10 individuals with the same address. In response to our inquiries in May 2002, Secret Service officials said that the 29 referrals were still being worked and, thus, the results or outcomes were yet to be determined. 
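The per capita rankings quoted from the FTC above are produced by normalizing each jurisdiction's complaint count to a common population base of 100,000 residents, so that small and large jurisdictions can be compared directly. The following is a minimal sketch of that computation; the jurisdiction names, complaint counts, and populations are hypothetical placeholders, not actual Clearinghouse figures.

```python
# Per capita complaint rates: complaints per 100,000 residents.
# All figures below are hypothetical, for illustration only.

def per_capita_rate(complaints: int, population: int) -> float:
    """Return complaints per 100,000 residents."""
    return complaints / population * 100_000

# Hypothetical jurisdictions: (name, complaints, population)
jurisdictions = [
    ("Jurisdiction A", 1_200, 600_000),
    ("Jurisdiction B", 9_000, 9_000_000),
    ("Jurisdiction C", 400, 500_000),
]

# Rank jurisdictions by rate rather than by raw complaint count.
ranked = sorted(
    jurisdictions,
    key=lambda j: per_capita_rate(j[1], j[2]),
    reverse=True,
)

for name, complaints, population in ranked:
    rate = per_capita_rate(complaints, population)
    print(f"{name}: {rate:.1f} complaints per 100,000 residents")
```

Note that a small jurisdiction with few complaints in absolute terms can still rank first per capita, which is how the District of Columbia can lead the per capita list while California leads the raw count.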
In addition to receiving referrals based on centralized analysis of Clearinghouse data, federal, state, and local law enforcement agencies nationwide can use desktop computers to access Clearinghouse data to further support ongoing cases or develop new leads. Specifically, through FTC’s Consumer Sentinel Network—which is a secure, encrypted Web site—law enforcement agencies can access Clearinghouse data and use search tools tailored for identity theft investigations. For instance, an investigator may scan consumer complaints matching certain criteria to determine if there is a larger pattern of criminal activity. FTC does not charge a fee for use of the Consumer Sentinel Network. However, each law enforcement agency must enter into a confidentiality agreement (pledging to abide by applicable confidentiality rules) with FTC. As of May 24, 2002, a total of 46 federal agencies had signed user agreements with FTC, facilitating access to Identity Theft Data Clearinghouse information via the Consumer Sentinel Network. These agencies include the FBI, Secret Service, Postal Inspection Service, SSA/OIG, and some U.S. Attorney Offices. Further, relatively few of the nation’s over 18,000 state and local law enforcement agencies have signed agreements with FTC to use the Consumer Sentinel Network to access the Identity Theft Data Clearinghouse. Specifically, as of May 24, 2002, a total of 306 state and local law enforcement agencies had entered into such agreements. Of this total, the number of users varied from 1 law enforcement agency in each of 5 states (Delaware, Hawaii, Idaho, New Hampshire, and New Mexico) and 2 agencies in each of 8 other states (Arizona, Arkansas, Kansas, Massachusetts, Nebraska, Oregon, South Dakota, and Wyoming) to 17 agencies in Texas and 45 agencies in California. Even at the high end of this range, the extent of access is not comprehensive. 
For example: In Texas, the Houston Police Department and the Harris County Sheriff’s Office—jurisdictions that encompass about 22 percent of the state’s population—are not users of the Consumer Sentinel Network. As stated previously, in reference to number of identity theft victims, Houston is among the top five cities nationally. Overall, less than 1 percent of the state’s law enforcement agencies have entered into confidentiality agreements with FTC. Although California has the largest number of users (45 agencies), the list of subscribers does not include the city police departments in Los Angeles, Sacramento, or San Jose. As mentioned previously, over 8,000 cases of identity theft were reported to the Los Angeles Police Department in calendar year 2001. According to FTC staff, the number of Consumer Sentinel member agencies continually increases, particularly in response to outreach activities such as regional law enforcement training. Appendix IV gives a full listing of the 352 agencies that had entered into user agreements with FTC, as of May 24, 2002. FTC staff provided us query statistics showing external law enforcement usage of the Consumer Sentinel Network and the Identity Theft Data Clearinghouse for January 2001 through March 2002. During this 15-month period, the number of external law enforcement queries about identity theft complaints totaled 7,946—an average of about 530 per month—and ranged from 378 in December 2001 to 783 in January 2002. FTC staff noted that these usage statistics do not reflect centralized analysis of identity theft complaint data, conducted jointly by the Secret Service and FTC. Various reasons may explain law enforcement’s relatively limited use of the Consumer Sentinel Network and the Identity Theft Data Clearinghouse database. 
Department of Justice officials said, for instance, that many state and local agencies may have an insufficient number of computers and support personnel, in addition to being challenged by competing priorities. Also, FTC staff and Secret Service officials noted that the availability of the Clearinghouse database as an aid for law enforcement agencies is still relatively new. As such, some potential users are unaware of this investigative resource, despite ongoing outreach efforts. Further, regarding usefulness of database information for law enforcement purposes, we asked whether any examples of federal, state, or local success stories had been presented or discussed at any of the monthly meetings of the Attorney General’s Identity Theft Subcommittee. In response, the head of the subcommittee told us that none of the meetings had included such examples—neither examples involving field agencies that used the Consumer Sentinel Network to develop cases nor examples involving the results of investigative leads or referrals that were based on centralized analysis of Clearinghouse data. One state’s deputy attorney general, in replying to our inquiry about the usefulness of the Consumer Sentinel Network and the Clearinghouse database, said that, as a practical matter, a local investigator with numerous outstanding cases on his or her desk will not be using the FTC system to obtain more cases. Rather, this state official suggested, for example, that FTC could use the system to generate periodic reports to alert law enforcement of specific problems within their respective jurisdictions and facilitate the coordination of investigative resources for the maximum benefit. FTC staff acknowledged that Sentinel members appear to use the Clearinghouse database to bolster the cases they have under investigation more often than to initiate new cases. 
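The periodic, jurisdiction-specific reports that the state official suggested could be produced by grouping Clearinghouse complaints by reporting location and time period. The following is a minimal sketch of that idea; the complaint records and field names are hypothetical, since the actual Clearinghouse schema is not described in this report.

```python
# Sketch of a periodic per-jurisdiction complaint summary: group
# complaints by (city, state) for one calendar month and count them.
# Records and field names are hypothetical illustrations.
from collections import Counter
from datetime import date

complaints = [  # hypothetical complaint records
    {"city": "Houston", "state": "TX", "received": date(2002, 1, 14)},
    {"city": "Houston", "state": "TX", "received": date(2002, 1, 20)},
    {"city": "Sacramento", "state": "CA", "received": date(2002, 1, 9)},
    {"city": "Miami", "state": "FL", "received": date(2002, 2, 2)},
]

def monthly_report(records, year, month):
    """Count complaints per (city, state) for one calendar month."""
    in_period = [
        r for r in records
        if r["received"].year == year and r["received"].month == month
    ]
    return Counter((r["city"], r["state"]) for r in in_period)

report = monthly_report(complaints, 2002, 1)
for (city, state), count in report.most_common():
    print(f"{city}, {state}: {count} complaints")
```

A report of this kind would push jurisdiction-level patterns out to agencies on a schedule, rather than relying on busy local investigators to query the database themselves.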
However, the FTC staff told us that they are continuously looking for ways to make the Clearinghouse database more efficient and user friendly. The staff noted, for example, that FTC has established an e-mail address to take requests for specific searches from Sentinel members and, thereby, FTC can use its internal search tools to query the Clearinghouse database and provide more comprehensive results to requesters. Also, the staff noted that FTC expects to implement an “alert” function before the end of fiscal year 2002. According to the staff: The alert function will enable a Clearinghouse user (e.g., police officer) to flag or annotate one or more particular complaints relating to an investigation that the user is conducting. If and when another user executes a query that retrieves one of the flagged complaints, this second user will get a pop-up message box asking him or her to contact the first user before proceeding. Thus, two police officers, who likely are from different jurisdictions but are looking at the same complaint records, can avoid duplicating investigatory efforts or inadvertently impeding each other’s investigations. Also, the staff noted that FTC has plans to implement (by the end of fiscal year 2002) a report listing the suspect locations most frequently reported in the database. Further, in response to requests from Sentinel members, the FTC will soon begin testing a program to provide Sentinel members access to electronic batches of Clearinghouse data—for example, all complaint information reported by victims in a given city during a specified period of time. According to FTC staff, Sentinel members will be able to run the batched data through their own intelligence or link analysis software and also combine the data with their own investigative information for more impact. 
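The planned alert function described by FTC staff amounts to attaching an investigator's flag to a complaint record and surfacing that flag whenever a later query retrieves the record. The following is a minimal sketch of that mechanism; the class and method names are illustrative assumptions, not FTC's implementation.

```python
# Sketch of the described "alert" function: an investigator flags a
# complaint, and any later query that retrieves the flagged complaint
# also returns a message directing the second user to contact the first.

class ClearinghouseSketch:
    def __init__(self):
        self.complaints = {}  # complaint_id -> complaint text
        self.flags = {}       # complaint_id -> investigator who flagged it

    def add(self, complaint_id, text):
        self.complaints[complaint_id] = text

    def flag(self, complaint_id, investigator):
        """Annotate a complaint as part of an ongoing investigation."""
        self.flags[complaint_id] = investigator

    def query(self, keyword):
        """Return matching complaints plus alerts for any flagged ones."""
        hits = {cid: text for cid, text in self.complaints.items()
                if keyword in text}
        alerts = [
            f"Complaint {cid} is flagged; contact {self.flags[cid]} "
            "before proceeding."
            for cid in hits if cid in self.flags
        ]
        return hits, alerts

db = ClearinghouseSketch()
db.add(101, "stolen SSN used to open credit card account")
db.add(102, "counterfeit check passed at retail store")
db.flag(101, "Officer A, Metropolis PD")

hits, alerts = db.query("SSN")
print(alerts)  # the alert directs the second user to the first investigator
```

As the FTC staff noted, the point of the mechanism is deconfliction: two officers in different jurisdictions who retrieve the same complaint learn of each other before duplicating or impeding one another's work.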
Moreover, FTC staff said that additional steps are being taken to increase law enforcement agencies’ awareness and use of the Consumer Sentinel Network and the Clearinghouse database. The staff noted, for example, that training sessions for law enforcement agencies were conducted in Washington, D.C., in March 2002 and in Des Moines, Iowa, and Chicago, Illinois, in May 2002, and that additional sessions are planned for San Francisco, California, in June 2002, and for Dallas, Texas, in August 2002. Also, as mentioned previously, the Secret Service is developing a police training video with the cooperation of the FTC, Department of Justice, and the International Association of Chiefs of Police, which is anticipated to be completed by September 30, 2002. According to FTC staff and Secret Service officials, the training video will briefly discuss the availability of the Consumer Sentinel Network and the Identity Theft Data Clearinghouse, in addition to emphasizing the importance of police reports in identity theft cases. These planned initiatives appear to be steps in the right direction. If implemented effectively, the initiatives should help to ensure that more law enforcement agencies are aware of existing data that can be used to combat identity theft. Nonetheless, concerted and continued outreach efforts will be needed to promote broad awareness and use of the Consumer Sentinel Network and the Clearinghouse database by all levels of law enforcement.

As mentioned previously, SSA/OIG’s fraud hotline annually receives tens of thousands of allegations, most of which involve either (1) SSN misuse or (2) program fraud with SSN misuse potential. In these two categories, SSA/OIG received approximately 62,000 allegations in fiscal year 1999, and the agency opened investigative cases on 4,636 (about 7 percent) of these allegations. About three in four of the investigative cases involved program fraud-related allegations. 
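The roughly 7 percent case-opening rate can be checked directly from the reported figures (4,636 cases opened against the 62,376 allegations SSA/OIG received in these two categories in fiscal year 1999):

```python
# Verify the approximate case-opening rate cited in the text.
allegations_fy1999 = 62_376  # SSN misuse + program fraud allegations
cases_opened = 4_636

opening_rate = cases_opened / allegations_fy1999
print(f"{opening_rate:.1%}")  # about 7 percent, as stated
```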
Generally, SSA/OIG concentrates its investigative resources on this category of allegations because the protection of Social Security trust funds is a priority. SSA/OIG statistics for investigative cases opened in fiscal year 1999 indicate that a total of 1,347 cases had resulted in criminal convictions or other judicial actions, as of April 30, 2002. During our review, the SSA Inspector General told us that his office does not have enough investigators to address all of the SSN misuse allegations received on the agency’s fraud hotline. However, FTC staff noted that, starting in February 2001, FTC began to routinely upload information from SSA/OIG’s fraud hotline about these allegations into FTC’s Identity Theft Data Clearinghouse, thereby making the information available to law enforcement agencies via the Consumer Sentinel Network. Within the categories of SSN misuse and program fraud with SSN misuse potential, SSA/OIG received a total of 62,376 allegations in fiscal year 1999, a greater number (83,721) in fiscal year 2000, and an even higher number (104,103) in fiscal year 2001. According to SSA/OIG officials, allegations are reviewed by supervisory personnel to determine which should be further pursued. The review criteria, among others, include considerations of the credibility of the alleged information, the actual or potential dollar- loss amounts involved, the severity of other effects on SSA programs, and the prosecutive merits of the allegation, as well as considerations of current workloads and the availability of investigative resources. Most allegations of identity theft made to SSA/OIG do not result in criminal investigations being opened. Of the two categories of allegations, however, SSA/OIG generally concentrates its investigative resources on allegations of program fraud with SSN misuse potential because the protection of Social Security trust funds is a priority. 
In fiscal year 1999, for example, SSA/OIG opened investigative cases on 12 percent of the allegations categorized as program fraud with SSN misuse potential and 3 percent of the allegations categorized as SSN misuse (see table 5). In other words, although the total numbers of allegations received in each category were similar, program fraud-related allegations were about four times more likely to result in investigative cases being opened. In response to our inquiry regarding the results of SSA/OIG criminal investigations, the agency provided us statistics for applicable cases opened in fiscal year 1999 that resulted in criminal or other judicial actions. As table 6 shows, as of April 30, 2002, SSN misuse cases (768) accounted for 57 percent of the 1,347 investigations involving SSN misuse or program fraud with SSN misuse potential that were opened in fiscal year 1999 and resulted in criminal or other judicial actions. SSA/OIG officials said that investigations of SSN misuse allegations produce convictions or other criminal results because SSN misuse generally is tied to other white-collar or financial crimes that can have identity theft-related elements. On the other hand, the officials said that many investigations of program fraud cases may be closed with administrative actions, which can include suspension of benefit payments. In recent years, the number of SSN misuse allegations received by the SSA/OIG has grown faster than the number of program fraud-related allegations. That is, SSN misuse allegations constitute a growing proportion of these two categories of allegations, increasing from 48 percent in fiscal year 1999, to 56 percent in fiscal year 2000, and to 63 percent in fiscal year 2001. During our review, the SSA Inspector General told us that, given limited resources and competing priorities, his office investigates relatively few allegations of SSN misuse. 
Consequently, the Inspector General said that many credible allegations of identity theft that have the potential to produce criminal convictions or other judicial actions are not addressed. Starting in February 2001, FTC began routinely uploading SSA/OIG information about SSN misuse allegations into FTC’s Identity Theft Data Clearinghouse. This enhancement of the Clearinghouse database makes the SSA/OIG allegation information available to law enforcement agencies via the Consumer Sentinel Network. However, as discussed previously, relatively few law enforcement agencies use the Network, and centralized analysis of Clearinghouse data to generate investigative leads and referrals has been limited. Comprehensive results—such as number of prosecutions and convictions—under the federal Identity Theft Act and relevant state statutes are not available. However, examples of actual cases illustrate that identity theft often is a component of other white-collar or financial crimes, and these cases often have fact-pattern elements involving more than one jurisdiction. Moreover, the prevalence of identity theft and the frequently multi- or cross-jurisdictional nature of such crimes underscore the importance of leveraging available resources and promoting cooperation or coordination among all levels of law enforcement. Our review indicates that there are opportunities for law enforcement to make greater use of existing data to combat identity theft. In particular, the Consumer Sentinel Network potentially can provide all law enforcement agencies across the nation with access to FTC’s Identity Theft Data Clearinghouse database to support ongoing investigations. In addition to complaint information reported by identity theft victims directly to FTC, the Clearinghouse database now routinely incorporates identity theft-related information received by SSA/OIG. 
However, despite outreach efforts to date, relatively few state and local law enforcement agencies have signed Consumer Sentinel confidentiality agreements with FTC. Also, although the number is increasing, few investigative leads and referrals have been generated by centralized analysis of database information. Given the growing prevalence of identity theft, continued and concerted emphasis is warranted regarding the availability and use of the Consumer Sentinel Network and the Clearinghouse database as law enforcement tools. We recommend that the Attorney General have the Identity Theft Subcommittee promote greater awareness and use of the Consumer Sentinel Network and the Identity Theft Data Clearinghouse by all levels of law enforcement—federal, state, and local. On June 5, 2002, we provided a draft of this report for comment to the Departments of Justice and the Treasury, FTC, and SSA. The Department of Justice generally agreed with the substance of the report and recommendation that the Identity Theft Subcommittee promote greater awareness and use of the Consumer Sentinel Network and the Identity Theft Data Clearinghouse by all levels of law enforcement. Further, Justice noted several actions that it has taken or will take to directly address the recommendation. These actions include, for example, regional training seminars cosponsored by Justice, FTC, and the Secret Service that have specific components about the Consumer Sentinel and the identity theft database. Justice noted that five training seminars have been or are planned for this fiscal year and that additional seminars are being considered for fiscal year 2003. Also, Justice said that the state and local law enforcement representatives on the Identity Theft Subcommittee will be consulted regarding additional mechanisms for informing police departments and sheriffs’ offices about the Consumer Sentinel. 
Further, Justice cited its efforts to inform the public about identity theft and ensure that courts are meting out appropriate criminal sanctions. The full text of Justice’s comments is reprinted in appendix VI. The Secret Service, a component agency of the Department of the Treasury, said that the draft report accurately presented the agency’s positions. Also, the Secret Service commented that the agency’s liaison to the FTC attended 33 speaking engagements from May 2001 to May 2002 to promote the Identity Theft Data Clearinghouse and that a similar schedule is anticipated for the next 12 months. Furthermore, the Secret Service noted that the FTC—in conjunction with the Secret Service liaison, Justice, and the International Association of Chiefs of Police—plans to sponsor at least six training seminars in fiscal year 2003. Justice and the Secret Service also provided various technical comments and clarifications, which have been incorporated in this report where appropriate. Similarly, the FTC and SSA provided technical comments and clarifications, which have been incorporated where appropriate. In sum, we believe that the ongoing and planned efforts cited by the Department of Justice and the Secret Service are responsive to the recommendation that we make in this report. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested congressional committees and subcommittees; the Attorney General; the Secretary of the Treasury; the Chief Postal Inspector, U.S. Postal Inspection Service; the Commissioner, SSA; and the Chairman, FTC. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or Danny R. Burton at (214) 777-5600. Other key contributors are acknowledged in appendix VII. 
In response to a request from Representative Sam Johnson, we developed information on the following topics:

Law enforcement results (such as examples of prosecutions and convictions) under the federal Identity Theft and Assumption Deterrence Act of 1998 (the “Identity Theft Act”).

Law enforcement results under state statutes that, similar to the federal act, provide state and local law enforcement officials with the tools to prosecute and convict identity theft criminals.

The means used to promote cooperation or coordination among federal, state, and local law enforcement agencies in addressing identity theft crimes that span multiple jurisdictions.

Actions taken by the Social Security Administration’s Office of the Inspector General (SSA/OIG) to resolve Social Security number (SSN) misuse and other identity theft-related allegations received during fiscal year 1999.

The following sections discuss the scope and methodology of our work in addressing the respective topics. To determine what have been the law enforcement results under the federal Identity Theft Act, we contacted various federal agencies responsible for investigating and prosecuting this type of crime. Specifically, we interviewed responsible officials and reviewed documentation obtained from the Department of Justice’s Criminal Division, the Executive Office for United States Attorneys (EOUSA), the Federal Bureau of Investigation (FBI), the Postal Inspection Service, the Secret Service, and SSA/OIG. We reviewed available statistics on number of investigations and prosecutions and obtained examples of actual investigations and prosecutions under the federal statute. Also, we conducted a literature search to identify studies, reports, or other products—including congressional testimony statements—giving examples of cases or other results under the federal Identity Theft Act. In February 2002, we conducted a search of the LexisNexis database. 
Our search was designed to retrieve only those identity theft cases that specifically mentioned the federal statute—that is, cases that cited 18 U.S.C. § 1028(a)(7). We summarized the results of selected cases prosecuted under this statute. Our summary (see app. II) is not intended to be a comprehensive listing of all federal prosecutions under the 1998 federal statute. We contacted the Federal Trade Commission (FTC) to determine which states had enacted specific laws related to identity theft. To determine the availability of any national overview information regarding law enforcement results under the states’ identity theft laws, we reviewed the offense categories included in the FBI’s Uniform Crime Reporting (UCR) Program, and we contacted the National Association of Attorneys General, the National District Attorneys Association, and the International Association of Chiefs of Police. For more detailed inquiries, we selected 10 states—Arizona, California, Florida, Georgia, Illinois, Michigan, New Jersey, Pennsylvania, Texas, and Wisconsin. We judgmentally selected these states on the basis of having the highest incidences of reported identity theft or the longest-standing applicable statutes. Specifically, with one exception (New York), we selected each state that had more than 2,500 complaints reported to FTC during November 1999 through September 2001 (see table 8). Also, some of the first states to enact identity theft laws were Arizona (1996), California (1997), and Wisconsin (1997). As indicated in table 7, the 10 states we selected represent about 51 percent of the total number of complaints received by the FTC during November 1999 through September 2001. In each of the 10 selected states, we attempted to contact officials in the state’s attorney general’s office and in at least one local jurisdiction (e.g., a county district attorney’s office). We developed a structured data collection instrument and distributed it to each of these officials. 
The instrument was used to obtain information about the respective state’s specific identity theft statute, implementation activities, relevant investigative and prosecutorial units, reports or records of statistical results, examples of actual cases, and observations on the usefulness or effectiveness of the statute. With the exception of Arizona, the attorney general’s office in each of the 10 selected states responded to our inquiries. Also, at least one local official in each of the 10 states except Georgia responded to our inquiries. Given the limited distribution of our data collection instrument, the observations of the respondents cannot be viewed as being representative of the entire law enforcement community in the respective state. Table 8 lists the agencies we contacted in each of the 10 selected states. Our literature search and discussions with federal and state law enforcement officials indicated that three principal means are used to promote cooperation or coordination among all levels of law enforcement in addressing identity theft crimes—law enforcement task forces with multi-agency participation, the Attorney General’s Identity Theft Subcommittee, and FTC’s Consumer Sentinel Network and Identity Theft Data Clearinghouse database. We obtained examples of task forces established by federal (Secret Service) and state (California and Florida) leadership, respectively. The scope of our work did not include assessing the effectiveness of these task forces. Regarding the Identity Theft Subcommittee, we interviewed the Chairman—a leadership role vested in the Fraud Section of the Department of Justice’s Criminal Division—to obtain an overview of the subcommittee’s role, membership, activities, and accomplishments. For the most part, in studying the subcommittee’s role, we relied on testimonial rather than documentary evidence. 
According to the Chairman, there are no minutes of the subcommittee’s monthly meetings because the subcommittee is not an “advisory” entity as defined in applicable sunshine laws. Also, the Chairman said that the subcommittee has not produced any annual reports of its activities. To obtain a broader understanding of the subcommittee’s role, as well as ways to potentially enhance that role, we contacted the designated individuals who, respectively, represented six member organizations—FBI, National District Attorneys Association, Postal Inspection Service, Secret Service, Sentencing Commission, and SSA/OIG. Various representatives offered suggestions for ways to potentially enhance the subcommittee’s role. These suggestions do not necessarily reflect the consensus views of either the full subcommittee or the seven representatives we contacted. Also, the structured data collection instrument that we distributed to law enforcement officials in the 10 selected states included a question about the role, usefulness, and effectiveness of the Identity Theft Subcommittee. As previously mentioned, given the limited distribution of the data collection instrument, the observations of the respondents cannot be viewed as being representative of the entire law enforcement community in the respective state. Regarding the Consumer Sentinel Network and the Identity Theft Data Clearinghouse database, we interviewed responsible FTC staff and reviewed available documentation, including law enforcement usage statistics for January 2001 through March 2002. We reviewed the list of federal, state, and local law enforcement agencies that, as of May 24, 2002, had entered into user agreements with FTC, pledging to abide by applicable confidentiality rules when using the Consumer Sentinel Network to access the Clearinghouse database. 
Regarding usefulness of database information for law enforcement purposes, we asked the Identity Theft Subcommittee Chairman for examples (if any) of federal, state, or local success stories that had been presented or discussed at the subcommittee’s monthly meetings. We discussed with FTC staff the extent to which Clearinghouse data have been centrally analyzed to generate investigative leads and referrals. Further, we inquired about FTC’s plans for making the Clearinghouse database more useful for law enforcement purposes. Also, the structured data collection instrument that we distributed to law enforcement officials in the 10 selected states included a question about the usefulness of the Consumer Sentinel Network and the Clearinghouse database. To reiterate, given the limited distribution of the data collection instrument, the observations of the respondents cannot be viewed as being representative of the entire law enforcement community in the respective state. To obtain information about actions taken to resolve SSN misuse and other identity theft-related allegations, we contacted officials from the various components of SSA/OIG, including officials from the Office of Investigations, the Office of Executive Operations, as well as the Counsel to the Inspector General. We focused primarily on allegations received during fiscal year 1999. However, to provide a trend perspective and more currency, an official from the SSA/OIG’s Office of Executive Operations provided us annual allegation data for fiscal years 1998 through 2001. To determine the criteria used to establish which allegations are selected for criminal investigation, we spoke with staff from the Office of Investigation’s Allegation Management Division, which operates SSA/OIG’s fraud hotline. 
Also, officials from SSA’s Office of Executive Operations provided us statistical information detailing the number of criminal investigations that resulted from program fraud-related allegations and the number that resulted from SSN misuse allegations that did not involve SSA programs. Information was also provided on how many of these criminal investigations produced a criminal result, such as a fugitive felon being apprehended or an individual being convicted and sentenced. This appendix summarizes selected federal cases prosecuted under the Identity Theft and Assumption Deterrence Act of 1998. The relevant section of this legislation is codified at 18 U.S.C. § 1028(a)(7) (“fraud and related activity in connection with identification documents and information”). The cases summarized in this appendix are not intended to be a comprehensive listing of all federal prosecutions under the 1998 federal statute. As mentioned in appendix I, we identified these cases by conducting a search of the LexisNexis database in February 2002. Our search was designed to retrieve only those identity theft cases that specifically mentioned the federal statute—that is, cases that cited 18 U.S.C. § 1028(a)(7). The following summaries of five cases prosecuted in U.S. district courts illustrate that identity theft generally is not a stand-alone crime. Rather, identity theft typically is a component of one or more other white-collar or financial crimes, such as bank fraud, credit card or access device fraud, or mail fraud. In early 2001, a defendant was charged in a six-count indictment with bank fraud (counts 1, 2, and 3), possession of a counterfeit check (count 4), interstate transportation of a counterfeit check (count 5), and use of another person’s SSN with intent to commit a state felony (count 6). In May 2001, the defendant pleaded guilty to counts 1 and 6 pursuant to a written plea agreement, and the remaining counts were dismissed. 
The district court sentenced the defendant to concurrent 46-month prison terms for offense conduct under the Identity Theft Act, 18 U.S.C. § 1028(a)(7)—using another person’s SSN with intent to commit a crime—and under 18 U.S.C. § 1344 (bank fraud). U.S. v. Burks, No. 01-3313, 2002 U.S. App. Lexis 2387 (7th Cir. Feb. 11, 2002). This was a consolidated case involving three separate actions, in which three plaintiffs each alleged liability against the defendant car dealership, whose salesman/employee committed criminal acts. Specifically, the salesman/employee wrongly obtained credit reports for the plaintiffs, without their consent, and then used the reports to secure financing for car sales or leases for applicants with bad credit histories. The salesman/employee was convicted on a federal fraud criminal charge under 18 U.S.C. § 1028(a)(7). Also, the plaintiffs established liability against the dealership for intentional violation of the Fair Credit Reporting Act. Benjamin Adams v. Berger Chevrolet, Inc., No. 1:00-CV-225, 1:00-CV-226, and 1:00-CV-228, 2001 Dist. Lexis 6174 (W.D. Mich. May 7, 2001). A defendant was charged with stealing mail from residential mailboxes, using information from personal checks to create counterfeit checks and fraudulent driver’s licenses, and negotiating the counterfeit checks at numerous banks in North Carolina using the fraudulent licenses as identification. The defendant pled guilty to one count of using false identification documents, 18 U.S.C. § 1028(a)(7); five counts of producing false identification documents, 18 U.S.C. § 1028(a)(1); and three counts of possession of stolen mail, 18 U.S.C. § 1708. The defendant was sentenced to a term of 63 months of imprisonment. U.S. v. Hooks, No. 99-4754, 2000 U.S. App. Lexis 2388 (4th Cir. Sept. 14, 2000). 
In May 2000, following a bench trial, the district court found a defendant guilty of the following violations: using the identification of another with intent to commit unlawful activity, 18 U.S.C. § 1028(a)(7); possessing false identification with intent to defraud the United States, 18 U.S.C. § 1028(a)(4); furnishing false information to the Commissioner of Social Security, 42 U.S.C. § 408(a)(6); fraud and misuse of an entry document, 18 U.S.C. § 1546; and making a false statement to an agency of the United States, 18 U.S.C. § 1001. The court sentenced the defendant to 6 months of imprisonment, plus 3 years of supervised release. U.S. v. Balde, No. 00-4070, 2001 U.S. App. Lexis 23741 (6th Cir. Oct. 26, 2001). A defendant pleaded guilty to using another person’s SSN to commit fraud, 18 U.S.C. § 1028(a)(7); using unauthorized credit cards, 18 U.S.C. § 1029(a)(2); and issuing a false SSN, 42 U.S.C. § 408(a)(7)(B). The defendant was sentenced to 36 months of imprisonment. U.S. v. Lippold, No. 00-2868, 2001 U.S. App. Lexis 15126 (7th Cir. July 2, 2001). This appendix presents a membership overview (see table 9) of the Identity Theft Subcommittee, which was established by the U.S. Attorney General’s White Collar Crime Council in 1999, following passage of the Identity Theft and Assumption Deterrence Act of 1998. As table 9 indicates, in addition to federal law enforcement and regulatory agencies, subcommittee membership has state and local representation through three national organizations: International Association of Chiefs of Police. The association’s goals, among others, are to advance the science and art of police services; develop and disseminate improved administrative, technical, and operational practices and promote their use in police work; and foster cooperation and exchange of information and experience among police administrators. National Association of Attorneys General. 
A goal of the association—whose membership includes the attorneys general and chief legal officers of the 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and associated territories—is to promote cooperation and coordination on interstate legal matters in order to foster a responsive and efficient legal system for state citizens. National District Attorneys Association. A purpose of the association is to promote the continuing education of prosecuting attorneys by various means, such as arranging seminars and fostering periodic conventions or meetings for the discussion and solution of legal problems affecting the public interest in the administration of justice. Among other sources, training is offered at the National Advocacy Center—located on the campus of the University of South Carolina in Columbia—which is a joint venture of the association and the U.S. Department of Justice. In response to a provision in the Identity Theft and Assumption Deterrence Act of 1998, FTC established the Identity Theft Data Clearinghouse in November 1999 to gather information from any consumer who wishes to file a complaint or pose an inquiry concerning identity theft. Federal, state, and local law enforcement agencies may access the Clearinghouse database via a secure link in FTC’s Consumer Sentinel Network. The Consumer Sentinel Web site was initially established in 1997 to track telemarketing or mass-market fraud complaints received by FTC. With the passage of the Identity Theft Act, FTC added a link in the Consumer Sentinel to allow law enforcement agencies access to the Identity Theft Data Clearinghouse database. In order to gain access to the secure Web site, agencies must sign a confidentiality agreement. Only domestic law enforcement agencies are permitted to have access to the detailed information in the Clearinghouse database. 
Other domestic government agencies, consumer reporting agencies, and private entities are permitted limited access to overall or aggregate information. Also, at www.consumer.gov/sentinel, the general public can view macro-level information (e.g., overall statistics by states or cities) that FTC maintains on general fraud and identity theft matters. As of May 24, 2002, a total of 352 law enforcement agencies (46 federal and 306 state and local) had entered into agreements with FTC to have access to the Identity Theft Data Clearinghouse via the secure link in the Consumer Sentinel. The following is a list of the 352 agencies. This appendix (1) gives examples of identity theft cases that have a military connection, for example, cases that affect uniformed personnel, and (2) discusses plans for establishing Soldier Sentinel, an online system designed specifically to collect consumer and identity theft complaint information from members of the armed forces and their families. Due to various factors, members of the armed services may be more susceptible than the general public to identity theft. For instance, given their mobility, service members may have bank, credit, and other accounts in more than one state and even overseas. At times, service members may be deployed to locations far away from family members, which can increase dependence on credit cards, automatic teller machines, and other remote-access financial services. For these same reasons, while any victim of identity theft can face considerable problems, the rigors of military life can compound problems encountered by uniformed personnel and family members who are victimized. We found no comprehensive or centralized data on the number of military-related identity theft cases. For instance, in response to our inquiry, an official with the Defense Criminal Investigative Service told us the agency’s case information system cannot specifically isolate and quantify the number of identity theft cases. 
However, in conducting a literature search, we found various examples of military-related identity theft cases, including the following: One case involved over 100 victims, each a high-ranking military official. In this case, according to multi-agency task force results reported by the Social Security Administration’s Office of the Inspector General (SSA/OIG) for fiscal year 2000, two perpetrators used the Internet to obtain the names and SSNs of the military officials. Then, the perpetrators used the personal information to fraudulently obtain credit cards. According to the SSA/OIG, the case culminated with the perpetrators being incarcerated and ordered to pay restitution of over $287,000 to the companies that were victimized by the scheme. Another case, reported in January 2002 by the Army News Service, involved a perpetrator who was caught trying to cash a $9,000 check drawn on the bank account of a Navy retiree. During the subsequent investigation, the perpetrator’s laptop computer was found to contain several thousand military names, SSNs, and other information. The common link among the military veterans on the list was that, in accordance with a once-common practice, they all had filed their military discharge form (Department of Defense Form 214) with their local county courthouse in order to ensure always being able to have a certified copy available to receive Veterans Administration benefits. The Form 214 contains an individual’s SSN and birth date, and the document becomes a public record after being filed; some courthouses have even put this information online. Now, according to the news story, the military’s transition counselors are advising soldiers to not file discharge forms with county courthouses but rather to safeguard any documents that have personal identification information. 
In a recent (April 17, 2002) press release, the Defense Criminal Investigative Service announced the arrest of a suspect for alleged violations involving one count of identity theft and one count of using a false SSN. Between November 1999 and October 2001, the suspect allegedly assumed the SSNs of four different persons. The suspect represented himself as a major with the U.S. Army and conducted fraudulent schemes to obtain a 2001 Nissan truck, a 2002 Mercedes Benz, and a 2002 Jaguar. In addition to the Defense Criminal Investigative Service, two other federal law enforcement agencies (the FBI and the SSA/OIG) and one local agency (St. Tammany Parish Sheriff’s Office, Louisiana) participated in the investigation. Prosecution of the case is to be handled by the U.S. Attorney’s Office, Eastern District of Louisiana. In January 2001, FTC and the Department of Defense announced the signing of a memorandum of understanding to create an online system (Soldier Sentinel) designed specifically to collect consumer and identity theft complaints from the members of the armed forces and their families. Among other purposes, the system is to provide the military community a convenient way to file complaints directly with law enforcement officials. Also, the Department of Defense and its component services are to use the data collected to shape consumer education and protection policies at all levels within the military. Plans call for Soldier Sentinel to mirror the FTC’s Consumer Sentinel system, which provides secure, password-protected access to a consumer complaint database and other tools designed to allow law enforcement to share data about fraud. Also, the Soldier Sentinel agreement allows the Department of Defense and the component services to collect, share, and analyze specific service-related information. In April 2002, FTC staff told us that the Soldier Sentinel was not yet operational but was anticipated to be online during the summer of 2002. 
In addition to the above, David P. Alexander, Shirley A. Jones, Jan B. Montgomery, Tim Outlaw, Robert J. Rivas, and Richard M. Swayze made key contributions to this report.
Identity theft or identity fraud generally involves "stealing" another person's personal identifying information--such as Social Security Number (SSN), date of birth, and mother's maiden name--and then using the information to fraudulently establish credit, run up debt, or take over existing financial accounts. The Identity Theft and Assumption Deterrence Act of 1998 made identity theft a separate crime against the person whose identity was stolen, broadened the scope of the offense to include the misuse of information as well as documents, and provided punishment--generally a fine or imprisonment or both. GAO found no comprehensive or centralized data on enforcement results under the federal Identity Theft Act. However, according to a Deputy Assistant Attorney General, federal prosecutors are using the 1998 federal law. As with the federal act, GAO found no centralized or comprehensive data on enforcement results under state identity theft statutes. However, officials in the 10 states selected for study provided examples of actual investigations or prosecutions under these statutes. Generally, the prevalence of identity theft and the frequently multi- or cross-jurisdictional nature of such a crime underscore the importance of promoting cooperation or coordination among federal, state, and local law enforcement agencies. One of the most commonly used means of coordination, task forces, can have participating agencies from all levels of law enforcement and, in some instances, can have participants from banks and other private sector entities. Although the Social Security Administration's Office of the Inspector General fraud hotline annually receives thousands of allegations involving either (1) SSN misuse or (2) program fraud with SSN misuse potential, the agency concentrates its investigative resources on the latter category of allegations because the protection of the Social Security trust funds is a priority.
The Davis-Bacon Act and related legislation require employers on federally funded construction projects valued in excess of $2,000, or on federally assisted projects, to pay their workers no less than the prevailing wage rate. According to its regulations, the Department of Labor determines the wages “prevailing for the corresponding classes of laborers and mechanics employed on projects of a character similar to the contract work in the . . . city, town, village, or other civil subdivision of the State in which the work is to be performed.” The Department of Labor’s Wage and Hour Division (WHD) is responsible for making these wage determinations. In addition to making these determinations, Labor is responsible for ensuring that employers covered by the law pay at least the mandated wage. While some of the same staff work on both responsibilities, Labor estimates that in fiscal year 1997, it used the equivalent of 51 full-time staff and spent about $5.5 million on the process of determining Davis-Bacon prevailing wages at both its Washington, D.C., headquarters and five regional offices. To determine the prevailing wage rates, Labor periodically conducts surveys, called “area surveys,” to collect data on wages and fringe benefits paid to workers in similar job classifications on comparable construction projects in a specific geographic area. The agency solicits information from employers and third parties, such as representatives of unions and contractor associations. As shown in figure 1, after receiving and analyzing the data, Labor issues wage determinations for a series of job classifications such as electricians, carpenters, and drywallers in specific geographic areas varying in size from a section of a county to an entire state. For example, the prevailing wage determination for the Washington, D.C., metropolitan area in 1996 included individual wage rates for 143 different construction crafts. 
For a more complete description of the wage determination process, see appendix II. Labor relies on the voluntary participation of contractors and other interested parties in conducting the wage survey. While participation is voluntary, failure to supply truthful answers can have serious consequences: it is a crime under federal law (18 U.S.C. 1001) to knowingly submit false data to the government, and it is a crime under federal law (18 U.S.C. 1341) to use the U.S. mail for fraudulent purposes. Contractors and interested third parties generally submit data on wage survey forms, called WD-10s, which are supplied by WHD. As shown in the sample forms WD-10 in figure 2, regardless of whether data are submitted by contractors or third parties, a separate WD-10 is submitted for each construction project and for each contractor that employed workers on that project. Wage rate determinations are issued for different job classifications, such as drywallers and electricians. In fiscal year 1997, WHD issued 1,860 individual wage rate determinations, which were based on information obtained from 43 area wage surveys. In accordance with its regulations, Labor determines an area’s prevailing wage rate on the basis of the 50-percent rule, which states that the prevailing wage will be the wage paid to the majority of workers employed in a specific job classification. If the same rate is not paid to a majority of those workers in the classification, the prevailing wage will be the average of the wages paid, weighted by the total number of workers employed in the classification. See the prevailing wage formula in figure 2 for an example of this calculation using the two hypothetical forms WD-10. We previously reported that Labor’s wage determination process contained internal control weaknesses that contributed to a lack of confidence in the resulting wage determinations. 
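The 50-percent rule lends itself to a short worked example. The following Python sketch is illustrative only; the function name and the hypothetical wage lists are ours, not Labor's. It returns the majority rate when one exists and otherwise the worker-weighted average.

```python
from collections import Counter

def prevailing_wage(wages):
    """Apply the 50-percent rule to a list of per-worker wage rates
    reported for one job classification in one survey area.

    If a single rate is paid to a majority of the workers, that rate
    prevails; otherwise the prevailing wage is the average of the rates,
    weighted by the number of workers paid each rate.
    """
    if not wages:
        raise ValueError("no wage data reported for this classification")
    rate, count = Counter(wages).most_common(1)[0]
    if count > len(wages) / 2:        # majority rule
        return rate
    return sum(wages) / len(wages)    # worker-weighted average

# Three of five drywallers earn $15.00, a majority, so $15.00 prevails.
print(prevailing_wage([15.00, 15.00, 15.00, 20.00, 20.00]))  # 15.0

# With two workers at each rate there is no majority, so the weighted
# average, (2 * 15.00 + 2 * 20.00) / 4 = 17.50, prevails.
print(prevailing_wage([15.00, 15.00, 20.00, 20.00]))  # 17.5
```

Summing the individual wage entries and dividing by the total number of workers is equivalent to averaging the distinct rates weighted by head count, which is how Labor's regulations phrase the rule.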
The weaknesses we identified included limitations in Labor’s verification of the voluntarily submitted wage and fringe benefit data. We recognized, however, that accurately reported wage data are not sufficient to ensure the accuracy of the wage determinations. For example, in a previous report, we concluded that reporting bias resulting from the voluntary nature of the wage surveys may also reduce the accuracy of the wage determinations. In 1995, a congressional committee heard specific allegations that inaccurate and fraudulent wage data were submitted and used to determine prevailing wage rates in Oklahoma City. Both GAO and Labor’s Office of Inspector General (OIG) then received congressional requests to review Labor’s wage determination process. Labor responded to the allegations by introducing a policy to verify a sample of wage data forms received from third parties, but it did not extend this verification process to forms submitted by contractors. Before that, Labor had contacted both contractors and third parties to obtain clarification about data that were inconsistent or unclear, but it had not attempted to verify data that were not obviously inaccurate. In May 1996, we recommended that Labor obtain appropriate documentation or conduct a limited number of on-site inspection reviews to verify a sample of wage data submissions. Our recommendation was intended to improve data reliability and increase confidence in the accuracy of wage data in the short term while Labor continued its longer-term efforts to address larger weaknesses in the wage determination process. We expected that verification would also increase the accuracy of future wage determinations by reducing errors through educating both contractors and third parties about how to complete wage data forms and deterring the submission of fraudulent data. 
In March 1997, Labor’s OIG issued a report on its audit of a judgmental sample of wage data collected for use in calculating prevailing wage rate determinations issued in calendar year 1995. The audit did not find evidence of fraud or deliberate misreporting of wage data. However, OIG determined that inaccurate data submitted by both employers and third parties were frequently used in prevailing wage determinations and that access to payroll records was the most important factor to successfully verifying wage data. OIG echoed our suggestion that verification efforts be viewed as temporary steps until more fundamental reforms in Labor’s survey methodology could be made. In response to the House Appropriations Committee’s directive and our recommendation, Labor has implemented a program to verify wage survey data submitted by construction contractors and interested third parties, such as contractor associations and trade unions. To verify these data, Labor established procedures to select samples of wage data forms for telephone verification that differ depending on the entity that submitted the form. In addition, Labor has hired a private accounting firm to conduct on-site verification reviews. In both the telephone and on-site verification process, all data—regardless of which party submitted them—are verified only with the contractors. In response to our recommendation, in June 1996 Labor expanded its telephone verification process begun the previous year from one of verifying wage data submitted by third parties to one that also verifies wage data submitted by contractors. However, Labor verifies a different percentage of wage data forms submitted by third parties than that for data forms submitted by contractors. In addition, regardless of who submitted the wage data form, Labor asks for supporting documentation only from contractors, not from third parties. 
Labor verifies a larger percentage of wage data forms submitted by interested third parties than for those submitted by contractors, as shown in figure 3. For wage data submitted by third parties, wage analysts must select every tenth—10 percent—WD-10 wage data form submitted (and no fewer than two WD-10 forms) for verification with the contractor. In contrast, for wage data submitted by contractors, Labor requires that regions select every fiftieth—2 percent—wage data form (but no fewer than five) for telephone verification. Regional wage officials told us that they always use the sample size required by Labor’s national office for contractors, but they exceed it for third parties. For example, wage analysts in one region told us that they verify 100 percent of wage data forms submitted by third parties by requiring that all such forms be signed by contractors who paid the wages that were reported. A senior wage analyst in another region told us that they conduct telephone verification of almost all third-party wage data forms. Labor’s procedures require that wage analysts verify data only with the contractors, not with third parties, even for data submitted by third parties. The procedures require the regional offices to send letters to the contractors selected for verification requesting that supporting payroll documents be mailed to Labor. (See app. III for a sample of the letter Labor sends to contractors.) Wage analysts contact the contractors selected for verification by telephone to verify wage data regardless of whether the contractor has provided the requested documents. Wage analysts told us that they generally do not receive the documents requested from the contractor and, therefore, rely on the contractor’s verbal assurance that the data are correct. When the wages reported by the contractor, either with documentation or with oral confirmation, are different from those originally submitted, Labor replaces the wage data submitted on the WD-10. 
When the information provided by the contractor does not agree with the data submitted by a third party, regional wage analysts told us that they always take the word of the contractor over the information supplied by the third party. Unlike the process Labor uses with contractors, Labor seldom notifies third parties that the wage data forms they submitted have been selected for verification and does not ask them for documentary evidence to support the data they provided. Even though information from contractors who participate in the verification process sometimes leads to changes in the wage data, Labor includes in the prevailing wage calculation data reported by contractors who refuse to participate in the verification process, thereby assuming these data to be accurate. Labor does not keep records that would allow us to assess how often this occurs. One of Labor’s regional offices, however, provided data showing that in the 18-month period from April 1, 1997, to September 30, 1998, wage analysts in that region were unable to verify data by telephone for 41—57 percent—of the 72 WD-10 forms submitted by contractors that were selected for verification. Labor’s procedures allow it to assume that these data are correct and to include them in the wage calculation. However, this assumption is questionable because, of the remaining 31 forms that were verified by telephone, analysts found errors in data submitted in 9 of the forms, or 29 percent. In April 1997, Labor began a process of on-site wage data verification under a contract awarded to a private accounting firm. As of September 30, 1998, Labor had paid the firm a total of $1 million for on-site verification for fiscal years 1997 and 1998, and had awarded a new contract to the same firm for $500,000 in fiscal year 1999. 
The contract requires the accounting firm to review a sample of payroll records to verify wage survey data and interview employers to obtain information to assist in Labor’s efforts to reengineer the wage determination process. Labor selects what it describes as a 10-percent sample of the wage data forms submitted by both contractors and interested third parties for a specific area wage survey. It reaches this percentage by selecting every tenth data form for verification. In fact, the percentage of WD-10s selected for on-site verification usually exceeds 10 percent. This is because, after selecting every tenth data form submitted, Labor adds to the set of forms the auditor will review all other usable WD-10s that could be examined at that contractor’s office, such as the forms for other projects for which data were reported. As a result, 10 percent is the minimum sample size; the actual sample size varies from one survey to another. For example, for the 9 surveys for which final audit reports were completed, the actual percentage of WD-10s selected for on-site verification ranged from 10 to 56 percent. As with the telephone verification conducted by WHD wage analysts, auditors verify the data on-site only with the contractor who employed the workers, even when the data were submitted by third parties. The WHD regional office mails the contractors selected for verification a letter notifying them that an auditor will contact them to request a visit to their establishments. (See app. III for a sample letter sent to contractors.) The contractor is asked to make payroll records available to the auditor to confirm that the data reported on the WD-10 are complete and correct. While the contractor’s cooperation with the auditor is requested, it is voluntary for contractors whose wage data cover private construction projects, because Labor is not authorized to require contractors to provide records for such projects. 
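The expansion step described above, in which every tenth form seeds the sample and all other usable forms reviewable at the same contractors' offices are then added, can be sketched as follows. The form-to-contractor mapping is hypothetical.

```python
def onsite_sample(forms):
    """forms: (form_id, contractor) pairs in submission order.
    Seed with every tenth WD-10, then add every other usable form
    that could be reviewed at the same contractors' offices."""
    seeds = forms[9::10]
    offices = {contractor for _, contractor in seeds}
    return [f for f in forms if f[1] in offices]

# 40 hypothetical forms spread across 7 contractors
forms = [(f"WD-{i}", f"C{i % 7}") for i in range(1, 41)]
sample = onsite_sample(forms)
share = len(sample) / len(forms)   # at least 10 percent; usually well above it
```

Because every form sharing a contractor with a seed form is swept in, the realized share depends on how submissions cluster by contractor, which is why the actual percentages varied so widely across surveys.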
In contrast, contractors on federal projects are required by law to grant access to payroll records related to the federal projects. Labor, however, does not specify this in its letters to these contractors. Labor is concerned that doing so might discourage contractors from participating in future Davis-Bacon surveys, which could reduce the number of survey respondents and thus affect the accuracy of wage determinations. The auditor compares the data reported on the WD-10 with the payroll records for the reported project. Discrepancies between the original WD-10 submitted and the payroll records or contractor’s testimony are recorded by the auditor. After completing the audit of the area wage survey data, the auditor submits a preliminary report to Labor, which includes a list of all discrepancies and a list of contractors that did not participate in the verification process. Labor reviews the preliminary report and makes follow-up telephone calls with contractors as necessary. After Labor reviews the auditor’s findings, the accounting firm submits a final report reflecting changes on which Labor and the firm have agreed. Regional wage analysts told us that only after Labor receives the final report do the analysts incorporate appropriate changes to wage data, recalculate wage data, and forward recommended wage rate determinations to Labor’s national office for final review and issuance. During the 15-month period from the beginning of the on-site verification process in April 1997 to June 30, 1998, Labor sent 85 surveys to the auditors for on-site verification. As of September 30, 1998, the auditors had completed audits for 30 of the 85 area surveys and had issued final reports for 9 of these. The auditors reported finding errors in both the wages and the number of workers reported in the majority of wage data forms they reviewed. 
Specifically, for the nine surveys for which they had completed final reports, the auditors found errors in wage rates reported in about 70 percent of all wage data forms reviewed. Labor has issued new wage determinations based on four of these nine surveys (see fig. 4). Verification efforts conducted to date will have a limited impact on the accuracy of prevailing wage rate determinations and will increase the time required to issue them. The extent to which the verification process improves the accuracy of Labor’s prevailing wage determinations will be limited by the congressional directive to use a random sample of wage data forms to select wage data for verification and the procedures Labor uses to implement the directive. In addition, on-site verification has added time to the wage determination process, increasing the likelihood that data used will be outdated. The on-site verification process has, however, provided information that Labor is using as it tries to improve the process for future wage determinations. Although Labor has identified and corrected numerous errors in the wage data submitted, it has been able to correct only the limited number of wage data forms verified. Since this represents only a small portion of the total number of data forms submitted, these corrections have only a minimal impact on the accuracy of the data used to calculate wage determinations. As a result, even though we found that errors the auditors identified in all nine area surveys averaged 76 cents per hour, Labor officials estimate that the changes to wage determinations will amount to an average of 10 cents per hour. Furthermore, both the Committee directive to use a random sample of wage data forms to select wage data for verification and the procedures Labor uses to implement the directive also limit the extent to which errors found will improve the accuracy of wage determinations. 
While a random sample is often assumed to be the most effective approach to selecting a sample, it is not the best approach for verification in this situation. It would be impractical to verify a large enough random sample of wage forms to ensure that verification would have an impact on the accuracy of the wage determination. Moreover, the procedure Labor uses to verify wage data (1) does not take into account whether the data it selects for verification are likely to be used in calculating wage determinations, (2) assumes that data from contractors that refuse access to supporting documentation are correct, and (3) does not attempt to verify data with third-party submitters when contractors are unable to provide or refuse access to supporting documentation. Although the House Committee directed Labor to use a random sample to select wage data forms to verify the accuracy of wage data, Labor does not select a large enough number of data forms to ensure that the errors found will improve the accuracy of wage determinations, nor would it be feasible to do so. Although random sampling is sometimes the best approach to selecting data, in some circumstances other sampling strategies are more effective. A random sample would allow Labor to assume that data found to be in error were representative of all data submitted, and Labor could adjust the prevailing wage rate rather than adjusting the data on only the WD-10s selected for verification. However, the sample size needed for this approach would require Labor to verify most of the wage data submitted. Conversely, a carefully chosen judgmental sample would allow Labor to select wage data forms for which correcting errors found would have the greatest effect on the accuracy of the wage determinations. 
To select a representative random sample that would ensure the accuracy of the data used to determine prevailing wage rates, Labor would have to sample workers within each job classification rather than wage data forms, which often combine job classifications, as it does now. Labor currently determines the amount of wage data to be verified by selecting a uniform percentage of WD-10s for each area survey, ranging from 2 percent to 10 percent. However, because Labor determines multiple prevailing wage rates, one for each job classification, it must select a sample of wage data from every job classification within a survey to ensure a representative sample for all prevailing wage rates. Since wage data forms often include data for multiple job classifications, sampling wage data forms alone does not ensure representativeness within specific job classifications. Labor would also have to select a sufficient number of workers within each job class to meet the statistical criteria for appropriately projecting from sampled cases to all the wage data. However, the number of workers reported within a job class can be small, often fewer than 10. As a result, Labor would need to select a sample size equal to or close to the total number of workers. For example, we calculated that a statistically representative sample would require Labor to verify all data submitted for 40 of 45 job classifications in one fiscal year 1997 area survey in order to be within 50 cents per hour of the correct wage. For all job classifications, Labor would have had to verify the wages of more than 5 times the number of workers it verified, 439 rather than 78 of the 664 workers for whom wages were reported. (See app. IV.) Using a random sample does not allow Labor to judgmentally select for verification wage data that will have the greatest potential impact on accuracy. 
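The appendix IV calculation is not reproduced in this report, but a standard sample-size formula for estimating a mean within a fixed margin of error, with a finite-population correction, illustrates why small job classes force a near-census. The wage standard deviation and confidence level below are assumptions chosen for illustration only, not figures from the report.

```python
import math

def required_sample(N, sigma, margin, z=1.96):
    """Workers to verify to estimate a mean wage within +/- margin
    at roughly 95 percent confidence, with finite-population correction."""
    n0 = (z * sigma / margin) ** 2           # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# hypothetical job class: 10 reported workers, wage standard deviation of $2.00,
# target of staying within 50 cents per hour of the correct wage
n = required_sample(N=10, sigma=2.00, margin=0.50)
# nearly every worker in the class must be verified
```

Under these assumptions, 9 of the 10 workers would have to be verified, which is effectively a census of the job class.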
For example, Labor verifies wage data that it does not expect to, and does not, use to calculate prevailing wage determinations. In addition, Labor does not consider the cost of travel and staff time in selecting wage data forms to verify. Of the 30 area surveys for which on-site verification preliminary reports had been completed as of September 30, 1998, 29 included verification of wage data that would not be used in calculating prevailing wage rates. The data verified would not be used for two reasons. First, in some instances, wage determinations would be based on wage rates included in collective bargaining agreements, not on the wage data reported—whether the data were accurate or not. In general, when 50 percent or more of the workers employed in a job classification are paid the same wage, it is because that rate is specified in a collective bargaining agreement covering the classification. When this occurs, Labor does not determine the prevailing wage rates on the basis of data reported on the WD-10s. Instead, it uses the collective bargaining rate as the prevailing wage rate. Second, in some instances, Labor knew it would not use the wage data because it had received insufficient data within the specific job classification to allow it to issue a wage determination. Labor’s procedures require that it receive responses from a minimum of 25 percent of the contractors and third parties it contacts for data and wages covering a minimum of three workers from two contractors to determine the prevailing wage. When Labor receives too few responses, it does not issue a new wage determination. For example, for one of the four area surveys for which Labor has issued prevailing wage rate determinations, none of the verified wages were used to determine the prevailing wage rates. For this specific survey, the rate specified in the collective bargaining agreement was used for 34 of the 36 individual job classifications. 
Data for the remaining two job classifications were not used because Labor did not receive sufficient data for these two job classifications from the survey responses. Thus, although the on-site verification for this one survey cost about $40,000, and it required 5 weeks to complete the preliminary report, none of the results were used. Labor also does not balance the benefits against the costs of verifying specific wage data forms when selecting its sample. For example, selecting wage data forms with wages reported for the greatest number of workers within a specific job classification has the potential for greater benefit in improving the accuracy of the wage determination than does selecting and verifying wages for a smaller number of workers. Using the sample WD-10s in figure 2, if Labor’s random sample resulted in verifying submission 2, which includes data for only 2 of a total of 82 workers for whom wages were used in the calculation, correcting even large errors would have little impact on the prevailing wage determination because only these 2 wage rates could be adjusted. On the other hand, even a small error in wage rates reported for the 80 workers would have a direct effect on the wage determination. Using the hypothetical WD-10s in figure 2, if the fringe benefits reported on submission 1 were incorrectly omitted from submission 2, correcting the wages would increase the prevailing wage rate from $18.64 to $18.71, an increase of 7 cents. However, as figure 5 shows, if the workers did not receive the fringe benefits on submission 1, which includes data on 80 workers, correcting the error would reduce the prevailing wage rate from $18.64 to $16.00, a decrease of $2.64. Labor also does not consider the cost of accessing payroll records when selecting wage data forms for verification. 
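The arithmetic behind the figure 2 example above can be reproduced with a simple worker-weighted average. The individual wage and fringe rates below are hypothetical values chosen to match the report's figures; the actual determination process also applies the majority rule discussed earlier, which is not shown here.

```python
def prevailing(groups):
    """Worker-weighted average of wage plus fringe across submissions.
    groups: (workers, wage, fringe) tuples."""
    total = sum(n * (wage + fringe) for n, wage, fringe in groups)
    workers = sum(n for n, _, _ in groups)
    return round(total / workers, 2)

# submission 1: 80 workers at $16.00 wage plus $2.71 fringe (hypothetical)
# submission 2: 2 workers at $15.84 wage, no fringe reported (hypothetical)
as_reported = prevailing([(80, 16.00, 2.71), (2, 15.84, 0.00)])   # $18.64
add_fringe = prevailing([(80, 16.00, 2.71), (2, 15.84, 2.71)])    # $18.71
drop_fringe = prevailing([(80, 16.00, 0.00), (2, 15.84, 0.00)])   # $16.00
```

Correcting an omitted fringe benefit for the 2 workers moves the rate only 7 cents, while removing an unearned fringe benefit from the 80 workers moves it $2.64, which is why errors on large submissions dominate the determination.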
To do on-site verification, the accounting firm’s auditors must contact individual contractors and visit their administrative offices to review payroll records; these offices may be located far from one another. For example, as shown in figure 6, for the area wage survey covering Lawrence and Greene counties in Pennsylvania, contractors’ offices were located in six states and were as far away as Texas and Massachusetts. Using Labor’s systematic selection process, auditors would attempt to visit the contractor’s administrative office for submission 1 in our example, regardless of the great distance from the other sites and the small number of drywallers whose reported wages were to be verified. Labor assumes that data from contractors that are unable to provide or refuse access to supporting documentation are correct by including them in the wage rate calculations. A more reasonable assumption would be that data from contractors who refuse access have a greater chance of being inaccurate than data from contractors who provide access. Labor did not have comprehensive data on the reasons contractors would not provide access to the payroll records necessary to verify reported wage data. Both Labor’s wage analysts and the on-site accounting firm auditors reported that contractors had many reasons for not participating, such as not wanting to devote the necessary resources to access the records or that the records were no longer available. It is also possible, however, that wage data may be fraudulent or carelessly reported, because contractors who knowingly submit fraudulent data may be unlikely to voluntarily submit to an audit of their payroll records out of fear of prosecution for committing a federal crime. As shown in figure 7, 27 percent of the contractors selected for verification either refused or were unable to provide on-site auditors access to some or all of the payroll records required for verification. 
For the 30 on-site audits for which Labor has received preliminary reports, auditors were unable to visit 20 percent, or 59, of the 293 contractors selected for on-site verification either because the contractors would not participate in on-site verification or the accounting firm was unable to schedule an acceptable time for auditors to visit. Another 7 percent of the contractors denied or were unable to provide access to some or all of the necessary payroll records after the auditors arrived at the contractors’ offices. Labor’s OIG found in its verification review that access to payroll records was the most important factor in successfully verifying wage data. While Labor does not have legal authority under the Davis-Bacon Act to access payroll records for workers involved in private construction, it does have authority to access such records for federally funded or assisted construction work covered by the act. Labor officials told us that they do not exercise this right because to do so might result in reduced accuracy of future wage determinations if it discouraged contractors from voluntarily providing wage data for future surveys. Labor’s procedure of verifying wage data only with contractors also limits the accuracy gains achievable from the verification process and could actually result in decreased accuracy. For example, our review of on-site verification reports found wage data, including fringe benefit data, submitted by third parties that auditors were unable to verify through the contractor. The contractor either did not have records on fringe benefits paid or refused the auditors access to any payroll records. In some cases in which the contractor did not verify the accuracy of the fringe benefits, the auditors recorded $0 under the fringe benefits as though the reported data were inaccurate. These workers, however, were covered by a collective bargaining agreement. 
Especially for fringe benefits paid under collective bargaining agreements, unions often have documentation to verify amounts paid. In fact, regional wage analysts told us that, in some cases, unions may be the only source of data on fringe benefits. By not seeking documentation from the third party, the verification process may have erroneously reduced the amount of wages and fringe benefits paid and thus contributed to a less accurate wage determination. As would be expected, verification efforts have increased the time required to issue wage determinations after the area survey has been completed. Labor does not collect data on the amount of time required to complete telephone verification, but Labor officials who administer the wage determination process estimated that telephone verification added an average of 2 weeks to the process. Telephone verification can be accomplished relatively quickly because Labor can conduct telephone verification as wage data are being submitted. In addition, it does not require travel, which would add time and expense. On-site verification, however, adds much more time—months rather than weeks—to the process because (1) it requires travel and (2) in order to identify all wage data forms related to a specific contractor and more efficiently manage travel, it does not begin until after the survey cutoff date for wage forms has passed. In fact, regional wage analysts told us that they do not send the surveys to the accounting firm for on-site verification until the telephone verification and all preliminary analysis have been completed such that wage determinations are ready to be forwarded to the national office for review and publication. 
Our analysis of the 30 area surveys for which the auditors submitted preliminary reports shows that the time between when Labor sent the area survey data to the accounting firm for verification and when Labor received the firm’s preliminary report ranged from 36 to 408 days, with an average of 211 days. However, Labor officials told us they cannot begin final calculations until they receive the final report from the auditors, which incorporates the results of discussions with Labor. Other Labor activities, such as reviewing the results of on-site verification audits and making any necessary adjustments to wage determinations before issuance, also add time to the wage determination process, but Labor has no data to estimate the amount of time these activities take. Thus, although verification’s impact on timeliness is greater than the time elapsed between when Labor forwards the surveys to the auditors and when it receives the preliminary report, the total time added cannot be determined. WHD officials told us that they expect this delay will decrease over time, attributing some of it to the time required for WHD staff and accounting firm auditors to master the verification process. While the effect of the verification process on the accuracy of data used in wage determinations has been minimal, these efforts may have a greater impact over the long term by deterring contractors and third parties from submitting inaccurate or fraudulent data, educating contractors about wage data form procedures, and assisting Labor in its reengineering efforts. Labor officials stated that they had focused their verification procedures on identifying and deterring fraud rather than on ensuring the accuracy of the wage determinations. 
But they also told us that the value of verification as a deterrent to the submission of fraudulent wage data must be balanced with its potential to deter voluntary participation in future Davis-Bacon Act surveys, which could, conversely, reduce the accuracy of the wage determinations. Verification efforts may help educate contractors about how to complete wage survey forms properly. But Labor’s procedure of not including third-party submitters in the verification process limits the potential for verification to improve the accuracy of future wage determinations. Third parties do not benefit from the potential educational value that verification has because they are not informed of any errors identified on the WD-10s they submitted, nor do they learn how to properly complete them. Through its verification efforts, Labor has also obtained information that it is using in its long-term efforts to reengineer the wage determination process. Labor included in the on-site verification process questions to contractors about the wage survey form and its terms, such as whether the contractors had difficulty understanding and completing the survey form. Of those contractors who reported confusion or difficulty with the form, many identified the “peak week” and the number of workers employed during the peak week as major sources of confusion. In addition, the accounting firm found in the course of its on-site payroll verification that contractors and third parties that submitted wage data had difficulty completing the form, including accurately identifying the job classification of workers. The auditors found that these difficulties affected the accuracy of the wage data reported. For example, for the nine area surveys for which the auditors completed final reports, they identified errors in wage data for 38 percent of all contractors visited caused by misidentification of the peak week. 
Labor is also redesigning its wage reporting form in response to concerns raised by contractors during on-site verification. In addition, Labor is piloting a statewide survey using wage data for “total man-hours” in place of wage data for the peak week. Without accurate and timely data, Labor cannot determine prevailing wage rates that correctly reflect the labor market. While obtaining accurate wage data through Labor’s voluntary surveys will not ensure that wage rate determinations are accurate, inaccurate data guarantee inaccurate wage determinations. We recognize that achieving 100-percent accuracy is not possible. However, inaccurate prevailing wage determinations could lead to the payment of wages that are either lower than what workers should receive or higher than the actual prevailing wages, which would inflate construction costs at the taxpayers’ expense. A system to verify wage data submitted by contractors and third parties is necessary to ensure that inaccurate data do not have a negative effect on the prevailing wage determination. As directed by the House Appropriations Committee, Labor has implemented a process to verify wage data submitted by both contractors and third parties. This process allows it to identify and correct errors it finds in wage data reported. It may also have a positive impact on the accuracy of wage data obtained in future wage surveys by educating contractors on the proper completion of wage data forms. In addition, this process has helped Labor obtain information that will assist it in its reengineering efforts. For example, errors in wages reported often occur because of confusion by contractors and third parties over how to report workers and wages for the peak week. Labor is exploring the alternative use of “man-hours” rather than peak week, which may be easier for contractors and third parties to report. The process Labor is using, however, is unnecessarily costly, in terms of both money and time. 
On-site verification is a costly approach to verifying wage data, and it delays the issuance of wage rate determinations by months, especially when compared with telephone verification that is combined with supporting payroll records submitted upon request. On-site verification requires a cadre of auditors to travel to worksites around the country to review payroll records. While Labor’s OIG found that access to payroll records was essential to successfully verifying wage data, the process need not require that the contractor be contacted in person rather than by telephone. Obtaining copies of payroll records by mail as part of the telephone verification is significantly less costly and takes less time. Relying on a random rather than a judgmental sample limits the accuracy gains achievable through verification. While a random sample is often assumed to be better than a judgmental sample, it is actually less effective in selecting wage data forms to verify that will have an impact on the accuracy of the wage determinations. Labor does not gain the most significant benefits of a random sample—that is, being able to assume that errors found in verified wage data forms are representative of all wage data forms and adjust wage rates accordingly—because it is not feasible to verify a sufficiently large number of wage data forms. In contrast, a carefully designed judgmental sample to select contractors for verification could consider the likelihood that the data will be used, the number of workers within a job classification, and the geographic dispersion of contractors. The impact of Labor’s verification procedures on the accuracy of the wage determinations is also limited by the action it takes when documentation cannot readily be obtained from a contractor. In our view, at least two aspects of Labor’s verification procedures contribute to limiting accuracy. 
First, contractors may refuse access to supporting documentation for many legitimate reasons—such as the time required—but contractors who refuse to provide the supporting documentation are more likely than those who provide access to have submitted fraudulent or carelessly reported data. Labor, however, (1) accepts the data and uses them as if documentation had supported them and (2) allows federal contractors to deny access to the supporting documentation even though Labor has the legal authority to access their records. While discarding all such data might have negative consequences for Labor’s ability to issue wage determinations, accepting all such data may contribute to inaccurate wage determinations. Labor’s approach does not achieve the needed balance in deciding which data to include and exclude. Second, Labor sometimes eliminates or revises data inappropriately when it does not seek supporting documentation from the third parties that submitted the data. Wage data submitted for a project by a third party are generally verified against the payroll or oral testimony of the contractor associated with that project. The third party that submitted the original WD-10 is not provided the opportunity to validate the information it submitted; final corrections are generally provided only by the contractor. As a result, data supplied by third parties may be eliminated or revised inaccurately, even though, in some cases, only the third party, not the contractor, can provide supporting documentation, for example, for benefits provided by a union. Supporting documentation provided by third parties could improve the completeness and thus the accuracy of the data used. To reduce the cost of verification and increase the benefits, we recommend that the Secretary of Labor direct the WHD Administrator to revise verification procedures to maximize the expected value to be gained from verification. 
Specifically, Labor should increase the use of telephone verification—while decreasing on-site verification audits—and increase efforts to obtain payroll documentation from all selected submitters; change the procedures used to select wage data for verification, using a judgmental sample of wage data forms based on the potential impact of the data on prevailing wage rate determinations rather than using a random sample; and revise verification procedures to take more appropriate action when documentation cannot readily be obtained from a contractor, such as not using data when supporting documentation is requested but not provided, requiring documentation where possible, and giving third parties an opportunity to provide supporting documentation for data they submitted. We provided a draft of this report to the Department of Labor for its review and comment. It generally agreed with our recommendations and agreed to implement them by revising the verification process. Labor also stated that our report was generally helpful and that some of our recommendations would decrease costs and improve timeliness. However, the Department took issue with some of our conclusions concerning the accuracy of survey data submissions by contractors and the use of data from contractors who refuse auditors access to supporting payroll records. Labor also provided technical comments and corrections, and we have revised our report as necessary. With regard to the accuracy of the survey data, Labor stated that despite the many errors found by both the on-site auditors and Labor’s OIG, the limited revisions to wage determinations that resulted from correcting these errors demonstrated that the wage determinations closely approximated the accurate prevailing wage rate. We disagree, because the small number of data submissions verified is not a valid representative sample of all data submissions used to calculate the revised wage determinations. 
Furthermore, the OIG’s report does not support Labor’s conclusion. It states that: “The errors we discovered did not materially change many of the wage decisions because the data we sampled often represented a small portion of the responses for an individual WH survey. . . . If we had conducted more payroll reviews, we believe more exceptions would have been identified and would have revealed more material errors in published wage decisions.” With regard to the use of data when contractors refuse access to the supporting payroll records, Labor disagreed with our conclusion that contractors unable or unwilling to provide auditors access to payroll records are more likely to have submitted fraudulent data than those who provide records. Labor based its conclusions that contractors try to provide accurate and complete information on the data verification that has been done to date by both Labor’s OIG and WHD. Basing conclusions about contractors unwilling or unable to provide access to payroll records on verification of data from contractors that do provide access is not logical or convincing. We continue to believe that employers submitting fraudulent or unsubstantiated wage data forms are unlikely to voluntarily provide access to payroll records for review. Because all verification efforts conducted to date, including those of the OIG, have relied on voluntary access to payroll records, the absence of fraud in verified wage data submissions provides no evidence that contractors who denied access did not submit fraudulent data. We have, however, clarified our conclusion on the use of data submitted by contractors or third parties that do not cooperate with verification efforts to allow Labor analysts to use judgment in deciding when to exclude such data from its wage determination calculations. 
For example, we agree that Labor should consider including data from contractors that routinely cooperated with data verification efforts in the past and whose data were determined to be generally accurate. Another factor to consider would be the possible adverse impact of discarding specific data on Labor’s ability to issue a wage determination. Finally, in agreeing to select data using a judgmental sample, Labor stated that it intends to continue selecting some data for verification using a systematic sample, albeit fewer than it does now. To the extent that data selected randomly represent a small segment of all data verified, Labor’s proposed approach is consistent with our recommendation. We are sending copies of this report to the Secretary of Labor, appropriate congressional committees, and other interested parties. Please call me at (202) 512-7014 or Larry Horinko at (202) 512-7001 if you or your staffs have any questions about this report. Other major contributors were John Carney, Robert G. Crystal, Lise Levie, Ann P. McDermott, Elizabeth T. Morrison, and Ronni Schwartz. The House Committee on Appropriations in its reports on appropriations for the Departments of Labor, Health and Human Services, and Education and related agencies for fiscal years 1998 and 1999 asked us to (1) review the Department of Labor’s efforts to verify a random sample of employers’ wage data submissions and select a sample of submissions for on-site data verification and (2) determine the likely effect of these efforts on the accuracy and timeliness of Davis-Bacon wage determinations. To describe Labor’s actions, we interviewed Department of Labor officials in the Wage and Hour Division (WHD) at both headquarters and the five regional offices responsible for determining prevailing wage rates. 
At the southeast regional office in Atlanta, Georgia, we interviewed WHD officials and reviewed the wage determination process in more detail, including reading relevant documentation and reviewing the process used to create a computerized database of wage data forms. We also obtained a draft of WHD’s procedures for telephone and on-site verification and task orders for on-site verification. In addition, we interviewed representatives of the private accounting firm contracted to conduct on-site payroll verification, who provided time schedules, cost data, and other information about the firm’s on-site reviews. To determine the likely effect of Labor’s verification efforts, we obtained and analyzed data from a number of different sources. These data include the following: all available WHD preliminary analyses (forms WD-22) for all wage surveys sent to the contractor for verification for the 18-month period from the beginning of on-site verification in April 1997 through September 1998; all preliminary and final reports completed by the accounting firm for on-site audits as of September 30, 1998; and electronic records of wage data forms (forms WD-10) maintained by WHD under contract with Computer Data Systems, Inc., concerning the area surveys for which the accounting firm had issued final reports for on-site verification. Using these and other data provided by WHD, we conducted several analyses. To obtain the average error amount in wage rates and the percentage of wage data forms with errors for the nine area surveys for which the accounting firm issued final reports, we identified the dollar value of errors on each WD-10 by job classification. We determined the dollar value of errors in wage rates by calculating the absolute value of the difference between the sum of the reported wages and fringe benefits and the sum of the verified wages and fringe benefits. 
We weighted the amount of error by the lower of the number of employees reported or the number of employees verified. Because the auditors were not consistent in their analysis and reporting, as necessary we made assumptions about individual wage rates, such as when an average wage rate was reported but individual wage rates were verified. We then calculated the average of the absolute value of error for all workers. To determine the percentage of contractors providing full and partial access to payroll records, we analyzed data in the 30 preliminary reports summarizing the results of on-site verification audits. Specifically, we counted the number of selected contractors the auditors reported as refusing access and those the auditors failed to access despite persistent efforts over a matter of weeks or months. For those contractors who allowed the auditors access to the workplace, we identified the number of contractors unable or unwilling to provide access to the payroll records necessary to verify the wage rates reported on the selected forms WD-10. To determine the amount of time between when Labor sent area survey data to the auditor for on-site verification and when Labor received the auditor’s preliminary report, we relied on data provided by WHD for the 30 surveys with preliminary reports completed by the auditor. We computed the time elapsed between the date Labor sent the survey to the auditor for on-site verification and the date the regional office received the preliminary report. In addressing the second objective, we recognize that while accurate wage data are necessary, they are not sufficient to ensure that wage determinations accurately reflect the prevailing wage that a contractor would have to pay to obtain construction workers from the local area at the market wage. Other issues must be considered to improve the wage determination process, such as the time lag between obtaining the wage surveys and issuing the wage determinations. 
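The error measure described above can be sketched in Python. This is an illustrative reconstruction of the stated methodology, not GAO's actual analysis code; the record field names are hypothetical.

```python
def average_wage_error(records):
    """Average absolute wage-rate error per worker.

    For each verified entry, the error is the absolute difference between
    reported and verified wages plus fringe benefits, weighted by the lower
    of the reported or verified worker count, then averaged over all
    workers -- as described in the methodology. Field names are
    illustrative, not Labor's actual data layout.
    """
    total_error = 0.0
    total_workers = 0
    for r in records:
        reported = r["reported_wage"] + r["reported_fringe"]
        verified = r["verified_wage"] + r["verified_fringe"]
        workers = min(r["reported_workers"], r["verified_workers"])
        total_error += abs(reported - verified) * workers
        total_workers += workers
    return total_error / total_workers if total_workers else 0.0
```

For example, one form overstating wages plus fringes by $1.00 for 4 workers and another understating them by $0.50 for 4 workers yields an average absolute error of $0.75 per worker.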
Labor is exploring options to reengineer its wage determination process in the long term, which we will review at a later date. We did not attempt to assess the accuracy of the prevailing wage determinations that result from these surveys, which was outside the scope of this review. We also did not verify the results of on-site audit reviews; we focused on problems with procedures used rather than contract compliance. The Davis-Bacon Act requires that workers employed on federal construction contracts valued in excess of $2,000 be paid, at a minimum, wages and fringe benefits that the Secretary of Labor determines to be prevailing for corresponding classes of workers employed on projects that are similar in character to the contract work in the geographic area where the construction takes place. To determine the prevailing wages and fringe benefits in various areas throughout the United States, Labor’s WHD periodically surveys wages and fringe benefits paid to workers in four basic types of construction (building, residential, highway, and heavy). Labor has designated the county as the basic geographic unit for data collection, although Labor also conducts some surveys setting prevailing wage rates for groups of counties. Wage rates are issued for a series of job classifications in the four basic types of construction, so each wage determination requires the calculation of prevailing wages for many different trades, such as electrician, plumber, and carpenter. For example, in 1996 the prevailing wage rates for the Washington, D.C., metropolitan area included wage rates for 143 different construction trade occupations. Because there are over 3,000 counties and four basic types of construction, more than 12,000 surveys could be conducted each year if every county in the United States were surveyed. In fiscal year 1997, Labor issued 1,860 individual wage rate determinations based on 43 area wage surveys. 
As shown in figure 1, Labor’s wage determination process consists of four basic stages: planning and scheduling surveys of employer wages and fringe benefits in similar job classifications on comparable construction projects; conducting surveys of employers and third parties, such as representatives of unions or industry associations, on construction projects; clarifying and analyzing respondents’ data; and issuing the wage determinations. Labor annually identifies the geographic areas that it plans to survey. Because Labor has limited resources, a key task of Labor’s staff is to identify those counties and types of construction most in need of a new survey. In selecting areas for inclusion in planned surveys, the regional offices establish priorities based on criteria that include the need for a new survey based on the volume of federal construction in the area; the age of the most recent survey; and requests or complaints from interested parties, such as state and county agencies, unions, and contractors’ associations. If a type of construction in a particular county is covered by a wage determination based on collective bargaining agreements (CBA) and Labor has no indication that the situation has changed such that a wage determination should now reflect nonunion rates, an updated wage determination may be based on updated CBAs. The unions submit their updated CBAs directly to the national office. Planning begins in the third quarter of each fiscal year when the national office provides regional offices with the Regional Survey Planning Report (RSPR). The RSPR provides data, obtained under contract with the F.W. Dodge Division of McGraw-Hill Information Systems, showing the number and value of active construction projects by region, state, county, and type of construction and giving the percentage of total construction that is federally financed. Labor uses the F.W. 
Dodge data because they comprise the only continuous nationwide database on construction projects. Labor supplements the F.W. Dodge data with additional information provided to the national office by federal agencies regarding their planned construction projects. The RSPR also includes the date of the most recent survey for each county and whether the existing wage determinations for each county are union, nonunion, or a combination of both. Using this information, the regional offices, in consultation with the national office, designate the counties and type of construction to be included in the upcoming regional surveys. Although Labor usually designates the county as the geographic unit for data collection, in some cases more than one county is included in a specific data-gathering effort. The regional offices determine the resources required to conduct each of the priority surveys. When all available resources have been allocated, the regional offices transmit to the national office for review their schedules of the surveys they plan to do: the types of construction, geographic area, and time periods that define each survey. When Labor’s national office has approved all regional offices’ preliminary survey schedules, it assembles them in a national survey schedule that it transmits to interested parties, such as major national contractor and labor organizations, for their review and comment. The national office transmits any comments or suggestions received from interested parties to its affected regional offices. Organizations proposing modifications of the schedule are asked to support their perceived need for alternative survey locations by providing sufficient evidence of the wages paid to workers in the type of construction in question in the area where they want a survey conducted. The target date for establishing the final fiscal year survey schedule is September 15. 
Once the national office has established the final schedule, each regional office starts to obtain information it can use to generate lists of survey participants for each of the surveys it plans to conduct. Each regional office then contacts Construction Resources Analysis at the University of Tennessee, which applies a model to the F.W. Dodge data that identifies all construction projects in the start-up phase within the parameters specified in the regional office’s request and produces a file of projects that were active during a given time period. The time period may be 3 months or longer, depending on whether the number of projects active during the period is adequate for a particular survey. F.W. Dodge provides information on each project directly to the regional offices. The F.W. Dodge reports for each project include the location, type of construction, and cost; the name and address of the contractor or other key firm associated with the project; and, if available, the subcontractors. When the F.W. Dodge reports are received by the regional offices, Labor analysts screen them to make sure the projects meet four basic criteria for each survey. The project must be of the correct construction type, be in the correct geographic area, fall within the survey time frame, and have a value of at least $2,000. In addition to obtaining files of active projects, Labor analysts are encouraged to research files of unsolicited information that may contain payment evidence submitted in the past that is within the scope of a current survey. When the regional offices are ready to conduct the new surveys, they send a WD-10 wage reporting form to each contractor (or employer) identified by the F.W. Dodge reports as being in charge of one of the projects to be surveyed, together with a transmittal letter that requests information on any additional applicable projects the contractor may have. 
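The four screening criteria analysts apply to the F.W. Dodge reports can be expressed as a simple filter. This is a sketch of the stated rule, not Labor's actual screening procedure; field names and date formats are assumptions.

```python
def meets_survey_criteria(project, survey):
    """Screen an F.W. Dodge project report against the four basic criteria
    described: correct construction type, correct geographic area, within
    the survey time frame, and a value of at least $2,000.

    Dates are ISO-formatted strings (so they compare chronologically);
    all field names are illustrative.
    """
    return (
        project["construction_type"] == survey["construction_type"]
        and project["county"] in survey["counties"]
        and survey["start"] <= project["active_date"] <= survey["end"]
        and project["value"] >= 2000
    )
```

A project failing any one criterion, such as a value under $2,000, would be excluded from the survey mailing list.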
Every WD-10 that goes out for a particular project has on it a unique project code, the location of the project, and a description of it. Data requested on the WD-10 include a description of the project and its location, in order to assure the regional office that each project for which it receives data is the same as the one it intended to have in the survey (see examples in fig. 2). The WD-10 also requests the contractor’s name and address; the value of the project; the starting and completion dates; the wage rate, including fringe benefits, paid to each worker; and the number of workers employed in each classification during the week of peak activity for that classification. The week of peak or highest activity for each job classification is the week when the most workers were employed in that particular classification. The survey respondent is also asked to indicate which of four categories of construction the project belongs in. In addition, about 2 weeks before a survey is scheduled to begin, regional offices send WD-10s and transmittal letters to a list of third parties, such as national and local unions and industry associations, to encourage participation. Labor encourages the submission of wage information from third parties, including unions and contractors’ associations that are not the direct employers of the workers in question, in an effort to collect as much data as possible. Third parties that obtain wage data for their own purposes may share it with Labor without identifying specific workers. For example, union officials need wage information to correctly assess workers’ contributions toward fringe benefits. Third-party data generally serve as a check on data submitted by contractors if both submit data on the same project. Regional offices also organize local meetings with members of interested organizations to explain the purpose of the surveys and how to fill out the WD-10. Because the F.W. 
Dodge reports do not identify all the subcontractors, both the WD-10 and the transmittal letter ask for a list of subcontractors on each project. Subcontractors generally employ the largest portion of on-site workers, so their identification is considered critical to the success of the wage survey. Analysts send WD-10s and transmittal letters to subcontractors as subcontractor lists are received. Transmittal letters also state that survey respondents will receive an acknowledgment of data submitted and that the respondent should contact the regional office if one is not received. Providing an acknowledgment is intended to reduce the number of complaints that data furnished were not considered in the survey. Labor analysts send contractors who do not respond to the survey a second WD-10 and a follow-up letter. If they still do not respond, analysts attempt to contact them by telephone to encourage them to participate. As the Labor wage analysts receive the completed WD-10s in the regional offices, they review and analyze the data. Labor’s training manual guides the analyst through each block of the WD-10, pointing out problems to look for in data received for each one. Analysts are instructed to write the information they received by telephone directly on the WD-10 in a contrasting color of ink, indicating the source and the date received. They are instructed to draw one line through the old information so it is still legible. Labor’s wage analysts review the WD-10s to identify missing information, ambiguities, and inconsistencies that they then attempt to clarify by telephone. For example, an analyst may call a contractor for a description of the work done on a project in order to confirm that a particular project has been classified according to the correct construction type. An analyst may also call a contractor to ask about the specific type of work that was performed by an employee in a classification that is reported in generic terms, such as a mechanic. 
In that situation, the analyst would specify on the WD-10 whether it is a plumber mechanic or some other type of mechanic to make sure that the wages reported are appropriately matched to the occupations that are paid those rates. Similarly, because of variations in area practice, analysts may routinely call to find out what type of work the employees in certain classifications are doing. This is because in some areas of the country some contractors have established particular duties of traditional general crafts—for example, carpenters—as specialty crafts, which are usually paid at lower rates than the general craft. See letter portion of this report for a description of the verification process. When an analyst is satisfied that any remaining issues with respect to the data on the forms WD-10 for a particular project have been resolved, the data are recorded and tabulated. The analyst enters them into a computer, which uses the data to generate a Project Wage Summary, form WD-22a, for reporting survey information on a project-by-project basis. The WD-22a has a section for reporting the name, location, and value of each project; the number of employees who were in each classification; and their hourly wage and fringe benefits. It also has a section for reporting the date of completion or percentage of the project completed, whichever is applicable. At least 2 weeks before the survey cutoff date, the response rate for the survey is calculated to allow time to take follow-up action if the response rate is determined to be inadequate. For example, WHD operational procedures specify that if data gathered for building or residential surveys provide less than a 25-percent usable response rate or less than one-half of the required key classes of workers, the analyst will need to obtain data from comparable federally financed projects in the same locality. 
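The adequacy thresholds just described for building and residential surveys amount to a two-part test, which might be checked as follows. This is a sketch of the stated rule only; parameter names are hypothetical and WHD's actual procedures are more detailed.

```python
def survey_data_adequate(usable_responses, total_surveyed,
                         key_classes_with_data, key_classes_required):
    """Check the two thresholds described in WHD operational procedures
    for building or residential surveys: a usable response rate of at
    least 25 percent, and data for at least one-half of the required key
    classes of workers. If either test fails, the analyst must seek data
    from comparable federally financed projects in the same locality.
    """
    response_rate_ok = usable_responses / total_surveyed >= 0.25
    key_classes_ok = key_classes_with_data >= key_classes_required / 2
    return response_rate_ok and key_classes_ok
```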
If an analyst has no data on occupations identified by Labor as key classifications of workers for the type of construction being surveyed, Labor’s procedures require him or her to call all the subcontractors included in the survey who do that type of work and from whom data are missing, to try to get data. If the analyst still cannot obtain sufficient data on at least one-half of the required key classes, consideration must be given to expanding the scope of the survey geographically to get more crafts represented. If the overall usable response rate for the survey is 25 percent or more, data on three workers from two contractors are sufficient to establish a wage rate for a key occupation. After the survey cutoff date, when all valid data have been recorded and tabulated, the final survey response rate is computer-generated. Typically, it takes a WHD analyst 4 months to conduct a survey. Once all the valid project data have been entered, the prevailing wage rate for each classification of worker can be generated by computer. If there is a majority of workers paid at a single rate in a job classification, that rate prevails for the classification. The wage rate needs to be the same, to the penny, to constitute a single rate. If there is no majority paid at the same rate for a particular classification, a weighted average wage rate for that occupation is calculated. The prevailing wage rate for each occupation is compiled in a computer-generated comprehensive report for each survey, called the Wage Compilation Report, form WD-22. The WD-22 lists each occupation and the wage rate recommended for that occupation by the regional office. The form indicates whether the rate is based on a majority or a weighted average, and provides the number of workers for which data were used to compute each wage rate. The regional offices transmit survey results to the national office, which reviews the results and recommends further action if needed. 
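The majority-or-weighted-average rule described above, with rates matched to the penny, can be sketched as follows. This is an illustration of the stated rule, not Labor's wage-compilation software.

```python
from collections import Counter

def prevailing_rate(worker_rates):
    """Compute a prevailing rate for one job classification.

    worker_rates: iterable of (hourly_rate, worker_count) pairs. If a
    single rate -- identical to the penny -- covers a majority of workers,
    that rate prevails; otherwise a weighted average across all workers is
    used, as described in the process above.
    """
    counts = Counter()
    for rate, n in worker_rates:
        counts[round(rate, 2)] += n
    total = sum(counts.values())
    top_rate, top_count = counts.most_common(1)[0]
    if top_count > total / 2:
        return top_rate  # majority rate prevails
    # no majority: weighted average over all workers
    return round(sum(r * c for r, c in counts.items()) / total, 2)
```

For instance, 6 of 10 workers at $20.00 makes $20.00 prevail, while a 5-to-5 split between $20.00 and $22.00 yields a weighted average of $21.00.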
When all its recommendations have been acted upon, the national office issues the wage determination. These determinations are final. There is no review or comment period provided to interested parties before they go into effect. Access to wage determinations is provided both in printed reports available from the U.S. Superintendent of Documents and on an electronic bulletin board. Modifications to general wage determinations are published in the Federal Register. An interested party may seek review and reconsideration of Labor’s final wage determinations. The national office and the regional offices accept protests and inquiries relating to wage determinations at any time after a wage determination has been issued. The national office refers all the complaints it receives to the relevant regional offices for resolution. Most inquiries are received informally by telephone, although some are written complaints. Regional office staff said that a majority of those with concerns appear to have their problems resolved after examining the information (collected on a form WD-22a) for the survey at issue, because they do not pursue the matter further. If an examination of the forms does not satisfy the complainant’s concerns, the complainant is required to provide information to support his or her claim that a wage determination needs to be revised. The national office modifies published wage determinations in cases in which regional offices, on the basis of evidence provided, recommend that it do so, such as when it has been shown that a wage determination was the result of an error by the regional office. However, some of those who seek to have wage rates revised are told that a new survey will be necessary to resolve the particular issue that they are concerned about. 
For example, if the wage rates of one segment of the construction industry were not adequately reflected in survey results because of a low rate of participation in the survey by that segment of the industry, a new survey would be necessary to resolve this issue. Those who are not satisfied with the decision of the regional office may write to the national office to request a ruling by Labor’s WHD Administrator. If the revision of a wage rate has been sought and denied by a ruling of Labor’s WHD Administrator, an interested party has 30 days to appeal to the Administrative Review Board for review of the wage determination. The board consists of three members appointed by the Secretary of Labor. The Solicitor of Labor represents WHD in cases involving wage determinations before the Administrative Review Board. A petition to the board for review of a wage determination must be in writing and accompanied by supporting data, views, or arguments. In reviewing Labor’s process of sampling wage data for verification, we identified problems with its sampling methodology that are primarily technical in nature. Specifically, although the Congress directed that Labor use a random sample in selecting wage data to verify, and Labor describes its sample as being “random,” the selection method used does not meet the criteria for randomness. Randomness would require that each WD-10 have an equal opportunity of being selected. Although Labor’s systematic sample does not target any specific wage data for verification, it fails this test because not all wage submissions have a chance of being selected: Labor organizes WD-10s by project and by contractor prior to selection and then selects them at a fixed interval, and it does not require that the first WD-10 selected be based on a number chosen purely by chance. 
For example, to select a sample for telephone verification of data submitted by contractors, Labor procedures direct the wage analyst to select the 50th WD-10, then the 100th, and so on. However, because the data are organized prior to selection, the first 49 WD-10s are predetermined on the basis of the specific project and contractor involved. Therefore, the WD-10s for those projects and contractors do not have any chance of being selected for verification. Labor officials in the national office told us that because of this, they do not know whether they have selected enough data for telephone and on-site verification to ensure the accuracy of the data used, or whether they have selected more data than needed and are wasting resources. As a result, they do not know the extent to which data used to calculate wage rates have been verified, if at all. For example, using the hypothetical wage data forms in figure 2, Labor would know the number of wage forms it had selected but would not know whether it was verifying wages for drywallers, electricians, or painters. If Labor had selected only one of the two wage data forms for verification, it would disregard the fact that one form reported wages for 80 drywallers and the other form reported wages for 2; it would merely report that it had verified 50 percent of the WD-10s. In one of the on-site audit reports we examined, although Labor sampled 42 percent of WD-10s, the on-site auditor reviewed 28 percent of workers’ wages and fringe benefits out of all wage data submitted for workers employed in that geographic area (390 out of 1,369). This resulted in a review of 100 percent of data used in calculating prevailing wage determinations for job classes such as stone masons, and no verification of data used in other job classes, such as painters. 
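The difference between the fixed-interval selection described above and a systematic sample with a random starting point can be shown directly. This is an illustration of the sampling flaw, not Labor's selection procedure.

```python
import random

def fixed_interval_sample(forms, interval=50):
    """Selection as described in Labor's procedures: take the 50th,
    100th, ... form from the pre-organized list. The first interval-1
    forms can never be selected."""
    return forms[interval - 1::interval]

def random_start_sample(forms, interval=50, rng=random):
    """A systematic sample with a randomly chosen starting point: every
    form has an equal chance of being selected."""
    start = rng.randrange(interval)
    return forms[start::interval]
```

With 100 forms and an interval of 50, the fixed-interval method can only ever pick forms 50 and 100; the random-start variant still picks two forms, but any form can appear in the sample.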
To select a random sample that would ensure the accuracy of the data used to determine prevailing wage rates, Labor would have to sample workers within each job classification rather than sample wage data forms as it does now, and it would have to select a sufficient number of workers within each classification. We calculated the sample size required for a statistically representative sample in order to be within 50 cents per hour of the correct wage for one area survey (see table IV.1). The table shows that Labor would need to select a sample size equal to or close to the total number of workers, because the number of workers reported by job classification can be small. [Table IV.1, “Number of additional workers for whom data need to be verified,” lists the required sample by job classification, such as Sheetmetal Worker - Metal Bldg.]
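The sample-size requirement behind the table IV.1 discussion can be illustrated with a standard formula incorporating a finite-population correction. The report does not spell out its calculation, so both the formula and the assumed wage standard deviation below are illustrative assumptions; the point is only that small job classifications force a sample close to the full population.

```python
import math

def required_sample_size(workers_in_class, wage_std_dev, margin=0.50, z=1.96):
    """Sample size needed to estimate a mean wage within `margin` dollars
    at roughly 95 percent confidence (z = 1.96), with a finite-population
    correction for small job classifications. Illustrative only -- the
    report's actual calculation is not given.
    """
    n0 = (z * wage_std_dev / margin) ** 2         # infinite-population size
    n = n0 / (1 + (n0 - 1) / workers_in_class)     # finite-population correction
    return math.ceil(n)
```

Assuming a $2.00 standard deviation in wages, a classification with only 10 reported workers requires data on 9 of them, while a very large classification requires only about 62 workers regardless of size.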
Pursuant to a legislative requirement, GAO reviewed the: (1) Department of Labor's response to the House Appropriations Committee's directive that it verify a random sample of employers' wage data submissions and select a sample of submissions for on-site data verification; and (2) likely effect of these efforts on the accuracy and timeliness of Davis-Bacon Act wage determinations. GAO noted that: (1) in response to a Committee directive and GAO's recommendation, Labor has implemented a program to verify wage survey data submitted on standardized wage data forms by construction contractors and interested third parties, such as contractor associations and trade unions; (2) to verify these data, Labor has developed procedures to select samples of these forms for telephone verification that differ depending on whether the forms are submitted by contractors or third parties; (3) in addition, Labor has hired a private accounting firm to conduct on-site verification reviews; (4) as of September 30, 1998, the accounting firm had issued final reports for 9 of the 85 geographic area surveys scheduled for audit from April 1997 to June 1998 and had identified errors in wages reported in about 70 percent of the wage data forms reviewed; (5) in both the telephone and on-site verification processes, all data--regardless of the entity that submitted them--are verified only with the contractors; (6) even though Labor has identified and corrected numerous errors in the wage data submitted, its verification efforts will have limited impact on the accuracy of the wage determinations and will increase the time required to issue them; (7) specifically, errors the accounting firm identified and corrected in all nine area surveys averaged 76 cents per hour; (8) but, because Labor was only able to correct the limited number of wage data forms verified, which contain a small portion of the wage rates submitted, on average, changes to these wage determinations will be less than 10 cents per 
hour, according to Labor officials' estimates; (9) the extent to which correcting the errors found through verification will improve the accuracy of wage determinations is limited by: (a) the Committee directive to use a random sample of wage data forms for verification, given the characteristics of the wage data with respect to the universe being sampled; and (b) the procedures Labor uses to implement this directive; (10) for example, in its procedures, Labor assumes that data from contractors that refuse access to supporting documentation are correct and includes the wages in calculating wage determinations; and (11) while the time needed for verification reduced timeliness of wage determinations, telephone verification added less time to the process than did on-site verification--an estimated average of 2 weeks as compared with an average of 211 days for the 30 area surveys for which the auditor completed preliminary reports.
The speed, functionality, and accessibility that create the enormous benefits of the computer age can, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with computer operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. As public and private organizations use computer systems to transfer more and greater amounts of money, sensitive economic and commercial information, and critical defense and intelligence information, the likelihood increases that malicious individuals will attempt to penetrate current security technologies, disrupt or disable our nation’s critical infrastructures, and use sensitive and critical information for malicious purposes. In a May 2004 report, we discussed how cyber security technologies can provide a near-term solution for improving critical infrastructure security vulnerabilities. However, these technologies offer only single-point solutions by addressing individual vulnerabilities; they do not provide a complete solution. For example, firewalls can control the flow of traffic between networks but cannot protect against threats from within the network; antivirus software can provide some protection against viruses and worms but cannot protect the confidentiality of the data residing on the system. As a result, many researchers have described the use of these types of near-term solutions as being short-sighted. They argue that it is necessary to design systems with built-in security because it is difficult to deploy secure systems based on insecure components. In addition, researchers have indicated that long-term efforts are needed, such as researching cyber security vulnerabilities, developing technological solutions, and transitioning research results into commercially available products. 
Research in cyber security technology can help create a broader range of choices and more robust tools for building secure, networked computer systems. Recent cyber attacks and threats have underscored the need to strengthen and coordinate the federal government’s cyber security R&D efforts. Examples of recent attacks include the following: In November 2005, the U.S. government issued a warning about a virus disguised in an e-mail purportedly sent from the Federal Bureau of Investigation. The e-mail tells users that they have been visiting illegal Web sites and directs them to open an attachment with a questionnaire that contains a variant of the w32/sober virus. If the attachment is opened, the virus is executed. In October 2005, information security specialists reported that the Zotob worm, which had adversely affected computer networks in mid-August, had cost infected organizations an average of $97,000. Variants of the worm were capable of attacks that included logging keystrokes, stealing authentication credentials, and performing mass mailings. It was estimated that it took 61 percent of the impacted organizations more than 80 hours of work to clean up the infected systems. In March 2005, security consultants within the electric industry reported that hackers were targeting the U.S. electric power grid and had gained access to electronic control systems. Computer security specialists reported that, in a few cases, these intrusions had “caused an impact.” While officials stated that hackers had not caused serious damage to the systems that feed the nation’s power grid, the constant threat of intrusion has heightened concerns that electric companies may not have adequately fortified defenses against a potential catastrophic strike. In January 2005, a major university reported that a hacker had broken into a database containing 32,000 student and employee social security numbers, potentially compromising the identities and finances of the individuals. 
In similar incidents during 2003 and 2004, it was reported that hackers had attacked the systems of other universities, exposing the personal information of more than 1.8 million people. The number of malicious attacks has increased with the growing number of vulnerabilities. In 2000, the Software Engineering Institute’s CERT® Coordination Center (CERT/CC) received 1,090 reports of security vulnerabilities. By 2005, this number had increased more than fivefold, to 5,990. Figure 1 illustrates the number of security vulnerabilities reported from 1995 through 2005. Over the years, the federal government has taken a number of actions to improve cyber security efforts, including: publishing best practices and guidelines that assist in the planning, selection, and implementation of cyber security technologies; partnering with private sector counterparts to assess vulnerabilities and develop plans to eliminate those vulnerabilities; and awarding grants to support cyber security R&D. Research associated with enhancing the cyber security of critical infrastructures has been reinforced through federal requirements aimed at improving the nation’s cyber security posture. Additional requirements for research can be found in legislation that establishes agency responsibilities. For example, the act that establishes the Office of Science and Technology Policy gives the office the responsibility of assisting the President in providing general leadership and coordination of the research programs of the federal government. To provide a historical perspective, table 1 summarizes the key federal cyber security R&D actions that have shaped the development of the federal government’s cyber security R&D policies. Numerous federal agencies and organizations are involved in federally funded cyber security R&D. Several entities oversee and coordinate federal cyber security research; other groups support coordination on an informal basis; and multiple federal agencies fund or conduct this research.
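The CERT/CC figures cited above are easy to check; the short calculation below uses only the numbers reported in this section:

```python
# Security vulnerability reports received by CERT/CC, as cited in the text.
reports_2000 = 1_090
reports_2005 = 5_990

growth = reports_2005 / reports_2000
print(f"growth factor: {growth:.1f}x")  # about 5.5x over five years
assert growth > 5
```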
The Office of Science and Technology Policy and OMB, both in the Executive Office of the President, provide high-level oversight of federal R&D, including cyber security. The Office of Science and Technology Policy oversees the National Science and Technology Council, which prepares R&D strategies that are coordinated across federal agencies. The council operates through its committees, subcommittees, and interagency working groups, which coordinate activities related to specific science and technology disciplines. The Subcommittee on NITRD and the Interagency Working Group on Cyber Security and Information Assurance are the key entities responsible for coordinating cyber security R&D activities among federal agencies. The organization chart in figure 2 depicts the federal organizations involved. While this chart illustrates that several organizations are involved, much of the coordination for cyber security research is actually accomplished in lower-level working groups and subcommittees by subject matter experts from different agencies. Table 2 contains a brief description of the roles and responsibilities of the federal organizations and groups involved in the oversight and coordination of cyber security research. Participation by federal entities in other interagency groups provides opportunities for enhanced coordination of cyber security R&D efforts on an informal basis. The InfoSec Research Council (the Council) is a voluntary organization that is to facilitate coordination and communication of federal information security research among its members. The Council meets regularly to discuss current research projects, proposed future research initiatives, and critical information security issues. It is also responsible for producing a “hard problems list” that describes what it considers to be the most critical information security problems that, from a government perspective, should be addressed within the next 5 to 10 years.
The latest version of the hard problems list was released in November 2005 and includes problems such as addressing insider threats, building secure systems, and improving information security without sacrificing privacy. The development of the list was intended to create consensus on particularly challenging information security issues that can be addressed through federal government coordination, but the Council recognizes that its members also have their own research priorities. The Technical Support Working Group also provides a means for coordination of cyber security R&D. Under the supervision of the Departments of Defense and State, the group operates with the collaboration and voluntary participation of more than 80 federal organizations in its 10 subgroups. In fulfilling its mission to conduct the national interagency R&D program for combating terrorism, the group facilitates interagency communication by serving as a forum for developing user-based counterterrorism technology requirements across the federal government. Its Infrastructure Protection subgroup meets once a year and is responsible for identifying, prioritizing, and executing R&D projects that satisfy interagency infrastructure protection requirements, including cyber security. Research and development officials at several agencies noted that, through other informal activities, they maintained additional contact with personnel at other agencies conducting cyber security R&D. Many mentioned that they participated in other agencies’ project selection and technical review panels. For example, experts from the Department of Homeland Security served on the review panel for the National Science Foundation’s 2005 Cyber Trust program. In addition, officials noted the relatively small size of the federal cyber security research community—many of the same officials attend the coordination meetings, and a few officials within the community have worked at other agencies.
This familiarity among cyber security experts has allowed for informal knowledge sharing and communication among agencies. While there are multiple agencies involved, three agencies fund and conduct much of cyber security R&D: the National Science Foundation and the Departments of Homeland Security and Defense. In 2004, the National Science Foundation established the Cyber Trust program to complement ongoing cyber security investments in each of its core research areas: computer and networked systems, computing and communication foundations, information and intelligence systems, shared cyber infrastructure, and information technology research. In accordance with the Cyber Security Research and Development Act, the National Science Foundation awards Cyber Trust grants for projects that (1) advance the relevant knowledge base; (2) creatively integrate research and education for the benefit of technical specialists and the general populace; and (3) effectively integrate the study of technology with the policy, economic, institutional, and usability factors that often determine its deployment and use. Recent Cyber Trust grants include research in areas such as approaches to Internet security, system behavior monitoring, and information security risk management architecture. The President’s budget for fiscal year 2006 provides about $94 million to the National Science Foundation for cyber security research, education, and training. The Department of Homeland Security’s R&D efforts are aimed at countering threats to the homeland by making evolutionary improvements to current capabilities and by developing revolutionary new capabilities. The Department of Homeland Security’s cyber security R&D program resides in the agency’s Science and Technology Directorate. 
According to Department of Homeland Security officials, the cyber security R&D program was funded—out of the department’s $1 billion science and technology budget—with approximately $10 million in fiscal year 2004, $18 million in fiscal year 2005, and $17 million in fiscal year 2006. The Department of Homeland Security’s cyber security R&D activities are largely unclassified and near-term. In addition, some work is funded in partnership with the National Science Foundation. Several agencies within the Department of Defense have cyber security R&D programs. The Department of Defense’s Office of the Director, Defense Research and Engineering, provides coordination and oversight in addition to supporting some cyber security research activities directly. The office is responsible for the Department of Defense’s science and technology as well as for oversight of research and engineering. According to Department of Defense officials, its cyber security research programs totaled about $150 million in fiscal year 2005. Although the Department of Defense’s research organizations (the Office of Naval Research, Army Research Laboratory, and Air Force Research Laboratory) have cyber security programs, the largest investments within its cyber security program are with the Defense Advanced Research Projects Agency and the National Security Agency. The Defense Advanced Research Projects Agency is the central R&D organization of the Department of Defense. Its mission is to identify revolutionary, high-risk, high-payoff technologies of interest to the military—and then to support their development through transition. Its portfolio has shifted toward classified and short-term R&D, and it has the authority to award cash prizes to encourage and accelerate technical accomplishments. There are two types of offices at the agency: technology offices and systems offices. 
The technology offices focus on new knowledge and component technologies that might have significant national security applications. Systems offices focus on technology development programs leading to products that more closely resemble a specific military end-product; that is, an item that might be in the military’s inventory. One of the technology offices (the Information Processing Technology Office) and one of the systems offices (the Advanced Technology Office) focus on cyber security research and development. The National Security Agency also performs extensive cyber security research. The research is conducted and supported by its National Information Assurance Research Group. Two of the agency’s programs—the Information Systems Security Program and the Consolidated Cryptologic Program—fund the majority of its cyber security research. The research focuses on high-speed encryption and certain defense capabilities, among other things. In addition to the three primary agencies that fund or conduct cyber security R&D, other agencies, including the Department of Energy, the National Institute of Standards and Technology, and the Disruptive Technology Office, also fund or conduct this research. Nearly all of the Department of Energy’s cyber security R&D investments are directed toward short-term or military and intelligence applications. This work is conducted principally at the national laboratories. The National Institute of Standards and Technology’s cyber security research program is multidisciplinary and focuses on a range of long-term to applied R&D in the creation of security standards, guidelines, and new technologies. The National Institute of Standards and Technology’s fiscal year 2006 budget estimate for cyber security was $9.1 million.
The National Institute of Standards and Technology also receives funding from other agencies, such as the Departments of Homeland Security and Transportation and the General Services Administration, to work on projects that are consistent with its cyber security mission. For example, it is producing, for the Department of Homeland Security, the National Vulnerability Database. According to the National Institute of Standards and Technology, it is mandated under the Federal Information Security Management Act, the Cyber Security Research and Development Act, the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act (for biometrics), and OMB’s Circular A-130, to develop standards, guidelines, and tests for use by federal agencies. Under the Federal Information Security Management Act, the National Institute of Standards and Technology also conducts security research in support of future standards and guidelines. The Disruptive Technology Office supports the development of technologies to improve the information systems and networks that are used primarily by the intelligence community. Its budget for cyber security research amounts to about $17 million; one-third of this amount supports mostly unclassified academic research. However, the office typically classifies the results of this research once it is mature enough to be incorporated into tools for the intelligence community. Federal entities have taken several important steps to improve the oversight and coordination of federal cyber security R&D.
These include (1) chartering an interagency working group to focus on this type of research, (2) publishing a federal plan for cyber security and information assurance research that is to provide baseline information and a framework for planning and conducting this research, (3) separating the reporting of budget information for cyber security research from other types of research, and (4) maintaining governmentwide repositories of information on R&D projects. However, limitations exist in the development of a federal cyber security research agenda, in the federal plan, and in the population of the governmentwide repositories that, if not remedied, could diminish the effectiveness of oversight and coordination of cyber security R&D. In August 2005, the National Science and Technology Council chartered the Interagency Working Group on Cyber Security and Information Assurance. This working group succeeds the Interagency Working Group on Critical Information Infrastructure Protection, which reported to the Subcommittee on Infrastructure. The working group reports jointly to the Subcommittee on NITRD and the Subcommittee on Infrastructure. This change is to facilitate better integration of cyber security R&D with the NITRD program and reflect the broader impact of cyber security and information assurance beyond critical information infrastructure protection. According to a NITRD official, the Interagency Working Group on Cyber Security and Information Assurance was chartered in response to the February 2005 recommendation of the President’s Information Technology Advisory Committee to strengthen and integrate the working group under the NITRD program. In February 2003, the National Strategy to Secure Cyberspace was issued to provide a framework for organizing and prioritizing efforts to protect our nation’s cyberspace.
The strategy recommended that the Director of the Office of Science and Technology Policy coordinate the development of an annual federal government cyber security research agenda that includes near-term (1–3 years), mid-term (3–5 years), and long-term (5 years and longer) research for fiscal years 2004 and beyond. In April 2006, the Interagency Working Group on Cyber Security and Information Assurance released an interagency plan for cyber security research and development. The plan provides baseline information and a technical framework for coordinated multi-agency research in cyber security and information assurance. The Federal Plan for Cyber Security and Information Assurance Research and Development addresses: types of vulnerabilities, threats, and risks; analysis of recent calls for federal research and development; technical topics in cyber security and information assurance research; current technical and investment priorities of federal agencies in cyber security and information assurance research; results of technical and funding gap analyses; technical topic perspectives, including assessments of the state of the art and key technical challenges; and a summary of roles and responsibilities, by agency. According to the Interagency Working Group on Cyber Security and Information Assurance, which operates under the auspices of the Office of Science and Technology Policy and the National Science and Technology Council, the Federal Plan for Cyber Security and Information Assurance Research and Development is the first step toward developing a federal agenda for cyber security research. The plan specifies the need to develop a road map for addressing identified gaps in cyber security research, but has not committed to a date by which the road map would be developed or completed. Key activities for the development of an agenda have not been completed. For instance, mid-term and long-term cyber security research goals have not been defined.
Further, the following activities necessary for the agenda have also not been completed: (1) specifying timelines and milestones for conducting research and development activities; (2) specifying goals and measures for evaluating research and development activities; (3) assigning responsibility for implementation, including the accomplishment of the focus areas and suggested research priorities; and (4) aligning the funding priorities with technical priorities. Until a federal agenda as called for in the National Strategy to Secure Cyberspace is developed, increased risk exists that agencies will focus on their individual priorities for cyber security research and development, which may not be the most important national research priorities. Better coordination of research and development efforts will enable the most important topics to receive priority funding and resources and avoid duplication of effort. For the first time, the NITRD program, in response to the President’s Information Technology Advisory Committee recommendation to strengthen coordination, reported budget information for cyber security research separately from other types of research in its supplement to the President’s fiscal year 2007 budget. This important change was made possible with the addition of a new NITRD program component area for cyber security and information assurance. Before this addition, budget amounts for cyber security research projects were difficult to identify because they were often grouped with the non-cyber security research projects in other program component areas. Now, program member agencies are to report budget amounts for cyber security research separately. For example, the National Science Foundation, Department of Defense agencies, and National Institute of Standards and Technology, among others, reported budget amounts for cyber security and information assurance research in the NITRD Supplement to the President’s Fiscal Year 2007 Budget. 
Although the NITRD supplement included budget amounts for cyber security research, this information was limited. Budget amounts for certain cyber security research activities were reported in another NITRD program component area, and budget information on cyber security research for non-NITRD members—such as the Department of Homeland Security and elements within the Department of Energy—was not included in the supplement. However, in his February 2006 testimony before the House Committee on Science, the former Department of Homeland Security Under Secretary for Science and Technology testified that the science and technology division of the Department of Homeland Security is now participating in NITRD. Further, in June 2006, OMB issued its annual Circular A-11 budget submission guidance, which requires that agencies submit separate budget amounts for cyber security R&D as part of their 2008 budget submissions. These new requirements should increase the visibility of federal cyber security research and could provide a mechanism for determining the total federal budget for cyber security research and development. To improve the methods by which government information is organized, preserved, and made accessible to the public, the E-Government Act of 2002 mandated that the Director of OMB (or the Director’s delegate) ensure the development and maintenance of a governmentwide repository and Web site that integrates information about federally funded R&D. The Director delegated this responsibility to the National Science Foundation.
According to the E-Government Act, the repository is to integrate information about each separate R&D task or award, including: the dates on which the task or award is expected to start and end, a brief summary describing the objective and the scientific and technical focus of the task or award, the entity performing the task or award, the amount of federal funds to be provided, and any restrictions that would prevent the sharing of information related to the task with the public. In addition, the Web site on which all or part of the repository resides is to be made available to, and be searchable by, federal agencies and non-federal entities, including the general public, and is to facilitate: the coordination of federal R&D activities; collaboration among those entities conducting federal R&D; the transfer of technology among federal agencies, and between federal agencies and non-federal entities; and access by policy makers and the public to information concerning federal R&D activities. The E-Government Act also requires agencies that fund federal R&D to provide the information needed to populate the repository in the manner prescribed by the Director of OMB. The federal government has established, and currently funds, two governmentwide repositories and Web sites for R&D information that are available to, and searchable by, federal agencies and the public: Research and Development in the United States (RaDiUS) and Science.gov. RaDiUS is a database that contains information on federally funded R&D projects. Science.gov provides information on federal research through links to science Web sites and scientific databases. The repositories generally contain the type of information about R&D tasks or awards required by the E-Government Act. Both are intended to provide the public and agencies with information about federally funded R&D activities and results. 
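The per-award fields the E-Government Act enumerates map naturally onto a simple record structure. The sketch below is illustrative only; the field names and sample entry are our own, inferred from the requirements just listed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RnDAward:
    """Minimum data the E-Government Act requires per R&D task or award."""
    start_date: str                 # date the task or award is expected to start
    end_date: str                   # date the task or award is expected to end
    summary: str                    # objective and scientific/technical focus
    performing_entity: str          # entity performing the task or award
    federal_funds: float            # amount of federal funds to be provided
    sharing_restrictions: Optional[str] = None  # restrictions on public sharing

# Hypothetical entry, loosely modeled on a Cyber Trust-style grant.
award = RnDAward(
    start_date="2004-09-01",
    end_date="2007-08-31",
    summary="Approaches to Internet security (illustrative)",
    performing_entity="Example University",
    federal_funds=400_000.0,
)
assert award.sharing_restrictions is None  # nothing bars public release
```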
However, the RaDiUS and Science.gov repositories were incomplete and not fully populated with information about all federally funded tasks and awards. Query searches for cyber security research projects on the RaDiUS repository produced limited results. For example, we found that (1) as of March 2006, all searches on RaDiUS were limited to awards that were made during or prior to fiscal year 2004, (2) searches on RaDiUS for the Department of Homeland Security did not return any cyber-related results and returned only one project when searching for all projects, and (3) searches on RaDiUS for the National Science Foundation’s Cyber Trust program produced only 8 of the 35 Cyber Trust awards listed for 2004. In addition, the Federal R&D Project Summaries database at Science.gov does not include R&D project summaries for the Departments of Homeland Security and Defense and the National Institute of Standards and Technology. As a result, the usefulness of the repositories and Web sites to facilitate the coordination of cyber security R&D activities, collaboration among researchers, and access to research information in a timely and efficient manner was limited. The governmentwide repositories were incomplete and not fully populated, in part, because OMB had not issued guidance to ensure that agencies had provided all information required for the repositories. Although OMB has issued guidance related to improving the public’s access to, and dissemination of, government information and policies for federal agency public Web sites, this guidance does not specifically address reporting information on all federally funded research and development projects to the governmentwide repositories. The E-Government Act specifies that OMB shall issue any guidance determined necessary to ensure that agencies provide all the information required by the act.
Our search query results (previously described), and the fact that research and development officials at several federal agencies were not aware of the RaDiUS repository or Web site when asked about the existence of a governmentwide repository for research and development projects, indicate that such guidance is necessary. Each of the three primary agencies that fund or conduct cyber security R&D has established technology transfer methods for sharing the results of the research. The following are examples of how each agency conducts technology transfer. The National Science Foundation essentially relies on the researcher or grantee to disseminate information about National Science Foundation-funded research. In accordance with the Bayh-Dole Act, the National Science Foundation allows grantees to retain principal legal rights to intellectual property developed under its grants. According to an agency official, the Grant Policy Manual provides grantees with the incentive to develop and disseminate inventions, software, and publications in ways that can enhance their usefulness, accessibility, and upkeep. The official stated that the National Science Foundation’s policy does not, however, reduce the responsibilities of researchers and organizations to make results, data, and collections available to the research community. It was the National Science Foundation’s expectation that grantees would share data, collections, software, and inventions, making their products widely available and useful. National Science Foundation grantees are required to submit annual and final project reports to the agency; these reports include information on dissemination activities such as publications and conferences. The Department of Homeland Security has several methods for technology transfer, such as attending conferences and workshops and working with industry in several areas to share information about emerging threats and R&D needs.
In addition, agency officials stated that their Web site is another way that they share information about R&D activities. The Department of Defense has several programs to encourage the transfer of technology information. For example, within the academic world, the Department of Defense uses published peer-reviewed journals to help facilitate information sharing. Within the classified community, research is shared among the Departments of Defense and Homeland Security and the intelligence community. The Department of Defense’s small business innovation research and small business technology transfer programs are used to encourage the transfer of information to the private sector. In addition, every Armed Service research laboratory has a technology transfer office. While technology transfer exists within the Department of Defense, there are instances in which the department does not want research information to be available to the public because the information could expose organizational and technological vulnerabilities. Several federal entities led by the Office of Science and Technology Policy and OMB are involved in overseeing, coordinating, funding, or conducting cyber security R&D. These entities have acted to enhance the oversight and coordination of federal cyber security R&D, including the formation of an interagency working group that developed a federal plan to provide a baseline of information and a technical framework for coordinated multi-agency R&D in cyber security and information assurance. However, key elements of the federal research agenda called for in the National Strategy to Secure Cyberspace have not been developed, thereby increasing the risk that mid- and longer-term research priorities may not be achieved.
Without sufficient guidance on reporting R&D information for governmentwide repositories, the repositories cannot be fully populated with data on all cyber security research projects, diminishing their usefulness for coordinating research activities and facilitating technology transfer of research results. Until these issues are addressed, federal research for cyber security and information assurance may not keep pace with the increasing number of threats and vulnerabilities. To strengthen cyber security research and development programs, we recommend that the Director of the Office of Science and Technology Policy take the following action: Establish firm timelines for the completion of the federal cyber security R&D agenda that includes near-term, mid-term, and long-term research. Such an agenda should include the following elements: timelines and milestones for conducting research and development activities; goals and measures for evaluating research and development activities; assignment of responsibility for implementation, including the accomplishment of the focus areas and suggested research priorities; and the alignment of funding priorities with technical priorities. We also recommend that the Director of the Office of Management and Budget implement the following action: Issue guidance to agencies on reporting information about federally funded cyber security R&D projects to the governmentwide repositories. A Senior Policy Analyst in the Office of Science and Technology Policy provided technical comments on a draft of this report, but did not comment on our recommendation that the office establish timelines for the completion of the federal cyber security R&D agenda. We have considered and incorporated the technical comments into the report as appropriate. 
In providing oral comments on a draft of this report, OMB officials stated that OMB’s August 2006 Fiscal Year 2006 E-Government Act Reporting Instructions require agencies that fund federal R&D activities to describe how they fulfill their responsibilities under section 207(g) of the E-Government Act, including how their R&D information is available through RaDiUS, Science.gov, or other means. The officials stated that after reviewing the agencies’ reports and other information, they will consider whether specific guidance is necessary to further ensure that agencies provide all R&D information as required under section 207(g) of the E-Government Act. In addition, they were concerned with the report’s limited scope—cyber security R&D—and stated that the requirement to specify and report cyber security as a separate category of R&D is a recent change and therefore might bias the report’s findings. We acknowledge that the scope of our review was limited to cyber security R&D, which is why we limited the scope of our findings and recommendations to cyber security R&D. The recent change in reporting requirements relates to the reporting of budgetary information and does not affect our finding on reporting project information to the central repositories. The National Science Foundation and the National Institute of Standards and Technology provided technical comments, which we have incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Directors of the Office of Science and Technology Policy, OMB, and the National Science Foundation; to the Secretaries of the Departments of Homeland Security and Defense; and to other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or members of your staff have questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Keith A. Rhodes at (202) 512-6412. We can also be reached by e-mail at [email protected] and [email protected], respectively. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Our objectives were to identify the (1) federal agencies that are involved with cyber security research and development (R&D); (2) actions taken to improve oversight and coordination of cyber security research and development, including the development of a federal research agenda; and (3) methods used for technology transfer at the agencies with significant activities in cyber security research and development. To identify which agencies are involved in federal cyber security R&D, we researched a key report on cyber security R&D from the President’s Information Technology Advisory Committee. We also analyzed relevant federal law and policy, including the Cyber Security Research and Development Act, the National Strategy to Secure Cyberspace, and Homeland Security Presidential Directive 7; we also reviewed our prior reports. We then reviewed budget documents from the Subcommittee on Networking and Information Technology Research and Development (NITRD) to determine the key agencies that fund and conduct cyber security R&D. To identify actions taken to improve oversight and coordination of federal cyber security R&D, including the development of a governmentwide research agenda, we interviewed officials at the National Science Foundation, the National Institute of Standards and Technology, the National Security Agency, the Departments of Defense and Homeland Security, the Subcommittee on NITRD, the Technical Support Working Group, the Office of Science and Technology Policy, and the Infosec Research Council. 
We also reviewed NITRD budgetary documents, examined federal policy, reviewed Office of Management and Budget reports and guidance, and observed meetings and reviewed meeting agendas and minutes to determine the extent of coordination for federal cyber security R&D. To evaluate the development of a governmentwide research agenda, we reviewed the National Strategy to Secure Cyberspace to determine the requirements for the annual federal cyber security R&D agenda and compared them to the Federal Plan for Cyber Security and Information Assurance Research and Development issued by the Interagency Working Group on Cyber Security and Information Assurance. To evaluate the completeness of the RaDiUS repository, in March 2006, we executed search queries on “cybersecurity”, “cyber security”, “cyber”, “cyber trust”, and “information assurance” to determine whether the database contained cyber-related program data for the federal agencies. To evaluate the completeness of the Science.gov repositories, in August and September 2006, we executed search queries on “cybersecurity”, “cyber security”, and “information assurance” to determine whether the repositories contained cyber-related program data for the federal agencies. We compared the results to the list of cyber projects provided by the individual agencies. We did not validate the data returned with the agencies conducting cyber security research. In addition, we analyzed relevant laws, including the E-Government Act of 2002, and interviewed officials at the National Science Foundation, the National Institute of Standards and Technology, the National Security Agency, and the Departments of Defense and Homeland Security to evaluate the completeness of the two mandated governmentwide repositories. 
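The completeness check described above can be sketched in code. This is a simplified, hypothetical model only: the record format, project names, and matching logic below are invented for illustration, and the actual repositories (RaDiUS and science.gov) were queried through their own search interfaces rather than anything like this.

```python
# Hedged sketch of the completeness check: run keyword queries against a
# repository's records, then compare the hits with a project list supplied
# by the agency. All data structures here are hypothetical.

QUERIES = ["cybersecurity", "cyber security", "information assurance"]

def search(records, query):
    """Return titles of repository records matching a keyword query."""
    q = query.lower()
    return {r["title"] for r in records
            if q in (r["title"] + " " + r["abstract"]).lower()}

def completeness_gap(records, agency_projects):
    """Projects the agency reports funding that no query surfaces."""
    found = set()
    for q in QUERIES:
        found |= search(records, q)
    return sorted(set(agency_projects) - found)

# Illustrative data only.
records = [
    {"title": "Cyber Trust Program", "abstract": "cyber security research"},
    {"title": "Protein Folding Grid", "abstract": "biology computation"},
]
agency_projects = ["Cyber Trust Program", "Secure Routing Initiative"]
print(completeness_gap(records, agency_projects))  # ['Secure Routing Initiative']
```

A nonempty result is the kind of gap the review found: agency-reported projects that the repository queries did not return.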
To identify methods used for technology transfer at the agencies with significant cyber security research activities, we identified the agencies and other groups that have responsibility for management and oversight of federal cyber security R&D, interviewed officials at these agencies to determine their methods for technology transfer, and reviewed agency policies on technology transfer. We also analyzed relevant laws, including the Bayh-Dole Act. We conducted our work from August 2005 through August 2006 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Kristi Dorsey, Nalani Fraser, Nancy Glover, Richard Hung, Anjalique Lawrence, and Suzanne Lightman were key contributors to this report.
|
Research and development (R&D) of cyber security technology is essential to creating a broader range of choices and more robust tools for building secure, networked computer systems in the federal government and in the private sector. The National Strategy to Secure Cyberspace identifies national priorities to secure cyberspace, including a federal R&D agenda. GAO was asked to identify the (1) federal entities involved in cyber security R&D; (2) actions taken to improve oversight and coordination of federal cyber security R&D, including developing a federal research agenda; and (3) methods used for technology transfer at agencies with significant activities in this area. To do this, GAO examined relevant laws, policies, budget documents, plans, and reports. Several federal entities are involved in federal cyber security research and development. The Office of Science and Technology Policy and OMB establish high-level research priorities. The Office of Science and Technology Policy is to coordinate the development of a federal research agenda for cyber security and oversee the National Science and Technology Council, which prepares R&D strategies that are to be coordinated across federal agencies. The Council operates through its committees, subcommittees, and interagency working groups, which oversee and coordinate activities related to specific science and technology disciplines. The Subcommittee on Networking and Information Technology Research and Development and the Cyber Security and Information Assurance Interagency Working Group are prominently involved in the coordination of cyber security research. In addition, other groups provide mechanisms for coordination of R&D efforts on an informal basis. The National Science Foundation and the Departments of Defense and Homeland Security fund much of this research. 
Federal entities have taken several important steps to improve the oversight and coordination of federal cyber security R&D, although limitations remain. Actions taken include chartering an interagency working group to focus on cyber security research, publishing a federal plan for guiding this research, reporting budget information for this research separately, and maintaining repositories of information on R&D projects. However, a federal cyber security research agenda has not been developed as recommended in the National Strategy to Secure Cyberspace and the federal plan did not fully address certain key elements. Further, the repositories do not contain information about all of the federally funded cyber security research projects in part because OMB had not issued guidance to ensure that agencies provided all information required for the repositories. As a result, information needed for oversight and coordination of cyber security research activities was not readily available. Federal agencies use a variety of methods for sharing the results of cyber security research with federal and private organizations (technology transfer), including sharing information through agency Web sites. Other methods include relying on the researcher to disseminate information about his or her research, attending conferences and workshops, working with industry to share information about emerging threats and research, and publishing journals to help facilitate information sharing.
|
When Social Security was enacted in 1935, the nation was in the midst of the Great Depression. About half of the elderly depended on others for their livelihood; roughly one-sixth received public charity. Many had lost their savings in the stock market crash. Social Security was created to help ensure that the elderly had adequate income and did not depend on welfare. It would provide benefits that workers had earned with their contributions and the help of their employers. In creating Social Security, the Congress recognized an immediate need to bolster the income of the elderly; an individual retirement savings approach would not have significantly affected retirement income for years to come. The Social Security benefits that early beneficiaries received significantly exceeded their contributions, but even the very first beneficiaries had made some contributions. The Social Security Act of 1935 included a companion welfare program to help the elderly who had not earned retired worker benefits under Social Security. Initially, very few of the elderly qualified for Social Security benefits; therefore, funding the benefits required relatively low payroll taxes. Increases in payroll taxes were always anticipated to keep up with the benefit payments as the system “matured” and more retirees received benefits. From the beginning, Social Security was financed on this type of “pay-as-you-go” basis, with any one year’s revenues collected primarily to fund benefits to be paid that year. The Congress had rejected the idea of “advance funding” for the program, or collecting enough revenues to cover future benefit rights as workers accrued them. Many feared that if the federal government amassed huge reserve funds, it would just find a way to spend them elsewhere. Over the years, the size and scope of the program have changed. In 1939, coverage was extended to dependents and survivors. In the 1950s, state and local governments were given the option of covering their employees. 
The Disability Insurance program was added in 1956. The Medicare program was added in 1965. Beginning in 1975, benefits were automatically tied to the Consumer Price Index to ensure that the purchasing power of recipients’ income was not eroded by inflation. These benefit expansions also contributed to higher payroll tax rates. Today, Social Security has met the goal of its creators in that it provides the foundation for retirement income. In 1994, about 91 percent of all elderly households received Social Security benefits, compared with 67 percent who received some income from saved assets, just over 40 percent who had income from pensions, and 21 percent who had earned income. Social Security contributed over 40 percent of all elderly income, compared with about 18 percent each for the other sources. It provided the predominant share of income for the lowest three-fifths of the U.S. income distribution. On average, Social Security provided $9,200 to all elderly households. The other sources of retirement income determine which households have the highest income. Social Security has contributed substantially to reducing poverty rates for the elderly, which declined dramatically from 35 percent in 1959 to under 11 percent in 1996. In comparison, 11.4 percent of those aged 18 to 64 and 20.5 percent of those under 18 were in poverty in 1996. For over half the elderly, income other than Social Security amounted to less than the poverty threshold in 1994. Still, pockets of poverty do remain. About 30 percent of elderly households are considered poor or nearly poor, having incomes below 150 percent of the poverty threshold. Women, unmarried people, and people aged 75 and over are much more likely to be poor than are other elderly persons. In fact, unmarried women make up over 70 percent of poor elderly households, compared with only 45 percent of all elderly households. 
In the United States, the elderly population (those aged 65 and older) grew from about 9 million in 1940 to about 34 million in 1995, and it is expected to reach 80 million by 2050, according to Bureau of the Census projections. Moreover, the very old population (those aged 85 and older) is expected to increase almost fivefold, from about 4 million in 1995 to nearly 19 million in 2050. (See fig. 1.) As a share of the total U.S. population, the elderly population grew from 7 percent in 1940 to 12 percent in 1990; this share is expected to increase to 20 percent by 2050. Although the baby-boom generation will greatly contribute to the growth of the elderly population, other demographic trends are also important. Life expectancy has increased continually since the 1930s, and further improvements are expected. In 1940, 65-year-old men could expect to live another 12 years, and women could expect to live another 13 years. By 1995, these numbers had improved to 15 years for men and 19 for women. By 2040, these numbers are projected to be 17 years and 21 years, according to SSA’s intermediate actuarial assumptions. Note that these assumptions yield a lower rate of elderly population growth than do the Census assumptions. Some demographers project even more dramatic growth. A falling fertility rate is the other principal factor in the growth of the elderly’s share of the population. The fertility rate was 3.6 children per woman in 1960. The rate has declined to around 2.0 children per woman today and is expected to level off at about 1.9 by 2020, according to SSA’s intermediate assumptions. Increasing life expectancy and falling fertility rates in combination mean that fewer workers will be contributing to Social Security for each aged, disabled, dependent, or surviving beneficiary. There were 3.3 workers for each Social Security beneficiary in 1995, but by 2030, only 2.0 workers are projected for each beneficiary (see fig. 2). 
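The worker-to-beneficiary ratio above translates directly into per-worker cost. On a pure pay-as-you-go basis, each worker's share of a fixed average benefit scales inversely with that ratio; the arithmetic below uses the ratios quoted in the text and, as an illustrative benefit level, the $9,200 average annual benefit cited earlier for 1994.

```python
# Back-of-the-envelope pay-as-you-go arithmetic: per-worker cost is the
# average benefit divided by the worker-to-beneficiary ratio.

def per_worker_cost(avg_benefit, workers_per_beneficiary):
    return avg_benefit / workers_per_beneficiary

benefit = 9200  # average annual benefit cited for 1994, in dollars
cost_1995 = per_worker_cost(benefit, 3.3)  # 3.3 workers per beneficiary in 1995
cost_2030 = per_worker_cost(benefit, 2.0)  # 2.0 projected for 2030

print(round(cost_1995), round(cost_2030), round(cost_2030 / cost_1995, 2))
# 2788 4600 1.65
```

Holding benefits constant, the projected drop from 3.3 to 2.0 workers per beneficiary raises each worker's share by about 65 percent, which is the core of the financing pressure described in the next paragraph.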
These demographic trends have fundamental implications for Social Security, other forms of retirement income, and our economy as a whole. Increasing longevity means that each year more people will receive Social Security benefits. As a result, Social Security revenues must be increased, benefits must be reduced, or both. For pensions and retirement savings, increasing longevity means these income sources will have to provide income over longer periods, which will similarly require increased contributions or reduced retirement income. As already noted, there will be relatively fewer workers to pay the Social Security taxes needed to fund benefits. However, more fundamentally, unless retirement patterns change, there will be relatively fewer workers producing the goods and services that both workers’ households and elderly households will consume. Yet in recent years, workers have been retiring earlier, not later, and not always by choice. These demographic trends also pose challenges for our long-term budget outlook. They will lead to higher costs in Medicare and Medicaid as well as in Social Security. In a recent report to the Chairmen of the Senate and House Budget Committees, we discussed the results of our latest simulations of the long-term budget outlook. Recent congressional action to bring about a balanced budget and surplus in the next 10 years will give us some breathing room, but spending pressures in these programs, if left unchecked, will prompt the emergence of unsustainable deficits over the longer term. These demographic trends pose long-term financing challenges for both Social Security and the federal budget. Social Security revenues are expected to be about 14 percent less than expenditures over the next 75 years, and demographic trends suggest that this imbalance will grow over time. In 2029, the Social Security trust funds are projected to be depleted. 
From then on, Social Security revenues are expected to be sufficient to pay only about 70 to 75 percent of currently promised benefits, given currently scheduled tax rates and SSA’s intermediate assumptions about demographic and economic trends. In 2031, the last members of the baby-boom generation will reach age 67, when they will be eligible for full retirement benefits under current law. While Social Security funds are expected to be sufficient to pay full benefits for more than 30 years, Social Security’s financing will begin having significant implications for the federal budget in only 10 years. Moreover, restoring Social Security’s long-range financial balance would not necessarily address the significant challenge that its current financing arrangements pose for the overall federal budget. Social Security cash revenues currently exceed expenditures by roughly $30 billion each year (see fig. 3). Under current law, the Department of the Treasury issues interest-bearing government securities to the trust funds for these excess revenues. In effect, Treasury borrows Social Security’s excess revenues and uses them to help reduce the amount it must borrow from the public. In other words, Social Security’s excess revenues help reduce the overall, or unified, federal budget deficit. Moreover, the trust funds earned $38 billion in interest last year, which Treasury pays by issuing more securities. If Treasury could not borrow from the trust funds, it would have to borrow more in the private capital market and pay such interest in cash to finance current budget policy. Ten years from now, these excess cash revenues are expected to start diminishing, and so will their effect in helping balance the budget. In just 15 years, Social Security’s expenditures are expected to exceed its cash revenues, and the government’s general fund will have to make up the difference, in effect repaying Social Security. 
As a result, Social Security’s cash flow will no longer help efforts to balance the budget but will start to hinder them. In 2028, repayments from the general fund to Social Security are expected to reach about $183 billion in 1997 dollars. In that year, this amount would equal the same share of gross domestic product as the deficit for the entire federal government in fiscal year 1996, or 1.4 percent, according to SSA projections. Restoring Social Security’s long-term financial balance will require some combination of increased revenues and reduced expenditures. A variety of options is available within the current structure of the program. However, some proposals would go beyond restoring financial balance and fundamentally alter the program structure. These more dramatic changes attempt to achieve other policy objectives as well. The current Social Security system attempts to balance two competing policy objectives. The progressive benefit formula tries to ensure the “adequacy” of retirement income by replacing a higher portion of lower earners’ income than of higher earners’ income. In effect, Social Security redistributes income from higher earners to lower earners. At the same time, the formula attempts to maintain some degree of “equity” for higher earners by providing that benefits increase somewhat with earnings. Within the current program structure, a wide range of options is available for reducing costs or increasing revenues. Previously enacted reforms have used many of these in some form. Current reform proposals also rely, at least in part, on many of these more traditional measures, regardless of whether the proposals largely preserve the current program structure or alter it significantly. 
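The trust-fund mechanics described above can be illustrated with a toy model: annual cash surpluses are credited to the fund as interest-bearing securities, and once expenditures outgrow revenues the fund is drawn down until it is depleted. Every number below is invented for illustration; none are SSA projections.

```python
# Toy pay-as-you-go trust-fund model (illustrative inputs only): project
# the fund balance forward year by year and report the year it runs out.

def depletion_year(start_year, balance, revenue, expenditure,
                   revenue_growth, expenditure_growth, interest_rate):
    year = start_year
    while balance > 0:
        # Surplus (or deficit) plus interest credited on the balance.
        balance += revenue - expenditure + interest_rate * balance
        revenue *= 1 + revenue_growth
        expenditure *= 1 + expenditure_growth
        year += 1
        if year > start_year + 200:  # safety stop
            return None              # fund never depletes under these inputs
    return year

# Expenditures growing faster than revenues eventually exhaust the fund,
# even though the balance keeps rising for the first several years.
print(depletion_year(1997, balance=500.0, revenue=400.0, expenditure=370.0,
                     revenue_growth=0.04, expenditure_growth=0.06,
                     interest_rate=0.05))
```

The pattern matches the dynamic in the text: the fund grows while cash surpluses last, then interest income only delays, not prevents, depletion once the expenditure line crosses the revenue line.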
Ways to reduce program expenditures include reducing initial benefits by changing the benefit formula for all or some beneficiaries, for example, by changing the number of years of earnings used in the formula; raising the retirement age or accelerating the currently scheduled increase; lowering the annual automatic cost-of-living adjustment; and means-testing benefits, or limiting benefits on the basis of beneficiaries’ other income and assets. Ways to increase revenues include increasing Social Security payroll taxes, investing trust funds in securities with potentially higher yields than the government bonds in which they are currently invested, and increasing income taxes on Social Security benefits. A variety of proposals would address Social Security’s long-term funding problems by significantly restructuring the program, usually by privatizing at least a portion of it. Such proposals still essentially achieve financial balance by effectively raising revenues and reducing costs, but they do so in ways that pursue other policy objectives as well. Some of these proposals would reduce the role of Social Security and the federal government in providing retirement income and would give individuals greater responsibility and control over their own retirement incomes. These proposals often focus on trying to improve the rates of return that individuals earn on their retirement contributions and thus place greater emphasis on the equity objective. Also, some proposals focus on trying to increase national saving and on funding future Social Security benefits in advance rather than on the current pay-as-you-go basis. In this way, the relatively larger generation of current workers could finance some of their future benefits now rather than leaving a relatively smaller generation of workers with the entire financing responsibility. Moreover, the investment earnings on the saved funds would reduce the total payroll tax burden. 
Generally, privatization proposals focus on setting up individual retirement savings accounts and requiring workers to contribute to them. The accounts would usually replace a portion of Social Security, whose benefits would be reduced to compensate for revenues diverted to the savings accounts. Some privatization proposals combine new mandatory saving and Social Security benefit cuts, hoping to produce a potential net gain in retirement income. The combination of mandated savings deposits and revised Social Security taxes would be greater than current Social Security taxes, in most cases. Virtually all proposals addressing long-term financing issues would increase the proportion of retirement assets invested in the stock market or in other higher-risk investments. Some proposals call for the accounts to be managed by individuals, while others would have them managed by the government. The common objective is to finance a smaller share of retirement costs with worker contributions and a larger share of the costs with anticipated higher investment returns. In the case of individual savings accounts, workers would bear the risk of economic and market performance. Individuals with identical earning histories and retirement contributions could have notably different retirement incomes because of market fluctuations or individual investment choices. Some proposals would require retirees to purchase a lifetime annuity with their retirement savings to ensure that the savings provided income throughout their retirement. Privatization proposals raise the issue of how to make the transition to a new system. Social Security would still need revenues to pay benefits that retirees and current workers have already earned, yet financing retirement through individually owned savings accounts requires advance funding. The revenues needed to fund both current and future liabilities would clearly be higher than those currently collected. 
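The market-risk point above can be made concrete with a small sketch: two workers making identical annual contributions can end up with notably different balances depending on the sequence of returns they happen to experience. The contribution amount and return paths below are invented for illustration.

```python
# Illustrative sketch of sequence-of-returns risk in individual accounts:
# identical contributions, different return paths, different outcomes.

def account_balance(annual_contribution, returns):
    """Accumulate yearly contributions, compounding each year's return."""
    balance = 0.0
    for r in returns:
        balance = (balance + annual_contribution) * (1 + r)
    return balance

steady = [0.05] * 30             # 5 percent every year
volatile = [0.15, -0.05] * 15    # same arithmetic mean, larger swings

a = account_balance(2000, steady)
b = account_balance(2000, volatile)
print(round(a), round(b))  # identical contributions, different outcomes
```

Here the volatile path ends with a lower balance even though its arithmetic-mean return matches the steady path, because compounding depends on the geometric mean; this is one reason some proposals would require annuitization or retain a guaranteed benefit floor.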
For example, to fund the transition, one proposal would increase payroll taxes by 1.52 percent for 72 years and involve borrowing $2 trillion during the first 40 years of the transition. Privatization would also have a significant effect on the distribution of retirement income between high and low earners. The current Social Security benefit formula redistributes income and implicitly gives low earners a somewhat higher rate of return on their contributions than high earners. Privatization proponents claim that all earners would be better off under privatization, although high earners would have relatively more to gain from any increased rates of return that privatization might provide. Moreover, if workers were contributing to their own retirement savings, their contributions would not be available for redistribution as they are now. Some privatization proposals would retain some degree of Social Security coverage and therefore permit some redistribution to continue. Privatization proposals also tend to separate retirement benefits from Social Security’s survivors’ and disability benefits. In the case of death or disability before retirement, individual savings may not have been building long enough to sufficiently replace lost income. Some privatization proposals, therefore, leave these social insurance programs largely as they are now. Financing reforms could affect the nation’s economy in various ways. For example, raising the retirement age could affect the labor market for elderly workers. Also, if reforms increased national saving, they could help increase investment, which in turn could increase productivity and economic growth. Economic growth could help ease the strains of providing for a growing elderly population. However, reforms may not produce notable increases in national saving since, to some degree, any new retirement saving might simply replace other forms of individual saving. 
Moreover, any additional Social Security savings in the federal budget would add to national saving only if they were not offset by other budget initiatives. Reforms would affect other sources of retirement income and related public policies as well. For example, raising payroll taxes could affect the ability of workers to save for retirement, especially if these increases were combined with tax increases enacted to help with Medicare or Medicaid financing. Raising Social Security’s retirement age or cutting its benefit amounts could increase costs for private pensions that adjust benefits in relation to Social Security benefits. Reforms would also interact with other income support programs, such as Social Security’s Disability Insurance program or the Supplemental Security Income public assistance program. Reforms could have effects both immediately and far into the future. For example, bringing newly hired state and local government workers into the Social Security system would immediately increase revenues but would increase benefit payments only when the newly covered workers retired. However, even changes that take effect years from now can affect how workers plan now for their retirement, especially how much they choose to save. Therefore, the sooner solutions are enacted, the more time workers will have to adjust their retirement planning. Acting sooner rather than later also would mean that the funding shortfall could be addressed over a longer period at a lower annual cost. Finally, any financing reforms would implicitly have distributional effects. For example, increasing Social Security taxes would reduce the disposable income of current workers but would help sustain retirement benefits for retirees in the future. Cutting benefits instead of increasing payroll taxes would have the opposite distributional effect. Also, Social Security redistributes income from high to low earners to some degree; some reforms would change this. 
In particular, reform proposals vary considerably in their effects on specific subpopulations, some of which are at greater risk of poverty, such as older women and unmarried women. For example, since men and women have different earnings histories, life expectancies, and investment behaviors, reforms could exacerbate differences in benefits that already exist. Ensuring that Americans have adequate retirement income in the 21st century will require that the nation and the Congress make some difficult choices. Social Security has been effective in ensuring a reliable source of income in retirement and greatly reducing poverty among the elderly, and reforms will determine what role it will play in the future. The effect of reforms on other retirement income sources and on various groups within the aged population should be well understood when making reforms. Also, the demographic trends underlying Social Security’s financing problem are contributing significantly to increasing cost pressures for Medicare and Medicaid. Federal budget policy faces a profound challenge from the tremendous imbalance between these promised entitlements and the revenues currently planned to fund them. While Social Security’s financing is projected to pay full benefits until 2029, it will start to pose challenges for the federal budget much earlier—only 10 years from now. This fact illustrates the critical importance of examining how budget policy interacts with proposed reforms to Social Security and other entitlements. It also illustrates the need to act sooner rather than later. Timely policy adjustments can help us get onto a more sustainable fiscal path and may even help increase national saving and promote economic growth, which could ease the adjustments that current demographic trends will require. This concludes my testimony. I will be happy to answer any questions. Budget Issues: Analysis of Long-Term Fiscal Outlook (GAO/AIMD/OCE-98-19, Oct. 22, 1997). 
401(k) Pension Plans: Loan Provisions Enhance Participation but May Affect Retirement Income Security for Some (GAO/HEHS-98-5, Oct. 1, 1997).
Retirement Income: Implications of Demographic Trends for Social Security and Pension Reform (GAO/HEHS-97-81, July 11, 1997).
Social Security Reform: Implications for the Financial Well-Being of Women (GAO/T-HEHS-97-112, Apr. 10, 1997).
401(k) Pension Plans: Many Take Advantage of Opportunity to Ensure Adequate Retirement Income (GAO/HEHS-96-176, Aug. 2, 1996).
Social Security: Issues Involving Benefit Equity for Working Women (GAO/HEHS-96-55, Apr. 10, 1996).
Federal Pensions: Thrift Savings Plan Has Key Role in Retirement Benefits (GAO/HEHS-96-1, Oct. 19, 1995).
Social Security Retirement Accounts (GAO/HEHS-94-226R, Aug. 12, 1994).
Social Security: Analysis of a Proposal to Privatize Trust Fund Reserves (GAO/HRD-91-22, Dec. 12, 1990).
Social Security: The Trust Fund Reserve Accumulation, the Economy, and the Federal Budget (GAO/HRD-89-44, Jan. 19, 1989).
|
GAO discussed: (1) the demographic trends contributing to the social security financing problem; (2) when the problem will begin to confront the federal government; (3) the alternatives for addressing the problem; and (4) the implications of these alternatives. GAO noted that: (1) increasing life expectancy and declining fertility rates pose serious challenges not just for the social security system but also for Medicare, Medicaid, the federal budget, and the economy as a whole; (2) the aging of the baby-boom generation will simply accelerate this trend; (3) social security receives more from payroll taxes than it pays out in benefits; (4) this excess revenue is helping build substantial trust fund reserves that are projected to help pay full benefits until 2029, according to social security's intermediate projections; (5) at the same time, this excess revenue helps reduce the overall federal budget deficit but will start to taper off after 2008; (6) in 2012, social security benefit payments are projected to exceed cash revenues, and the federal budget will start to come under considerable strain as the general fund starts to repay funds borrowed from the trust funds; (7) although social security's revenues currently exceed its expenditures, revenues are expected to be about 14 percent less than total projected expenditures over the next 75 years, according to Social Security Administration projections; (8) a variety of benefit reductions and revenue increases within the current program structure could be combined to restore financial balance; (9) some observers believe that the program structure should be reevaluated; (10) reform is necessary, and the sooner it is addressed, the less severe the necessary adjustments will be; (11) any economic growth and improvements in living standards achieved will also mitigate the strains that reform will impose; (12) any course taken will substantially affect both workers and retirees, other sources of retirement income, the income distribution, the federal budget, and even the economy as a whole; and (13) such effects should be well understood in making reforms.
|
VA physicians prescribed OTC products for veterans more than 7 million times in fiscal year 1995, accounting for almost one-fifth of all prescriptions. VA pharmacies filled these OTC prescriptions over 15 million times, about one-fourth of all prescriptions filled. VA physicians prescribed more than 2,000 unique OTC products. VA pharmacies classify these products into three groups: medications such as antacids, medical supplies such as insulin syringes, and dietary supplements such as Ensure. Medications account for about 73 percent of the 15 million OTC prescriptions filled, medical supplies for 26 percent, and dietary supplements for less than 1 percent. VA’s network and facility directors have considerable freedom in developing operating policies, procedures, and practices for VA physicians and pharmacies. They and the pharmacies have taken a number of different actions to limit the number of OTC products available through the pharmacies and the quantity of products veterans can receive. However, little uniformity in the application of limits is evident. In general, each facility has a Pharmacy and Therapeutics Committee that decides which OTC products to provide based on product safety, efficacy, and cost effectiveness. These products are listed on a formulary and VA physicians are generally to prescribe only these products. Of the 2,000 unique OTC products dispensed systemwide, individual pharmacies typically handled fewer than 480, although the number ranged from 160 to 940 across pharmacies. Medical supplies account for the majority of unique products, with pharmacies generally dispensing fewer than 10 types of dietary supplements. However, three facilities’ formularies excluded dietary supplements. The volume of OTC products dispensed also varied among facilities. Overall, OTC products accounted for about 25 percent of all prescriptions filled systemwide. 
But OTC products represented between 7 percent and 47 percent of all prescriptions dispensed at individual facilities. Of note, fewer than 100 products were involved in more than 80 percent of the 15 million times that OTC products were dispensed. The most frequently dispensed OTC products include (1) medications such as aspirin, acetaminophen, insulin, and stool softener; (2) dietary supplements including Sustacal and Ensure; and (3) supplies such as alcohol prep pads, lancets, and chemical test strips. Facilities have sometimes restricted physicians’ prescriptions of OTC products to veterans with certain conditions or within certain eligibility categories. For example, 115 facilities restricted dietary supplements to veterans who required tube feedings or received approval for the supplement from dieticians. For medical supplies, a facility provided certain supplies only to patients who received them when hospitalized and another provided diapers only to veterans with service-connected conditions. One facility provided OTC medications only to veterans with service-connected disabilities. Facilities have sometimes restricted the quantities of OTC products that pharmacies may dispense. Twenty-eight facilities had restrictions, including limits on the quantity of OTC products dispensed within prescribed time periods or the number of times a prescription could be refilled. For example, one facility restricted cough syrup prescriptions to an 8-ounce bottle with one refill. It had similar quantity restrictions for 15 other OTC medications. Another facility had a no-refill policy for certain medical supplies, such as diapers, underpads, and bandages. The Department of Defense operates a health care system for military beneficiaries, including active duty members, retired members, and dependents. This system provides a more restricted number of OTC products than most VA facilities. 
In 1992, Defense eliminated all OTC products except for insulin from its formularies to control costs. However, more expensive prescription medications were being substituted for some OTC medications that were no longer available. Subsequently, Defense reinstated a few products to its formularies to alleviate such substitution. All beneficiaries are eligible for OTC products without a copayment. Medicare, by contrast, does not cover OTC medications for its beneficiaries. Like VA, Medicaid, at the option of the states, can cover OTC products for its low-income beneficiaries. The availability of OTC products varies by state, ranging from very few to a substantial array of products. The Federal Employees Health Benefits program offers a range of health insurance plans to federal employees and their dependents. The program requires plans to meet certain minimal standards, which include prescription medications but no OTC products, except for insulin and related supplies. Blue Cross and Blue Shield and Kaiser Permanente are two of the larger plans, and neither covers OTC products other than insulin and related supplies. Both plans require beneficiary payments, with Kaiser charging $7 for each prescription provided in its pharmacy and Blue Cross and Blue Shield requiring a $50 deductible and 15 to 20 percent of individual prescription costs, depending on whether the beneficiary has a high- or low-option plan. Finally, most private health insurers generally exclude OTC products as a benefit for participants, with a few exceptions such as insulin and insulin syringes. For example, the Group Health Cooperative of Puget Sound, in Seattle, provides insulin with a $5 copayment but no other OTC products. Before 1995, the Group Health Cooperative of Puget Sound did provide an OTC drug benefit. However, it dropped the OTC medication benefit because it found no other similar health plan that provided this benefit. 
Nationwide, VA pharmacies spent an estimated $117 million to purchase OTC products and $48 million to dispense them to veterans in fiscal year 1995. Pharmacies spent about $85 million on medications, with purchasing cost representing about two-thirds of total costs. By contrast, they spent about $74 million for medical supplies and $6 million on dietary supplements, with purchasing costs accounting for most of these costs, as shown in figure 1. Purchasing and dispensing costs differ among the product categories for two reasons. First, VA physicians generally provide veterans more prescriptions for medications than supplies, thereby causing pharmacies to handle medications more often. Second, ingredient costs of medications are generally significantly lower than the cost of medical supplies. VA recovered an estimated $7 million of these costs through veterans’ copayments. By law, unless they meet statutory exemption criteria, veterans are to pay $2 for each 30-day supply of OTC medications and dietary supplements that VA provides. Veterans’ copayments are not required for OTC products used to treat service-connected conditions. Also, veterans are exempt from the copayment requirement if they have low incomes. Our analysis of veterans’ copayments and pharmacy costs at VA’s Baltimore facility shows that copayments offset no more than 12 percent of costs for medications, dietary supplements, and medical supplies, as shown in table 1. Federal funds financed most of Baltimore’s OTC product costs. Copayments collected cover a relatively small portion of these costs, for several reasons. First, the $2 copayment collected for a 30-day supply represents only a portion of the ingredient, dispensing, and collection costs of most OTC medications and dietary supplements. Second, copayments are not required for medical supplies. Third, most veterans receiving medications and dietary supplements are exempted, and some nonexempt veterans do not pay copayments owed. 
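The cost-recovery arithmetic above can be sketched in a few lines. The dollar figures are the testimony's own systemwide estimates for fiscal year 1995; the helper function is ours:

```python
def copay_recovery_share(total_cost, copays_collected):
    """Fraction of VA's OTC product costs offset by veteran copayments."""
    return copays_collected / total_cost

# Systemwide estimates from the testimony for fiscal year 1995:
# $117 million to purchase OTC products, $48 million to dispense
# them, and $7 million recovered through veterans' copayments.
total = 117_000_000 + 48_000_000
print(f"{copay_recovery_share(total, 7_000_000):.1%}")  # 4.2%
```

The roughly 4 percent result matches the recovery rate reported elsewhere in this statement.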
For individual OTC products, veterans’ medication copayments ranged from 4 percent to more than 100 percent of VA’s costs, depending on the type of OTC product and the quantities dispensed. For example, a veteran’s medication copayment of $6 for a 90-day supply of an expensive product, such as the dietary supplement Ensure, may cover less than 5 percent of VA’s costs ($400). By contrast, a veteran’s copayment of $6 for a 90-day supply of an inexpensive medication, such as aspirin, may cover more than VA’s total cost. A variety of actions could help reduce the level of federal resources devoted to the provision of OTC products. First, if VA eligibility rules were more strictly enforced, VA pharmacies could dispense considerably fewer OTC products. Also, savings could be achieved through more efficient OTC dispensing and copayment collection processes. Finally, the Congress could expand the copayment requirements to generate additional revenues. The Congress has limited VA’s authority to provide outpatient medical care to veterans. Only veterans with service-connected conditions rated at 50 percent or higher are eligible for comprehensive outpatient care. All veterans with service-connected conditions are eligible for treatments related to those conditions; they are also eligible for hospital-related care of nonservice-connected conditions. Hospital-related care includes only outpatient services needed to (1) prepare for a hospital admission, (2) obviate the need for a hospital admission, or (3) complete treatment begun during a hospital stay. Most veterans with no service-connected conditions are eligible only for hospital-related outpatient care. VA is required to assess a veteran’s eligibility for care based on the merits of his or her unique situation each time that the veteran seeks care for a new condition. We have identified many instances in which OTC products are used for pre- and posthospitalization care. 
For example, veterans received OTC products, such as phosphate enemas, magnesium citrate, and prep kits needed for barium enemas in preparation for colonoscopies and other diagnostic tests. Following hospital stays, veterans received ostomy supplies after some surgeries, wound-care supplies, aspirin for heart surgery or angioplasties, and decongestants after sinus surgery. Under VA regulations, care provided to obviate the need for hospitalization “. . . shall be based on the physician’s judgment that the medical services to be provided are necessary to evaluate or treat a disability that would normally require hospital admission, or which, if untreated, would reasonably be expected to require hospital care in the immediate future.” In other words, VA physicians must determine that a veteran would likely need to be hospitalized soon if OTC products are not used. Some OTC products may be used to obviate the need for hospital care. For example, diabetic veterans use insulin to control their blood sugar, spinal cord and Parkinson’s patients use stool softeners to alleviate fecal impaction, veterans suffering renal failure use sodium bicarbonate tablets to balance their electrolytes, and veterans who have suffered heart attacks or strokes use aspirin to prevent secondary occurrences. However, whether many veterans’ conditions would require hospitalization in the immediate future without the use of other OTC products is not clear. Such products include antacids for heartburn, skin preparation products for dry skin, acetaminophen for arthritis pain, and cough medications for common colds. Given that VA pharmacies filled prescriptions for such products over 2 million times last year, VA facilities may have the opportunity to achieve significant cost reductions if eligibility rules are more strictly enforced. VA pharmacies could more efficiently dispense OTC products by reducing the number of times staff handle these items or restricting mail service. 
VA facilities could also reduce costs by collecting medication copayments at the time of dispensing. VA pharmacies could significantly reduce their OTC product dispensing costs of $48 million by providing more economical quantities of medications and supplies. Dispensing larger quantities would reduce the number of times that VA pharmacists fill prescriptions for OTC products, saving about $3 each time the products would have otherwise been dispensed. As previously discussed, VA physicians generally prescribe OTC products to treat acute or chronic conditions or prevent future illness. Prescriptions for acute conditions are generally for periods of 30 days or less. However, OTC products used for chronic conditions or preventive purposes are generally prescribed for longer periods. For example, in fiscal year 1995, about 1,800 veterans received aspirin at the Baltimore pharmacy in quantities sufficient for at least 6 months. VA allows pharmacies to dispense most OTC products in quantities sufficient for a 90-day supply. However, 15 pharmacies currently dispense OTC products in 30-day or 60-day supplies. Moreover, limiting pharmacies to a 90-day supply is uneconomical for certain high-volume OTC products used to treat chronic conditions or prevent illness; these products offer the clearest opportunities to reduce dispensing costs. For example, we estimate that VA’s Baltimore pharmacy could have saved over $8,000 in dispensing costs if it dispensed 180-day supplies of aspirin to certain veterans in fiscal year 1995. Assuming a prescribed usage of 1 tablet a day, this supply level of 180 tablets would be more consistent with the quantities available in local outlets, which generally range between 100 and 500 tablets. VA pharmacies could reduce dispensing costs by restricting the availability of mail service to certain situations or requiring veterans to pay shipping charges. 
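The dispensing-cost savings from larger fill quantities can be sketched as follows. The $3-per-fill figure is the testimony's estimate; the fill counts are an idealized illustration, not the actual fill patterns behind the testimony's $8,000 Baltimore estimate:

```python
def dispensing_savings(n_veterans, fills_now, fills_new, cost_per_fill=3.0):
    """Annual savings when larger supplies reduce the number of fills.

    cost_per_fill reflects the testimony's estimate that about $3 is
    saved each time a product need not be dispensed.
    """
    avoided_fills = (fills_now - fills_new) * n_veterans
    return avoided_fills * cost_per_fill

# Illustration: 1,800 veterans moved from 90-day aspirin fills
# (4 per year) to 180-day fills (2 per year).
print(dispensing_savings(1_800, 4, 2))  # 10800.0
```

Even this simplified version lands in the same range as the testimony's "over $8,000" estimate for the Baltimore pharmacy.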
Last year, VA pharmacies spent about $7.5 million mailing OTC products to veterans. VA pharmacies generally encourage veterans to use mail service when having most prescriptions for OTC products refilled. Almost all pharmacies mail OTC products, and mail service was used for almost 60 percent of the 15 million times that OTC products were dispensed last year. Some pharmacies have already transferred most of their OTC prescription refills to VA’s new regional mail service pharmacies and others will do so when additional regional pharmacies become operational. While mailing costs vary, they can be particularly high for liquid items or items that are dispensed in large packages or for long periods. For example, one facility reported that mailing a prescription of liquid antacids from the pharmacy costs $2.88 and mailing a case of adult diapers costs $17.49. Mailing costs for a year’s supply of diapers could exceed $200. Some VA facilities cited high mailing costs as one of the principal reasons for eliminating OTC products from their formularies. Several facilities have attempted to reduce mailing costs by prohibiting the mailing of certain OTC products, such as cases of liquid dietary supplements and diapers. In addition, some facilities reported switching from liquid products to powders to reduce the weight—and associated mailing costs—for particular OTC products. At some facilities, copayments owed for OTC products remained unpaid months past the end of the fiscal year. The veterans who had not paid for these products had not applied for waivers and, as a result, VA officials view them as able to pay. VA facilities incur additional administrative costs to prepare and mail bills for copayments related to OTC products. VA facilities generally send an initial bill and three follow-up bills to veterans who are delinquent in paying. 
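The mailing-cost figures above are easy to check. The $17.49 per-case rate comes from the testimony; the once-a-month shipping frequency is our assumption:

```python
def annual_mailing_cost(shipments_per_year, cost_per_shipment):
    """Yearly cost of mailing one OTC product to one veteran."""
    return shipments_per_year * cost_per_shipment

# The testimony reports $17.49 to mail a case of adult diapers;
# a case a month puts the annual mailing cost over $200.
print(round(annual_mailing_cost(12, 17.49), 2))  # 209.88
```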
However, because of the relatively small outstanding balance for most veterans, VA officials told us that they are reluctant to continue contacting nonpayers or pursue legal or other actions to collect these debts. By law, VA has the option of not providing OTC products if a veteran refuses to make a medication copayment at the time the product is dispensed. VA officials, however, told us that it is not their policy to withhold OTC products from nonpayers for this reason. Administrative costs are significant in relation to the total copayment collections. A VA-sponsored study estimated that VA facilities spend about 38 cents for every $1 collected to prepare medication copayment bills, mail them, and resolve questions. If the Baltimore facility’s costs approximate this rate, it incurred an estimated $26,000 to collect $67,000 for OTC products in fiscal year 1995. In addition, about 25 percent of the medication copayments that were billed have gone unpaid and would have required additional costs to resolve. Collecting the copayment at the time a product is dispensed could eliminate most administrative costs and increase revenues. VA facilities could adopt less generous policies for OTC products, which would be more consistent with other health plans. This could be achieved by adopting such cost-containment measures as (1) limiting the OTC products available, (2) restricting veterans’ eligibility for OTC products, or (3) limiting the quantities dispensed. As previously discussed, each hospital offers a unique assortment of OTC products. For example, the most generous OTC product benefit packages contain about 285 medications, 514 medical supplies, and 14 dietary supplements. By contrast, the least generous packages include about 124 medications, 114 medical supplies, and 4 dietary supplements. Products commonly excluded from the less generous packages include dietary supplements, such as Ensure, multiple vitamins, and mineral supplements; and medical supplies, such as ostomy products and chemical test strips. 
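The collection economics described earlier can be sketched as follows. The 38-cents-per-dollar rate is from the VA-sponsored study cited in this statement, and the $67,000 figure is the Baltimore facility's reported collection:

```python
def billing_cost(amount_collected, cost_per_dollar=0.38):
    """Administrative cost of collecting copayments through mailed
    bills, at roughly 38 cents per $1 collected."""
    return amount_collected * cost_per_dollar

# Baltimore collected about $67,000 in OTC copayments in fiscal
# year 1995; 38 cents per dollar puts the collection cost near
# the testimony's ~$26,000 estimate.
print(round(billing_cost(67_000)))  # 25460
```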
As part of VA’s ongoing reorganization, the 22 network directors have developed an unduplicated inventory of OTC products dispensed by facilities operating in the network. In general, each network’s formulary more closely approximates the more generous OTC product benefit packages available in each network rather than the less generous package. Some network directors plan to review their formularies to identify products that could be removed. Recently, 58 facilities told us that they are considering removing some OTC products from their formularies. Most are examining fewer than 10 products, although the number of products under review ranges between 1 and 205. Products most commonly mentioned include dietary supplements, antacids, diapers, aspirin, and acetaminophen. Ninety facilities are not contemplating changes at this time. Interestingly, wide disagreement exists about VA’s provision of OTC products on an outpatient basis. For example, 22 facilities suggested that all OTC products should be eliminated. By contrast, 57 suggested that all OTC products should remain available. The other 70 facilities provided no opinion regarding whether OTC products should be kept or eliminated. Many facilities pointed out that eliminating all OTC products could result in greater costs for VA health care. This is because some OTC products are relatively cheap or they help prevent significant health problems that could be expensive for VA facilities to ultimately treat. Also, facilities said that physicians may substitute higher-cost prescription medications in place of certain OTC products that would no longer be available. Facilities reported 21 OTC products, which, if removed from their formularies, would result in greater costs to VA. Those most frequently mentioned were aspirin, acetaminophen, antacids, and insulin. These facilities also reported that 14 of the 21 products had prescription substitutes. 
These include aspirin, acetaminophen, and antacids (insulin has no prescription substitute). One facility noted that although it had expected that removing OTC medications would result in a higher use of more expensive prescription medications, it had not found this to be true at its facility. As OTC products are removed from formularies, veterans will have to obtain the products elsewhere. To facilitate this, some VA facilities reported that they are using VA’s Canteen Service to provide OTC products that have been eliminated from their formularies. The Canteen Service operates stores in almost every VA facility to sell a variety of items, including some OTC products. For example, the Baltimore pharmacy has asked its Canteen Service store to stock about 13 OTC products that were recently eliminated from its formulary. The Baltimore pharmacy has already shifted most of its dispensing of dietary supplements to the store. VA Canteen Service stores do not use federal funds to operate and generally provide items at a discount, in large part because they do not have the expense of advertising. By allowing these stores to dispense OTC products, VA may reduce both dispensing and ingredient costs for its pharmacies. At the same time, VA’s Canteen Service stores can provide many veterans with a convenient and possibly less costly option for obtaining these products than would be available through other local outlets. The Congress could reduce the federal share of VA pharmacies’ costs for filling veterans’ OTC prescriptions by expanding copayment requirements. This could be achieved through (1) tightening exemption criteria, (2) requiring copayments for medical supplies, or (3) raising the copayment amount. Unlike VA, other health plans’ copayment requirements generally apply equally to all beneficiaries and for all covered products. As previously discussed, veterans’ copayments cover only 7 percent of the Baltimore pharmacy’s OTC costs. 
If the copayment remains at $2 for each 30-day supply, changes that expand the number of veterans required to make a copayment could increase veterans’ share up to 31 percent and thereby reduce the Baltimore pharmacy’s share to 69 percent. A copayment of about $9 would be needed to achieve a comparable sharing rate if existing exemptions are maintained. When the Congress established the medication copayment in 1990, it exempted veterans from copayments for OTC products provided for service-connected conditions. In 1992, the Congress exempted veterans from the copayment requirement for nonservice-connected conditions if their income was below prescribed thresholds. Service-connected veterans received about one-third of the 116,000 prescriptions filled at the Baltimore pharmacy. Of these, almost one-half had ratings of 50 percent or higher. Veterans without service-connected conditions received the remaining two-thirds and about one-half of these veterans were exempt because of income below the statutory threshold. VA officials told us that while some low-income veterans may have difficulties with copayments, most veterans did not seem to have such a problem before the 1992 enactment of the low-income exemption. The Baltimore pharmacy could have recovered an additional 7 percent of its costs if all veterans without service-connected conditions were required to make copayments for OTC products; and an additional 11 percent of its costs if veterans were required to make copayments for OTC products provided for service-connected and nonservice-connected conditions. Last month, VA’s General Counsel recommended that VA facilities should use a more restrictive income threshold, as required by the 1992 low-income exemption. Earlier, we had informed VA’s Counsel that facilities were inappropriately using the higher aid-and-attendance pension rate rather than the lower regular pension rate. Using the lower rate should allow the Baltimore facility, as well as other facilities, to collect copayments from many veterans who would not otherwise have been charged. 
When the Congress established a copayment requirement for medications and dietary supplements in 1990, it did not include a requirement for medical supplies. VA officials told us that they know of no reason why medical supplies should be treated differently from other product categories in terms of copayments. Moreover, the legislative history of this 1990 action offers no explanation for why a copayment for medical supplies was not included. Many medical supplies are provided for longer-term conditions, including diabetic and ostomy supplies or diapers for those suffering from incontinence. We estimate that the Baltimore facility could have recovered an additional 6 percent of its OTC product costs in fiscal year 1995 if veterans were required to make copayments for medical supplies used to treat nonservice-connected conditions. The Baltimore facility would need to charge a higher copayment to recover a larger share of its OTC product costs, if the exemptions and collection rates remain unchanged. For example, recoveries could be raised from 7 percent to 32 percent if the legislatively established copayment amount were $9 for a 30-day supply. However, if some changes are made to the exemptions, this target share could be achieved with a smaller increase in the copayment rate, as shown in table 2. Most VA facilities offer an OTC product benefits package that is more generous than other health plans. In addition, VA facilities provide other features, such as free OTC product mail service and deferred credit for copayments owed, that are not commonly available in other plans. As a result, VA facilities have devoted significant resources to the provision of OTC products that other plans have elected not to spend. VA facilities could reduce their pharmacy costs if existing eligibility criteria are more strictly administered for OTC products. Less than half of the veterans receiving outpatient care have service-connected conditions. 
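The copayment arithmetic in this section is linear, so the $9 figure can be reproduced directly. The sketch below uses the Baltimore figures from the testimony and assumes recovery scales proportionally with the copayment amount:

```python
def required_copay(current_copay, current_share, target_share):
    """Copayment needed to reach a target cost-recovery share,
    assuming recovery scales linearly with the copayment amount."""
    return current_copay * target_share / current_share

# Baltimore recovers about 7 percent of OTC costs with a $2
# copayment per 30-day supply; reaching about 32 percent with
# exemptions unchanged implies a copayment near $9.
print(round(required_copay(2.0, 0.07, 0.32), 2))  # 9.14

# Alternatively, expanding who pays at the $2 rate adds up to the
# 31 percent ceiling cited earlier: 7% (current) + 7% (all
# nonservice-connected) + 11% (service-connected) + 6% (supplies).
print(7 + 7 + 11 + 6)  # 31
```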
Thus, most veterans must meet the pre- and posthospitalization or obviating-the-need criterion. In our view, many veterans may be receiving OTC products for nonservice-connected conditions unrelated to a VA hospital stay or potential hospitalization. To address this, VA may need to provide better guidance to facilities to achieve an effective and consistent use of OTC products within its existing statutory authority. VA should be commended for instructing network directors to consolidate formularies. This action, which is currently in progress, has not yet achieved an adequate level of consistency or cost-containment systemwide because the networks’ current formularies approximate the more generous coverage of OTC products. Moreover, some networks are allowing facilities to have less generous coverage of OTC products than these networks’ formularies. This will likely maintain the uneven availability of OTC products. Given the disagreement among networks and facilities regarding the provision of OTC products, additional guidance may be needed to ensure that veterans have a consistent level of access to OTC products systemwide. In light of concerns about potential resource shortages at some facilities, tailoring the availability of OTC products to be more in line with those less generous facilities would seem desirable. This would essentially limit OTC products to those most directly related to VA hospitalizations or those considered most essential to obviate the need for hospitalization, such as insulin for diabetic veterans. VA facilities could also reduce their costs if they restructured OTC product dispensing and copayment collection processes. In general, most facilities handle OTC products too many times, mail products too often, and allow veterans to delay copayments too frequently. Although some facilities have adopted measures to operate more efficiently, all could benefit from doing so. 
Such changes could also give veterans an incentive to obtain from VA facilities only the OTC products that they expect to use. Finally, VA facilities have developed ways to provide OTC products to veterans outside their pharmacies at costs lower than those available through other local outlets. Some facilities have had success using Canteen Service stores to stock and sell OTC products that the facilities had removed from their formularies. This seems a reasonable alternative to providing OTC products to veterans through VA pharmacies. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or other Members may have. For more information, please call Paul Reynolds, Assistant Director, at (202) 512-7109. Walter Gembacz, Mike O’Dell, Mark Trapani, Paul Wright, Deena El-Attar, and Joan Vogel also contributed to the preparation of this statement.
|
GAO discussed the Department of Veterans Affairs' (VA) policies concerning over-the-counter (OTC) medications, medical supplies, and dietary supplements. GAO noted that: (1) VA pharmacies provide many OTC medications; (2) VA pharmacies dispensed analgesics nearly 3 million times in 1995; (3) some VA pharmacies restrict certain veterans from receiving OTC products, as well as the quantity that they can receive; (4) one-third of VA facilities reduced the number of OTC medications available; (5) network directors are working to achieve a level of consistency and cost-containment for VA facilities within their network; (6) non-VA health plans make OTC products available to all beneficiaries on a uniform basis; (7) these plans' coverage of OTC products is more restrictive than most VA facilities' coverage; (8) VA pharmacies dispensed more than 15 million OTC products last year, at an estimated cost of $165 million; (9) VA recovered 4 percent of its total dispensing costs through veterans' copayments; (10) veterans' medication costs depended on the type of product and their eligibility status; and (11) VA can reduce the resources it devotes to OTC medications by adhering to statutory eligibility rules, dispensing OTC products more efficiently, collecting copayments, reducing the number of OTC products available to outpatients, and expanding copayment requirements.
|
Statistics reported by CDC show that many high school students engage in sexual behavior that places them at risk for unintended pregnancy and STDs. In 2005, 46.8 percent of high school students reported that they have had sexual intercourse, with 14.3 percent of students reporting that they had had sexual intercourse with four or more persons. CDC also has reported that the prevalence of certain STDs—including the rate of chlamydia infection, the most frequently reported STD in the United States—peaks in adolescence and young adulthood. At the time of our 2006 report, HHS’s strategic plan included the objectives to reduce the incidence of STDs and unintended pregnancies and to promote family formation and healthy marriages. These two objectives supported HHS’s goals to reduce the major threats to the health and well-being of Americans and to improve the stability and healthy development of American children and youth. Abstinence-until-marriage education programs were one of several types of programs that supported these objectives. The State Program, the Community-Based Program, and the AFL Program provide grants to support the recipients’ own efforts to provide abstinence-until-marriage education at the local level. These programs must comply with the statutory definition of abstinence education (see table 1). The State Program, administered by ACF, provides funding to its grantees—states—for the provision of abstinence-until-marriage education to those most likely to have children outside of marriage. States that receive grants through the State Program have discretion in how they use their funding to provide abstinence-until-marriage education. Funds are allotted to each state that submits the required annual application based on the ratio of the number of low-income children in the state to the total number of low-income children in all states. 
States are required to match every $4 they receive in federal money with $3 of nonfederal money and are required to report annually on the performance of the abstinence-until-marriage education programs that they support or administer. In fiscal year 2007, 40 states, the District of Columbia, and 3 insular areas were awarded funding. The Community-Based Program, which is also administered by ACF, is focused on funding public and private entities that provide abstinence-until-marriage education for adolescents from 12 to 18 years old. The Community-Based Program provides grants for school-based programs, adult and peer mentoring, and parent education groups. For fiscal year 2007, 59 grants were awarded to organizations and other entities. Grantees are required to report to ACF, on a semiannual basis, on the performance of their programs. The AFL Program also supports programs that provide abstinence-until-marriage education. Under the AFL Program, OPA awards competitive grants to public or private nonprofit organizations or agencies, including community-based and faith-based organizations, to facilitate abstinence-until-marriage education in a variety of settings, including schools and community centers. In fiscal year 2007, OPA awarded funding to 36 grantees. Grantees are required to conduct evaluations of certain aspects of their programs and report annually on their performance. Five organizational units located within HHS—ACF, OPA, CDC, ASPE, and NIH—have responsibilities related to abstinence-until-marriage education. ACF and OPA administer the three main federal abstinence-until-marriage education programs. CDC supports abstinence-until-marriage education at the national, state, and local levels. CDC, ASPE, and NIH are sponsoring research on the effectiveness of abstinence-until-marriage programs. 
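The State Program's allotment ratio and the $4-to-$3 matching requirement described above reduce to simple arithmetic. The following sketch illustrates both rules with entirely hypothetical figures (the source does not give appropriation totals or low-income child counts):

```python
# Illustrative only: the appropriation and child counts below are hypothetical.
def state_allotment(total_appropriation, state_children, all_states_children):
    """Allot funds in proportion to a state's share of the nation's
    low-income children (the State Program's allotment ratio)."""
    return total_appropriation * state_children / all_states_children

def required_state_match(federal_award):
    """States must match every $4 of federal money with $3 of
    nonfederal money, i.e., 75 cents per federal dollar."""
    return federal_award * 3 / 4

# Hypothetical example: $50M appropriation, state has 2% of low-income children.
award = state_allotment(50_000_000, 200_000, 10_000_000)
match = required_state_match(award)
print(award, match)  # 1000000.0 750000.0
```

The match is computed from the award actually received, so a state accepting $1 million in federal funds must commit $750,000 in nonfederal money.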
In October 2006 we reported that efforts by HHS and states to assess the scientific accuracy of materials used in abstinence-until-marriage education programs had been limited. ACF—whose grants to the State and Community-Based Programs accounted for the largest portion of federal spending on abstinence-until-marriage education—did not review its grantees’ education materials for scientific accuracy and did not require grantees of either program to review their own materials for scientific accuracy. In addition, not all states funded through the State Program chose to review their program materials for scientific accuracy. In contrast to ACF, OPA reviewed the scientific accuracy of grantees’ proposed educational materials and corrected inaccuracies in these materials. As of October 2006, there had been limited efforts to review the scientific accuracy of educational materials used in ACF’s State and Community-Based Programs—the two programs that accounted for the largest portion of federal spending on abstinence-until-marriage education. ACF did not review materials for scientific accuracy either in reviewing grant applications or in overseeing grantees’ performance. Prior to fiscal year 2006, State Program and Community-Based Program applicants were not required to submit copies of their proposed educational materials with their applications. While ACF required grantees of the Community-Based Program—but not the State Program—to submit their educational materials with their fiscal year 2006 applications, ACF officials told us that grantee applications and materials were only reviewed to ensure that they addressed all aspects of the scope of the Community-Based Program, such as the A-H definition of abstinence education. Further, documents provided to us by ACF indicated that the agency did not review grantees’ educational materials for scientific accuracy as a routine part of its oversight activities. 
In addition, ACF did not require its grantees to review their own materials for scientific accuracy. While not all grantees of the State Program had chosen to review the scientific accuracy of their educational materials, officials from 5 of the 10 states in our review reported that their states chose to do so. These five states used a variety of approaches in their reviews. For example, some states contracted with medical professionals—such as nurses, gynecologists, and pediatricians—to serve as medical advisors who review program materials and use their expertise to determine what is and is not scientifically accurate. One of the states required that all statistics or scientific statements cited in a program’s materials be sourced to CDC or a peer-reviewed medical journal. Officials from this state told us that if statements in these materials could not be attributed to these sources, the statements were required to be removed until citations were provided and materials were approved. As a result of their reviews, officials from two of the five states reported that they had found inaccuracies. One state official cited an instance where materials incorrectly suggested that HIV can pass through condoms because the latex used in condoms is porous. State officials who had identified inaccuracies told us that they informed their grantees of inaccuracies so that they could make corrections in their individual programs. Some of the educational materials that states reviewed were materials that were commonly used in the Community-Based Program. While there had been limited review of materials used in the State and Community-Based Programs, grantees of these programs had received some technical assistance designed to improve the scientific accuracy of their materials. For example, ACF officials reported that the agency provided a conference for grantees of the Community-Based Program in February 2006 that included a presentation focused on medical accuracy. 
As of 2006, in contrast to ACF, OPA reviewed for scientific accuracy the educational materials used by AFL Program grantees, and it did so before those materials were used. OPA officials said that after grants were awarded, a medical education specialist (in consultation with several part-time medical experts) reviewed the grantees’ printed materials and other educational media, such as videos. OPA officials explained that the medical education specialist must approve all proposed materials before they are used. On many occasions, OPA grantees had proposed using—and therefore OPA had reviewed—materials commonly used in the Community-Based Program. For example, an OPA official told us that the agency had reviewed three of the Community-Based Program’s commonly used curricula and was also currently reviewing another curriculum commonly used by Community-Based Program grantees. OPA officials stated that the medical education specialist had occasionally found and addressed inaccuracies in grantees’ proposed educational materials. OPA officials stated that these inaccuracies were often the result of information being out of date because, for example, medical and statistical information on STDs changes frequently. OPA addressed these inaccuracies by either not approving the materials in which they appeared or correcting the materials through discussions with the grantees and, in some cases, the authors of the materials. In fiscal year 2005, OPA disapproved of a grantee using a specific pamphlet about STDs because the pamphlet contained statements about STD prevention and HIV transmission that were considered incomplete or inaccurate. For example, the pamphlet stated that there was no cure for hepatitis B, but the medical education specialist required the grantee to add that there was a preventive vaccine for hepatitis B. 
In addition, OPA required that a grantee correct several statements in a true/false quiz—including statements about STDs and condom use—in order for the quiz to be approved for use. For example, the medical education specialist changed a sentence from “The only 100% effective way of avoiding STDs or unwanted pregnancies is to not have sexual intercourse.” to “The only 100% effective way of avoiding STDs or unwanted pregnancies is to not have sexual intercourse and engage in other risky behaviors.” While OPA and some states had reviewed their grantees’ abstinence-until-marriage education materials for scientific accuracy, these types of reviews have the potential to affect abstinence-until-marriage education providers more broadly, perhaps creating an incentive for the authors of such materials to ensure they are accurate. As of October 2006, the company that produced one of the most widely used curricula used by grantees of the Community-Based Program had updated its curriculum. A representative from that company stated that this had been done, in part, in response to a congressional review that found inaccuracies in its abstinence-until-marriage materials. To address concerns about the scientific accuracy of materials used in abstinence-until-marriage education programs, we recommended that the Secretary of HHS develop procedures to help assure the accuracy of such materials used in the State and Community-Based Programs. We recommended that in order to provide such assurance, the Secretary could consider alternatives such as (1) extending the approach currently used by OPA to review the scientific accuracy of the factual statements included in abstinence-until-marriage education to materials used by grantees of ACF’s Community-Based Program and requiring grantees of ACF’s State Program to conduct such reviews or (2) requiring grantees of both programs to sign written assurances in their grant applications that the materials they propose using are accurate. 
In its written comments on a draft of our report, HHS stated that it would consider requiring grantees of both ACF programs to sign such written assurances as to the accuracy of their materials. In April 2008, an ACF official reported that, in response to our recommendation, ACF began requiring in fiscal year 2007 that community-based grantees sign written assurances that the materials they propose using are accurate. This official also reported that, starting in fiscal year 2008, grantees of the State Program will also be required to sign these written assurances. In addition, this official reported that ACF is implementing a process to review the accuracy of the proposed curricula of fiscal year 2007 Community-based grantees. The ACF official reported that the curricula will be reviewed by a research analyst to ensure that all statements are referenced to source documents, and then by a healthcare professional who will compare the information in the curricula to information in the source documents. The official also reported that, in the future, ACF will require states to provide the agency with descriptions of their strategies for reviewing the accuracy of their abstinence-until-marriage education programs. HHS, states, and researchers have made a variety of efforts to assess the effectiveness of abstinence-until-marriage education programs; however, a number of factors limit the conclusions that can be drawn. ACF and OPA have required their grantees to report on various outcomes used to measure the effectiveness of grantees’ abstinence-until-marriage education programs. To assess the effectiveness of the State and Community-Based Programs, ACF has analyzed national data on adolescent birth rates and the proportion of adolescents who report having had sexual intercourse. 
As of October 2006, other organizational units within HHS were funding studies designed to assess the effectiveness of abstinence-until-marriage education programs in delaying sexual initiation, reducing pregnancy and STD rates, and reducing the frequency of sexual activity. Despite these efforts, several factors limit the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. Most of the efforts to evaluate the effectiveness of abstinence-until-marriage education programs that we reviewed have not met certain minimum criteria that experts have concluded are necessary in order for assessments of program effectiveness to be scientifically valid, in part because such designs can be expensive and time-consuming to carry out. In addition, the results of some efforts that meet the criteria of a scientifically valid assessment have varied. ACF has made efforts to assess the effectiveness of abstinence-until-marriage education programs funded by the State Program and the Community-Based Program. One of ACF’s efforts has been to require grantees of both programs to report data on outcomes, though the two programs have different requirements for the outcomes grantees must report. As of fiscal year 2006, State Program grantees were required to report annually on four measures of the prevalence of adolescent sexual behavior in their states, such as the rate of pregnancy among adolescents aged 15 to 17 years, and compare these data to program targets over 5 years. States also were required to develop and report on two additional performance measures that were related to the goals of their programs. Also as of fiscal year 2006, ACF required Community-Based Program grantees to develop and report on outcome measures designed to demonstrate the extent to which grantees’ community-based abstinence-until-marriage education programs were accomplishing their program goals. 
In addition to outcome reporting, ACF required grantees of the Community-Based Program to report on program “outputs,” which measure the quantity of program activities and other deliverables, such as the number of participants who are served by the abstinence-until-marriage education programs. As of October 2006, OPA also had made efforts to assess the effectiveness of the AFL Program. Specifically, OPA required grantees of the AFL Program to develop and report on outcome measures, such as participants’ knowledge of the benefits of abstinence and their reported intentions to abstain from sexual activity, that were used to help demonstrate the extent to which grantees’ programs were having an effect on program participants. To collect data on outcome measures, OPA required grantees to administer, at a minimum, a standardized questionnaire to their program participants, both when participants begin an abstinence-only education program and after the program’s completion. OPA officials told us that they were planning to aggregate information from certain questions in the standardized set of questionnaires in order to report on certain performance measures as part of the agency’s annual performance reports; the agency expected to begin receiving data from grantees that were using these questionnaires in January 2007. To help grantees measure the effectiveness of their programs, both ACF and OPA required that grantees use independent evaluators and have provided assistance to grantees in support of their program evaluation efforts. ACF and OPA required their grantees to contract with third-party evaluators, such as university researchers or private research firms, who were responsible for helping grantees develop the outcome measures they were required to report on and monitoring grantee performance against those measures. 
Unlike ACF, OPA required that these third-party evaluations incorporate specific methodological characteristics, such as control groups of individuals who did not receive the program and sufficient sample sizes to ensure that any observed differences between the groups were statistically valid. Both ACF and OPA have provided technical assistance and training to their grantees in order to support grantees’ own program evaluation efforts. ACF also analyzed trends in adolescent behavior, as reflected in national data on birth rates among teens and the proportion of surveyed high school students reporting that they have had sexual intercourse. ACF used these national data as a measure of the overall effectiveness of its State and Community-Based Programs, comparing the national data to program targets. In its annual performance reports, the agency has summarized the progress being made toward lowering the rate of births to unmarried teenage girls and the proportion of students (grades 9-12) who report having ever had sexual intercourse. Some states have made additional efforts to assess the effectiveness of abstinence-until-marriage education programs. Specifically, we found that 6 of the 10 states in our review that received funding through ACF’s State Program had made efforts to conduct evaluations of selected abstinence-until-marriage programs in their state. All 6 of the states worked with third-party evaluators, such as university researchers or private research firms, to perform the evaluations, which in general measured self-reported changes in program participants’ behavior and attitudes related to sex and abstinence as indicators of program effectiveness. Four of these states had completed third-party evaluations as of February 2006, and the results of these studies varied. 
Among those 4 states, 3 states required the abstinence programs in their state to measure reported changes in participants’ behavior as an indicator of program effectiveness—both at the start of the program and after its completion. The 3 states required their programs to track participants’ reported incidence of sexual intercourse. Additionally, 2 of the 4 states required their programs to track biological outcomes, such as pregnancies, births, or STDs. In addition, 6 of the 10 states in our review required their programs to track participants’ attitudes about abstinence and sex, such as the number of participants who make pledges to remain abstinent. Besides ACF and OPA, other organizational units within HHS have made efforts to assess the effectiveness of abstinence-until-marriage education programs. As of 2006, ASPE was sponsoring a study of the Community-Based Program and a study of the State Program. The study of the State Program was conducted by Mathematica Policy Research, Inc. (Mathematica) and completed in 2007. It examined the impact of five programs funded through the State Program on participants’ attitudes and behaviors related to abstinence and sex. Like ASPE, CDC has made its own effort to assess the effectiveness of abstinence-until-marriage education by sponsoring a study to evaluate the effectiveness of two middle school curricula—one that complies with abstinence-until-marriage education program requirements and one that teaches a combination of abstinence and contraceptive information and skills. The agency expects to complete the study in 2009. Likewise, NIH has funded studies comparing the effectiveness of education programs that focus only on abstinence with the effectiveness of sex education programs that teach both abstinence and information about contraception. 
As of October 2006, NIH was funding five studies, which in general were comparing the effects of these two types of programs on the sexual behavior and related attitudes among groups of either middle school or high school students. In addition to the efforts of researchers working on behalf of HHS and states, other researchers—such as those affiliated with universities and various advocacy groups—have made efforts to study the effectiveness of abstinence-until-marriage education programs. This work includes studies of the outcomes of individual programs and reviews of other studies on the effectiveness of individual abstinence-until-marriage education programs. In general, research studies on the effectiveness of individual programs have examined the extent to which they changed participants’ demonstrated knowledge of concepts taught in the programs, declared intentions to abstain from sex until marriage, and reported behavior related to sexual activity and abstinence. As of October 2006, the efforts to study and build a body of research on the effectiveness of most abstinence-until-marriage education programs had been under way for only a few years, in part because grants under the two programs that account for the largest portion of federal spending on abstinence-until-marriage education—the State Program and the Community-Based Program—were not awarded until 1998 and 2001, respectively. Most of the efforts of HHS, states, and other researchers to evaluate the effectiveness of abstinence-until-marriage education programs included in our review have not met certain minimum criteria that experts have concluded are necessary in order for assessments of program effectiveness to be scientifically valid. 
In an effort to better assess the merits of the studies that have been conducted on the effectiveness of sexual health programs—including abstinence-until-marriage education programs—scientific experts have developed criteria that can be used to gauge the scientific rigor of these evaluations. The reports of two panels of experts, as well as the experts we interviewed in the course of our previous work, generally agreed that scientifically valid studies of a program’s effectiveness should include the following characteristics: An experimental design that randomly assigns individuals or schools to either an intervention group or control group, or a quasi-experimental design that uses nonrandomly assigned but well-matched comparison groups. According to the panel of scientific experts convened by the National Campaign to Prevent Teen Pregnancy, experimental designs or quasi-experimental designs with well-matched comparison groups have at least three important strengths that are typically not found in other studies, such as those that use aggregated data: they evaluate specific programs with known characteristics, they can clearly distinguish between participants who did and did not receive an intervention, and they control for other factors that may affect study outcomes. According to scientific experts, studies that include experimental or quasi-experimental designs should also collect follow-up data for a minimum number of months after subjects receive an intervention. In addition, experts have reported that studies should have a sample size of at least 100 individuals for study results to be considered scientifically valid. Studies should assess or measure changes in biological outcomes or reported behaviors instead of attitudes or intentions. 
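Two of the experts' criteria above are mechanical enough to sketch in code: random assignment of participants to an intervention group or a control group, and a minimum sample of 100 individuals. The following is a hypothetical illustration of those two checks, not any agency's or evaluator's actual procedure:

```python
import random

MIN_SAMPLE_SIZE = 100  # experts' minimum for a scientifically valid study

def randomize(participant_ids, seed=0):
    """Randomly split participants into intervention and control groups,
    rejecting samples below the experts' minimum size."""
    ids = list(participant_ids)
    if len(ids) < MIN_SAMPLE_SIZE:
        raise ValueError("sample too small for a valid effectiveness study")
    rng = random.Random(seed)  # seeded so the assignment is reproducible
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

intervention, control = randomize(range(120))
print(len(intervention), len(control))  # 60 60
```

Random assignment is what lets a study attribute outcome differences to the program itself; the quasi-experimental alternative mentioned above replaces the shuffle with deliberate matching of comparison groups on observed characteristics.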
According to scientific experts, biological outcomes—such as pregnancy rates, birth rates, and STD rates—and reported behaviors—such as reported initiation and frequency of sexual activity—are better measures of the effectiveness of abstinence-until-marriage programs, because adolescent attitudes and intentions may or may not be indicative of actual behavior. Many of the efforts by HHS, states, and other researchers that we identified in our review lack at least one of the characteristics of a scientifically valid study of program effectiveness. Most of the efforts to assess the effectiveness of these programs have not used experimental or quasi-experimental designs with sufficient follow-up periods and sample sizes. For example, according to ACF officials, ACF used grantee reporting on outcomes to monitor grantees’ performance, target training and technical assistance, and help grantees improve service delivery. However, because the outcomes reported by grantees have not been produced through experimentally or quasi-experimentally designed studies, such information cannot be causally attributed to any particular abstinence-until-marriage education program. Further, none of the state evaluations we reviewed that had been completed included randomly assigned control groups. Similarly, some of the journal articles that we reviewed described studies to assess the effectiveness of abstinence-until-marriage programs that also lacked at least one of the characteristics of a scientifically valid study of program effectiveness. In these studies, researchers administered questionnaires to study participants before and after they completed an abstinence-until-marriage education program and assessed the extent to which the responses of participants changed. These studies did not compare the responses of study participants with a group that did not participate in an abstinence-until-marriage education program. 
Like the lack of an experimental or quasi-experimental design, not measuring changes in behavioral or biological outcomes among participants limits the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. Most of the efforts we identified in our review used reported intentions and attitudes in order to assess the effectiveness of abstinence-until-marriage programs. For example, as of 2006, neither ACF’s community-based grantees nor OPA’s AFL grantees were required to report on behavioral or biological outcomes, such as rates of intercourse or pregnancy. Similarly, the journal articles we reviewed were more likely to use reported attitudes and intentions—such as study participants’ reported attitudes about premarital sexual activity or their reported intentions to remain abstinent until marriage—rather than their reported behaviors or biological outcomes to assess the effectiveness of abstinence-until-marriage programs. According to scientific experts, HHS, states, and other researchers face a number of challenges in applying either of these criteria. According to these experts, experimental or quasi-experimental studies can be expensive and time-consuming to carry out, and many grantees of abstinence-until-marriage education programs have insufficient time and funding to support these types of studies. Moreover, it can be difficult for researchers assessing abstinence-until-marriage education programs to convince school districts to participate in randomized intervention and control groups, in part because of sensitivities to surveying attitudes, intentions, and behaviors related to abstinence and sex. Similarly, experts, as well as state and HHS officials, have reported that it can be difficult to obtain scientifically valid information on biological outcomes and sexual behaviors. 
For example, experts have reported that when measuring a program’s effect on biological outcomes—such as reducing pregnancy rates or birth rates—it is necessary to have large sample sizes in order to determine whether a small change in such outcomes is the result of an abstinence-until-marriage education program. Among the assessment efforts we identified are some studies that meet the criteria of a scientifically valid effectiveness study. However, results of these studies varied, and this limits the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. Some researchers have reported that abstinence-until-marriage education programs have resulted in adolescents reporting having less frequent sexual intercourse or fewer sexual partners. For example, in one study of middle school students, participants in an abstinence-until-marriage education program who had sexual intercourse during the follow-up period were 50 percent less likely to report having two or more sexual partners when compared with their nonparticipant peers. In contrast, other studies have reported that abstinence-until-marriage education programs did not affect the reported frequency of sexual intercourse or number of sexual partners. For example, one study of middle school students found that participants of an abstinence-until-marriage program were not less likely than nonparticipants at the 1-year follow-up to report less frequent sexual intercourse or fewer sexual partners. Experts with whom we spoke emphasized that there were still too few scientifically valid studies completed as of 2006 that could be used to determine conclusively which, if any, abstinence-until-marriage programs are effective. We identified two key studies that experts anticipated would meet the criteria of a scientifically valid effectiveness study. 
Experts and federal officials we interviewed stated that they expected the results of these two federally funded studies to add substantively to the body of research on the effectiveness of abstinence-until-marriage education programs. One of these key studies—the final Mathematica report, contracted by ASPE, on the State Program—has been completed. In this report, the researchers found that youth who participated in the abstinence-until-marriage education programs were no more likely than control group youth to have abstained from sex, and among those who reported having had sex, they had similar numbers of sexual partners and had initiated sex at the same average age. The youth in abstinence-until-marriage education programs also were no more likely to have engaged in unprotected sex than control group youth. The second key study we identified is CDC’s research on middle school programs, which is still ongoing. In addition, since October 2006, a third key report was released, presenting the National Campaign to Prevent Teen and Unplanned Pregnancy’s 2007 analysis of the available research on abstinence-until-marriage education programs. This report stated that studies of abstinence programs have not produced sufficient evidence of effectiveness, and that efforts should be directed toward further evaluation of these programs. During the course of our work on abstinence-until-marriage education, we identified a federal statutory provision—section 317P(c)(2) of the Public Health Service Act—relevant to the grants provided by HHS’s State Program, Community-Based Program, and AFL Program. This provision requires that educational materials prepared by HHS’s grantees, among others, that are specifically designed to address STDs, contain medically accurate information regarding the effectiveness or lack of effectiveness of condoms in preventing the diseases the materials are designed to address. 
At the time of our review, an ACF official reported that materials prepared by abstinence-until-marriage education grantees were not subject to section 317P(c)(2). However, we concluded that this requirement would apply to abstinence-until-marriage education materials prepared by and used by federal grant recipients, depending upon the substantive content of those materials. In other words, in materials specifically designed to address STDs, HHS’s grantees are required to include information on condom effectiveness, and that information must be medically accurate. Therefore, we recommended in a letter dated October 18, 2006, that HHS reexamine its position and adopt measures to ensure that, where applicable, abstinence education materials comply with this requirement. In a letter to us dated January 16, 2007, ACF responded that it would take steps to “make it clear to grantees that when they mass produce materials that as a primary purpose are specifically about STDs those materials are required by section 317P(c)(2) of the Public Health Service Act to contain medically accurate information regarding the effectiveness or lack of effectiveness of condoms in preventing the sexually transmitted disease the materials are designed to address.” The fiscal year 2007 Community-Based Program announcement states that mass produced materials that as their primary purpose are specifically about STDs are subject to this requirement. The announcement also states that mass produced materials are considered to be specifically designed to address STDs if more than 50 percent of the content is related to STDs. An ACF official also told us that future State and Community-Based Program announcements would include this language. Mr. Chairman, this completes my prepared remarks. I will be happy to answer questions you or other Committee Members may have. For further information regarding this testimony, please contact Marcia Crosse at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Major contributors to this report were Kristi Peterson, Assistant Director; Kelly DeMots; Cathleen Hamann; Helen Desaulniers; and Julian Klazkin. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Among the efforts of the Department of Health and Human Services (HHS) to reduce the incidence of sexually transmitted diseases and unintended pregnancies, the agency provides funding to states and organizations that offer abstinence-until-marriage education. GAO was asked to testify on the oversight of federally funded abstinence-until-marriage education programs. This testimony is primarily based on Abstinence Education: Efforts to Assess the Accuracy and Effectiveness of Federally Funded Programs, GAO-07-87 (Oct. 3, 2006). In this testimony, GAO discusses efforts by (1) HHS and states to assess the scientific accuracy of materials used in abstinence-until-marriage education programs and (2) HHS, states, and researchers to assess the effectiveness of abstinence-until-marriage education programs. GAO also discusses a Public Health Service Act requirement regarding medically accurate information about condom effectiveness. GAO focused on the three main federally funded abstinence-until-marriage programs and reviewed documents and interviewed HHS officials in the Administration for Children and Families (ACF) and the Office of Population Affairs (OPA). To update certain information, GAO contacted officials from ACF and OPA. Efforts by HHS and states to assess the scientific accuracy of materials used in abstinence-until-marriage education programs have been limited. As of October 2006, HHS's ACF--which awards grants under two programs that account for the largest portion of federal spending on abstinence education--did not review its grantees' education materials for scientific accuracy, nor did it require grantees of either program to do so. Not all states that receive funding from ACF had chosen to review their program materials for scientific accuracy. OPA reviewed the scientific accuracy of grantees' proposed education materials, and any inaccuracies found had to be corrected before those materials could be used. 
The extent to which federally funded abstinence-until-marriage education materials are inaccurate was not known, but OPA and some states reported finding inaccuracies. GAO recommended that the Secretary of HHS develop procedures to help assure the accuracy of abstinence-until-marriage education materials. An ACF official reported that ACF is currently implementing a process to review the accuracy of Community-based grantees' curricula and has required those grantees to sign assurances that the materials they propose using are accurate. The official also reported that, in the future, state grantees will have to provide ACF with descriptions of their strategies for reviewing the accuracy of their programs. As of August 2006, HHS, states, and researchers had made a variety of efforts to assess the effectiveness of abstinence-until-marriage education programs, but a number of factors limit the conclusions that can be drawn about the programs' effectiveness. ACF and OPA have required their grantees to report on various outcomes used to measure program effectiveness. To assess the effectiveness of its grantees' programs, ACF has analyzed national data on adolescent birth rates and the proportion of adolescents who report having had sexual intercourse. Additionally, 6 of the 10 states in GAO's review worked with third-party evaluators to assess the effectiveness of abstinence-until-marriage programs in their states. However, the conclusions that can be drawn are limited because most of the efforts to evaluate program effectiveness have not met certain minimum criteria that experts have concluded are necessary for such assessments to be scientifically valid. Additionally, the results of some efforts that do meet such criteria have varied. While conducting work for its October 2006 report, GAO identified a legal matter that required the attention of HHS. 
Section 317P(c)(2) of the Public Health Service Act requires certain educational materials to contain medically accurate information about condom effectiveness. GAO concluded that this requirement would apply to abstinence education materials prepared and used by federal grant recipients, depending on their substantive content, and recommended that HHS adopt measures to ensure that, where applicable, abstinence education materials comply with this requirement. The fiscal year 2007 program announcement for the Community-based Program provides information about the applicability of this requirement, and future State and Community-based Program announcements are to include this information.
Air ambulances can play an important role in transporting patients with time-critical injuries and conditions to medical facilities and providing patients with advanced care while en route. Air ambulances transported more than 270,000 patients in 2008, and their use is widely believed to improve the chances of survival for trauma victims and other critical patients. Composing more than 80 percent of air ambulance aircraft, helicopter air ambulances transport patients from the scene of an accident to a hospital or perform short-distance interhospital patient transfers. Because fixed-wing aircraft only fly between airports, they are not typically used to transport injured patients from an accident scene. Patients are transported by ground to and from the airport. Fixed-wing air ambulances generally perform more long-distance interhospital transports, often moving patients from a hospital to a distant specialized facility. Just over half of air ambulance transports are for moving patients between hospital facilities, one-third are for transporting victims from the accident scene to a hospital, and the remainder are for other purposes, such as organ transports or specialty care flights, such as those for pediatric and neonatal patients. Most air ambulances carry a pilot and a two-person medical crew. The medical crew may include a physician, nurse, paramedic, emergency medical technician, or other medical personnel. According to AAMS, the typical medical crew includes a critical care nurse and a paramedic. A critical care nurse has specialized training in responding to life-threatening health problems, such as those faced by many patients who are transported on air ambulances. Paramedics represent the highest licensure level of prehospital emergency care in most states, as they have enhanced skills and can administer a range of medications and interventions. Other caregivers and physicians may be added to a medical crew if the patient’s condition necessitates further care. 
In the air ambulance industry, the business model is generally defined by the entity that owns or contracts for the aviation and medical services that are provided. Air ambulance providers generally use one of the following three business models. Hospital-based: a hospital generally controls the business by providing medical services and staff while usually contracting out for the aviation component, including the pilots, mechanics, and aircraft. Independent: operations are not controlled or run by a specific medical facility. Independent providers may directly employ, or can contract for, the medical and flight crews to provide air ambulance services. Government operator: a state or local government or military unit owns and operates the air ambulances. However, a large number of variations exist within these structures. Some providers have adopted a “hybrid” model or have established joint ventures with hospitals. Air ambulance companies receive payment for transports from several sources, including private health insurance, government programs such as Medicare and Medicaid, and the patient. While industry revenue and payment data are not widely available, we obtained data on the percentage of total income that four air ambulance providers receive from each source. (See fig. 1.) For these four companies, private insurance companies or Medicare paid for most of the transport costs. A relatively small percentage of the costs were paid by the patients themselves. From 2002 through 2006, the Centers for Medicare and Medicaid Services, the agency within the Department of Health and Human Services that administers Medicare and Medicaid, phased in a national fee schedule for air ambulance providers as a part of a series of Medicare payment reforms that Congress mandated in 1997. The national fee schedule redistributed, on a budget-neutral basis, payments among various types of ambulance services. 
Prior to 2002, Medicare reimbursement differed depending on the air ambulance provider’s business model: hospital-based providers were reimbursed based on reasonable costs, while independent providers were reimbursed based on reasonable charges. This policy contributed to wide variation in the reimbursement rate for the same service, with hospital-based providers generally receiving higher reimbursement than independent providers for similar services. The new national fee schedule established one payment rate for fixed-wing transports and another rate for helicopter transports. The fee schedule also provides higher reimbursement for transports in rural areas, but it does not differentiate payments according to the business model followed, the size of the aircraft used, or the level of medical or safety equipment on board. In addition to the revenue they receive from transports, air ambulance providers may receive or generate income for their operations from other sources. For example, hospital-based providers may receive funding from the hospital, and some independent air ambulance providers have established membership programs that generate income from annual fees. Government operators may receive funding through taxes or surcharges. For example, Maryland’s government-operated air ambulance service receives funding through a surcharge on state motor vehicle registrations. From 1999 through 2008, the number of patients transported by helicopter air ambulances increased from just over 200,000 to over 270,000, or about 35 percent, and the number of air ambulance helicopters increased from 360 to 677, or by about 88 percent. The data also show that between 2007 and 2008 there were an increasing number of helicopter air ambulances and a decreasing number of transports. (See fig. 2.) We were unable to determine whether the downward movement in 2008 represents a trend because 2009 data on patients transported were not available. 
The number of air ambulance helicopters varies widely by state. (See fig. 3.) Most states have multiple helicopters based within their borders. Vermont and Rhode Island have none, but their air transport needs are served by providers in bordering states. Since 1999, the structure of the air ambulance industry has also changed. In the past, most air ambulance providers were hospital-based, whereas today, about half the providers are independent, with no hospital involvement in ownership, risk, or financial support. According to industry stakeholders, a variety of factors contributed to the industry’s growth and structural change. The downsizing or closing of some community hospitals, according to stakeholders, resulted in longer transports to get some patients to hospitals, making it more advantageous to use air ambulances, which could transport patients over longer distances more quickly than ground ambulances could. Similarly, the establishment of regional medical facilities, such as cardiac and stroke centers that provide highly specialized care for critically ill patients, encouraged the use of air ambulances, again because they could transport patients more quickly from outlying areas. Finally, implementation of the Medicare fee schedule provided those wishing to provide air ambulance services a degree of predictability for Medicare reimbursement, which stakeholders noted enabled air ambulance providers to develop more accurate financial plans. The growth in the number of helicopters and their movement into communities have generally made them more available to those in need. According to some stakeholders, having multiple air ambulances in an area increases the industry’s capacity to meet regional needs. For example, if one helicopter is unavailable because it is undergoing scheduled maintenance or responding to an air medical transport request, another helicopter in the same region is more likely to be available. 
Additionally, with more air ambulances available in rural communities, rural ground ambulances may be involved less frequently in transporting patients over long distances, and rural communities are less likely to be left without an ambulance or EMS crew. Providers also relocated air ambulance bases, moving them from hospitals into surrounding communities and thereby extending their availability. (See fig. 4.) A 2005 nationwide study of access to trauma centers in the United States found that 84 percent of the population had access to a Level I or II trauma center within 60 minutes. Of that population, almost 28 percent could only access those trauma centers in an hour or less because they were located within the coverage of an air ambulance. Stakeholders concerned with the growth in the industry noted that the increase in the number of helicopters has been focused in areas that already have multiple air ambulance services while rural areas remain underserved. They said that ensuring the availability of air ambulance services in rural areas is problematic because covering a large, sparsely populated geographic area affects profitability and limits companies’ ability to provide services in these areas. Stakeholders concerned with industry growth believe that uncontrolled growth of air ambulances in a region leads to medically unnecessary use—that is, when an air ambulance is dispatched for a patient whose injury or illness is not severe enough for the patient to need air transport. One stakeholder group compared data on the severity of patients’ injuries and discharge rates, developed by Arizona’s Department of Health Services, with similar data for a Level I trauma center in New Hampshire and an air ambulance service in Boston. 
According to their analysis, the injuries of patients transported in Arizona, a state with a comparatively large number of helicopters, were less severe than those of patients transported in the two other states, which have fewer helicopters. However, the comparison does not examine other factors involved in decisions about how to transport patients, including transport distances, who makes the transport decision, and what protocols guide the decision maker. Additionally, the decision to request an air ambulance is generally made by the attending physician at a hospital or by first responders at an accident scene. Concerns about medically unnecessary use of air ambulances have existed since the early 1980s. We identified 32 studies examining triage criteria using data collected from as early as 1975 to as recently as 2008. The authors of 15 of these studies conclude that further measurement indices are needed to better identify over- and undertriage of patients transported by air ambulance. Because triage protocols and patterns of air ambulance utilization have changed considerably in the past 30 years, early reports must be interpreted with caution, and their relevance to current triage protocols and air ambulance use is unclear. It is also important to consider these studies in their historical context. Numerous guidelines on appropriate use of ambulances have been published. In 2006, the American College of Emergency Physicians and the National Association of EMS Physicians (NAEMSP) issued Guidelines for Air Medical Dispatch that built upon earlier guidelines published by NAEMSP, AAMS, and the American Academy of Pediatrics. The 2006 position statement recognized the continuing debate surrounding air medical transport and noted that research regarding the appropriate deployment of complex medical care systems was in its infancy. 
Furthermore, the position statement noted that many EMS systems have their own criteria for air medical dispatch, which usually differ between regions based on demographic, geographic, and health care resource considerations. Work on developing national guidelines is under way. After its February 2009 air ambulance safety public hearing, NTSB recommended that the Federal Interagency Committee on Emergency Medical Services develop national guidelines for selecting the most appropriate emergency transportation mode for urgent care. In response, the committee has begun to develop guidelines for the emergency transport of trauma victims from the scene of injury. These guidelines may eventually include recommendations for the transport of patients with other medical emergencies and for interfacility transports. Proponents of increasing state regulatory authority argue that having multiple providers in the same area creates pressure to fly that can lead to a number of unsafe practices. They maintain that providers’ high fixed costs create economic pressure to fly, and the concentration of many air ambulances in a geographic area further exacerbates this pressure. Fixed costs can account for up to 80 percent of an air ambulance provider’s total costs. The air ambulance itself can cost from $600,000 to $12 million when outfitted with varying levels of flight and medical equipment. Participants at NTSB’s February 2009 public hearing discussed potential safety concerns with helicopter shopping. Helicopter shopping refers to the practice of calling, in sequence, various providers until a provider agrees to take a flight assignment. Stakeholders who support the existing regulatory and oversight framework noted that there are situations where calling additional providers is an appropriate and safe use of resources. (See table 1.) Having information on prior turndowns or aborted missions could help a provider decide whether it is safe to fly. 
FAA has provided state EMS officials with a sample letter that outlines communications policies, including policies on disclosing information about prior turndowns, and that could be given to dispatchers within their states. However, even with information on prior turndowns, pilots are responsible for checking weather conditions and determining if the conditions meet FAA’s requirements for flying. NASEMSO representatives suggested that sequentially calling additional air ambulance providers consumes time during which a patient could be en route to a trauma center via ground ambulance. Call jumping occurs when a provider sends an air ambulance to an accident scene without a request. If another air ambulance provider is also responding based on a request from first responders, there is a heightened risk of collision. Stakeholders who advocate for an increase in state regulatory authority maintain that, like helicopter shopping, call jumping can result from economic pressure to fly. However, some instances perceived as call jumping may stem from a lack of communication among first responders. (See table 2.) To minimize the risk of two helicopters responding based on separate requests from first responders, states can establish communication and coordination protocols to be followed at the more than 6,000 public safety answering points, or 911 call centers, nationwide. These centers provide the opportunity to coordinate air ambulance requests and avoid dispatching two air ambulances to the same crash scene. However, these centers are locally based and operated, and their structure varies widely. Beyond anecdotes, we found little evidence of helicopter shopping resulting in unsafe flights or of call jumping. We identified the Aviation Safety Reporting System (ASRS), which NASA administers for FAA, as a potential source for such information. 
As a voluntary reporting system, ASRS contains reporting biases, reflecting that not all participants in the aviation system are equally aware of ASRS or equally willing to file reports. Consequently, ASRS statistics represent a conservative measure of the number of such events that are occurring. In our review of 464 air ambulance reports submitted to ASRS over 15 years, we found 2 that contained information about call jumping and none that described instances of helicopter shopping. These data could indicate that helicopter shopping and call jumping occur infrequently. On the other hand, these practices may be underreported if air ambulance crews are unaware that they can report safety issues to ASRS. During the summer of 2010, the Center for Leadership, Innovation and Research in EMS established the EMS Voluntary Event Notification Tool (EVENT)—an anonymous, non-punitive, and confidential web-based system that allows anyone in the United States or Canada to report an event or action that leads to, or has the potential to lead to, a worsened patient outcome. Reports received in EVENT are sent to the EMS governing body of the state, territory, or province responsible for the EMS system in which the event occurred. Once the governing body receives the anonymous notification, it is encouraged to address systemic issues in order to improve the overall quality of care provided. As of September 1, 2010, EVENT had received one report. While it is too early to evaluate the impact of the EVENT reporting system, it appears to be a positive step that could provide useful data for state regulators. FAA is in the process of addressing several recommendations that NTSB has made regarding helicopter air ambulance safety. 
FAA officials expect to release a notice of proposed rulemaking in the fall of 2010 that would address issues such as additional safety equipment requirements, minimum acceptable weather conditions, use of risk management practices, and additional training requirements. Stakeholders concerned with the growth of the industry assert that economic pressures have led some air ambulance providers to cut costs by using smaller, less expensive helicopters and less experienced medical crews. In particular, they point to the use of small, single-engine helicopters instead of twin-engine helicopters. According to these stakeholders, larger helicopters allow access to the patient’s entire body, while the smaller helicopters that some providers use restrict medical access to the full body of the patient. However, single-engine helicopters are not always smaller than twin-engine helicopters. During our site visits, we observed how patients were transported in one particular single-engine helicopter. We also saw that medical personnel had access to the patient’s upper body, which facilitates airway management, an important component of prehospital care. (See fig. 5.) The patient’s lower body is situated next to the pilot, with a transparent barrier separating the patient and the pilot. A senior official at that provider agreed that the space inside the helicopter is limited but said the helicopter meets the medical needs of most patients. However, there are differing perspectives in the industry about the need to have access to a patient’s entire body during transport. Stakeholders concerned with growth in the industry told us that small helicopters generally lack climate control, which results in temperatures in the aircraft that may be either too cold or too hot. According to an experienced emergency medical technician-paramedic, air that is too cold adversely affects trauma patients, while air that is too hot adversely affects cardiac patients. 
Stakeholders who favor the existing regulatory and oversight framework point out that the need for climate control might vary depending on the region in which the air ambulance operates. An air ambulance provider that operates in a southern climate may not need a heater, while one that operates in a northern climate may not need an air conditioner. One provider we visited that generally operates smaller helicopters told us that all of its 87 aircraft have heaters and are being outfitted with air conditioning as they undergo refurbishment. We were told that physicians and hospitals can exercise some degree of control over helicopter characteristics. For example, we were told that the requesting physician sometimes requires that an air ambulance have climate control when it is necessary for the medical care of the patient in interfacility transfers. One stakeholder we spoke with commented that physicians are often unaware that air ambulances may lack climate control and would therefore not be inclined to ask about it. According to a senior DOT official, the department was exploring whether regulation of climate control in air ambulance helicopters is under federal or state jurisdiction. Stakeholders concerned with the growth in the industry also argue that, to save on costs, some providers are hiring less experienced medical crews, a practice they maintain degrades patient services. We were unable to validate this argument through our literature synthesis. We identified seven studies on the impact of a medical crew’s composition—whether, for example, the crew consists of a physician and a nurse or a nurse and a paramedic—but there was no consensus on how the composition of the medical crew influences a patient’s outcome. We also found three studies examining the impact of crew composition on transport time, and all three found that crew composition had no impact on transport time. 
We found no studies examining the impact of a medical crew’s experience on patient outcomes. Several of the concerns raised by stakeholders within the air ambulance community appear to be outcomes of industry growth and competition. For example, concerns about helicopter shopping or call jumping might arise if providers are competing to gain business. Similarly, concerns about migration toward single-engine aircraft or reductions in the qualifications of medical staff might arise as companies seek to cut costs to improve profitability. The pressure of competing for business and working to obtain maximum efficiency through cost containment arises in nearly all business endeavors. These forces are usually good for consumers because they lead to efficiency, lower prices, and service offerings better tailored to the needs and desires of consumers. However, health care markets have some imperfections, and these forces might work differently in these markets. For example, health care consumers may lack information about their diagnoses, treatment needs, and the quality of different providers, as well as the prices charged by different providers. Additionally, health insurance can affect consumers’ ability or inclination to make informed health care choices. Air medical patients have limited influence on air medical markets and typically do not choose the mode of transport or the provider. For air ambulance services, medical outcomes are a critical measure of quality. Through our research, we identified numerous articles documenting rigorous research on various aspects of air ambulances, but very few shed light on the effect of the growth of the industry. For example, we found no studies that compare patient outcomes between states that have multiple providers in the same region and states with fewer providers. Consequently, we were unable to draw definitive conclusions to support or refute many of the allegations that have been raised. 
DOT’s General Counsel and National Highway Traffic Safety Administration officials agreed that more data on many aspects of air ambulance operations would enlighten the debate about providing states greater regulatory authority over air ambulances. While there was consensus among the stakeholders in the industry that there is a lack of data about potential concerns, ACCT stated that the debate about the extent of state regulatory authority over air ambulances is fundamentally one of philosophical differences about the government’s role in controlling public services, such as emergency medical services. Because air ambulances have both an aviation component, regulated by FAA, and a medical component, regulated by the states, the boundaries of federal and state regulation have come under question. The aviation components include the aircraft itself, including its airworthiness and safety, as well as the personnel who maintain and pilot the aircraft, communicate with ground personnel, and monitor flight instruments, while medical personnel attend to the health of the patient on board. (See fig. 6.) These aviation components are under the jurisdiction of FAA, which administers federal aviation regulations that govern safety and operational requirements nationwide. Hence, the industry is subject to FAA safety regulations covering areas such as pilot training requirements, flight equipment, and aircraft configuration. The medical component, on the other hand, is under state regulatory authority. DOT opinion letters and federal and state court decisions have affirmed that states have the authority to enact and enforce requirements for medical services delivered to patients in air ambulances and for the medical staffing, personnel, and equipment used to deliver those services. States also have the authority to develop training on how to use an aircraft or equipment so as to ensure proper patient care. 
For example, such training might focus on how pressurization in the aircraft cabin affects specific medical conditions. As noted earlier, some stakeholders favor changing the regulatory and oversight framework so that states would have a stronger role in regulating the nature and scope of services that an air ambulance provider must offer. For example, state EMS officials believe that they should be able to determine the appropriate number of air ambulances serving a particular area and set additional standards in terms of equipment used and services provided, as they currently do for other parts of the EMS system. However, strengthening the states’ role would require federal legislation to alter the Airline Deregulation Act (ADA) of 1978, which deregulated the air carrier industry. Court decisions subsequent to the passage of the ADA determined that air ambulances were air carriers as defined by the ADA. In enacting the ADA, Congress determined that “maximum reliance on competitive market forces” would best further “efficiency, innovation, and low prices” as well as “variety [and] quality ... of air transportation.” One ADA provision, designed to phase out state governments’ economic control over the industry, explicitly precludes state regulation of matters related to air carrier rates, routes, and services. Courts have ruled that this provision preempts states from acting in some regulatory areas, such as requiring prospective air ambulance providers to obtain a certificate of need based on the state’s assessment of the population to be served and the potential for unnecessary duplication of services. Over the past two decades, federal and state courts, and DOT, through opinion letters issued by its Office of General Counsel, have affirmed these authorities and have determined the specific issues that states can and cannot regulate. (See table 3.) 
Dating as far back as 1986, courts have ruled that state certificate of need laws are unenforceable because they conflict with the ADA by limiting the number of air ambulance services doing business within the state. DOT, responding to numerous inquiries from state Attorneys General and private industry, has advised that certificate of need provisions and similar “public convenience and necessity” provisions are expressly preempted by the ADA because the states are attempting to regulate in the area of price, routes, and services. Most recently and prominently, a federal district court in North Carolina found that the state’s certificate of need requirement was preempted by the ADA. These rulings are limited to specific states. Stakeholders concerned with the growth in the industry generally support a stronger role for states in regulating the air ambulance industry. They believe that many of the court rulings and DOT opinions diminish states’ ability to oversee patient care and safety. For example, DOT, in a letter to an attorney in the state of Hawaii, wrote that states cannot require, through regulation, that air ambulance providers operate on a 24/7 basis, on the grounds that such a requirement constitutes economic regulation. These stakeholders view a requirement for air ambulances to operate on a 24/7 basis as a patient care issue that states should be able to control. DOT further stated in its letter that states could contract with air ambulance providers for these services. Under such circumstances, the states would be functioning as customers rather than regulators and therefore would not be subject to federal preemption of state regulation. In commenting on a draft of this report, ACCT and NASEMSO stated that contracting for air ambulance services in this manner is not a realistic option for states because of fiscal resource limitations. (See app. 
III for a complete description of significant federal and state court cases and DOT and state attorneys general opinions.) DOT has also stated that “it is possible that a state medical program, ostensibly dealing with only medical equipment/supplies aboard aircraft, could be so pervasive or so constructed as to be indirectly regulating in the pre-empted economic area of air ambulance prices, routes or services.” Stakeholders have expressed concern that the open-ended nature of this statement allows any medical regulation to be challenged as an economic regulation and thus be preempted under the ADA. However, it is important to note that DOT did not find that any specific “medical” regulation was preempted under this reasoning and has not found that any state regulation to date falls within this category. Stakeholders have raised concerns that there is no regulation at either the federal or state level to protect the public from the economic consequences of air ambulance practices. These stakeholders also expressed concerns about areas of state regulation that create uncertainty because DOT and the federal and state courts have yet to rule on them, such as a requirement for climate control on air ambulances. Uncertainty about how the courts would rule has led to calls for a federal legislative solution that would spell out federal and state authorities. Several federal legislative proposals seek to clarify the states’ role in regulating medical issues and to allow the states to institute certain types of economic regulation for air ambulances, including certificate of need requirements, by carving out an exception to the ADA’s preemption of state regulation of prices, routes, and services. However, the current scheme of regulation of air ambulances has been in place since 1978 and has generated four significant court decisions that, for the most part, have addressed fact-specific questions about the relationship between federal and state authority to oversee and regulate the industry. 
DOT has stated that the continued use of case-by-case departmental determinations can still clarify the appropriate role of states in regulating air ambulance services. DOT officials told us that states should address their uncertainties to DOT, and the department is more than willing to respond with an opinion based on the facts and circumstances presented. However, it appears that states have not fully utilized this option. Since 1986, DOT has issued only eight opinion letters in response to inquiries on the limits of federal and state authority over air ambulances. Stakeholders favoring increased state regulatory authority have expressed concerns with the continuance of this case-by-case approach, stating that it results in piecemeal guidance, inconsistency, and confusion. DOT officials have also raised concerns that allowing states to exert authority, in this case in the economic area, could create a patchwork of state regulation disrupting what has been, until now, a fairly well-understood set of uniform rules. Moreover, DOT, along with the Federal Trade Commission and the Department of Justice, has expressed concern that state authority to implement certificate of need laws could be used to limit market entry for air ambulances and reduce competition in the air ambulance industry—an outcome Congress sought to avoid when enacting the ADA. We provided a draft of this report to the Departments of Transportation (DOT) and Health and Human Services (HHS) and the National Transportation Safety Board (NTSB) for comment. We also invited representatives from the Association of Air Medical Services (AAMS), the Association for Critical Care Transport (ACCT), the Air Medical Operators Association (AMOA), and the National Association of State EMS Officials (NASEMSO) to review a draft of this report and provide comments. 
There was a consensus among the reviewers that there is a lack of data about the air ambulance industry and a recognition that the study had to rely on available data and information, which we obtained by conducting a comprehensive review of the existing subject area literature and recording stakeholder comments and opinions. Further, this lack of empirical evidence limited our ability to determine the full impact of changes in the industry. Our research of the air ambulance industry and discussions with stakeholders within the industry identified two distinct perspectives about the impact of the changes. To the extent that data or other information was available, we provided it to inform these perspectives. Where data or other information did not exist, we clearly attributed statements and identified the perspective of the stakeholders making the comment. DOT’s Office of General Counsel and HHS provided technical comments that we incorporated as appropriate. The National Highway Traffic Safety Administration (NHTSA), within DOT, provided detailed comments that we also incorporated as appropriate. NTSB transmitted written comments to us in a letter. (See app. IV.) NTSB’s statement in its letter that GAO was asked to “review the U.S. air ambulance industry to determine if changes in oversight authority are needed” is not accurate. As stated in the report, the objectives of our work were to examine how the air ambulance industry had changed over the last decade and the implications of these changes, as well as to examine the relationship between federal and state oversight and regulation of the industry. While our report contains information that may be used when considering whether changes in oversight authority for the air ambulance industry may be needed, we were not asked to determine if changes are needed and thus do not address this question in our report. NTSB identified three issues that it believed should be discussed in more detail in our report. 
First, NTSB noted that the draft should have addressed in greater detail that competition in the air ambulance industry is restricted because of fixed fee reimbursements by payers (private insurers, Medicare, and Medicaid) for air ambulance services and the industry’s limited capacity to adjust prices. While Medicare and Medicaid reimbursement rates are fixed, private sector prices are not. As is the case with most health care services, air ambulance providers generally negotiate prices with insurance companies. NTSB further noted that such restricted competition could be linked to safety concerns. Following the Board’s February 2009 public hearing on air ambulance helicopter safety, NTSB issued several safety recommendations, including one to HHS to determine if reimbursement rates should differ according to the level of air ambulance transport safety provided. In response, HHS stated that it did not believe that payment should vary based on the level of transport safety provided but that all air ambulance operators should meet minimum FAA safety standards. Second, NTSB said that the draft did not clearly state whether there is evidence that helicopter shopping and call jumping occur, and if so, to what extent. In response, we clarified that beyond anecdotes, we found little evidence of helicopter shopping resulting in unsafe flights or of call jumping. NTSB additionally raised questions about our use of ASRS as a source of information regarding the prevalence of these practices. We agree that ASRS has limitations, and we opted to include it in the report because it is one of the few available data sources with information applicable to the industry. We added information in the report about these limitations and the potential for underreporting. Finally, NTSB noted that it would be helpful to know if there is evidence to support the belief that the use of air ambulances improves the chances of survival for trauma victims and other critical patients. 
It was not our objective to determine if air ambulance transport is beneficial, and we did not do the research necessary to comment on the validity of the belief. AAMS provided technical comments, which we incorporated where appropriate. Comments provided by ACCT, AMOA, and NASEMSO were generally reflective of their views regarding the implications of the changes in the air ambulance industry and the role of states in regulating the industry. We incorporated their comments throughout the report as appropriate. We are sending copies of this report to the appropriate congressional committees, DOT, the Department of Health and Human Services, NTSB, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The scope of our review was the structure and practices of the air ambulance industry in the United States and the framework for overseeing and regulating U.S. air ambulance services. To determine how the U.S. air ambulance industry changed from 1999 through 2008, we obtained and analyzed available data that provided information on the growth and evolution of the industry, including shifts in business models and the types of air ambulance aircraft that are used to provide services. Specifically, we reviewed and analyzed data compiled by Ira Blumen, MD, the Medical/Program Director of the University of Chicago Aeromedical Network. His database on the air ambulance industry extends back to 1980 and includes the number of helicopter air ambulances used in the industry and the number of patients transported. 
We also reviewed and analyzed the data contained in the Atlas and Database of Air Medical Services (ADAMS) and interviewed a senior official at the Calspan-University of Buffalo Research Center (CUBRC), the research organization that maintains and publishes the database in partnership with the Association of Air Medical Services (AAMS). ADAMS has been updated annually since 2004 and serves as a centralized source of information on air medical service providers, including the number and location of air ambulance helicopter bases. ADAMS began including data for fixed-wing air ambulances in 2007. At our request, CUBRC also provided us with an update of the types of helicopters used in the air ambulance industry. We also obtained and analyzed data on Medicare payments to air ambulance providers from the Centers for Medicare & Medicaid Services, the agency within the Department of Health and Human Services that administers the Medicare and Medicaid programs. Additionally, we reached out to more than 400 air ambulance providers, industry associations, and state Emergency Medical Services officials, asking that they provide us any data, information, published or unpublished reports, papers, articles, or other potentially relevant sources of information of which they would like us to be aware. To determine how the industry has evolved, we examined industry, National Transportation Safety Board, and stakeholder documents. To determine the implications of these changes for the availability of services, efficient use of air ambulance resources, safety, and services provided, we undertook an extensive literature synthesis covering over 250 articles describing scholarly research that produced quantitative results. For more detailed information on the literature synthesis, see appendix II. We attended two semiannual meetings of the Federal Interagency Committee on Emergency Medical Services. 
We also reviewed previous GAO reports; Federal Aviation Administration (FAA) documents; the transcript of a 2009 National Transportation Safety Board (NTSB) hearing on helicopter emergency medical services and the board’s recommendation letters; a congressional hearing transcript; congressional testimonies, reports, and position papers published by AAMS and other stakeholder associations; and published documents of the Foundation for Air-Medical Research and Education and the Flight Safety Foundation. In addition, we conducted interviews with representatives of AAMS; industry stakeholders who favor changing the regulatory and oversight framework, including representatives of the Association for Critical Care Transport (the leading proponent of change in the regulatory and oversight framework); and industry stakeholders who oppose changing the regulatory and oversight framework, including representatives of the Air Medical Operators Association (the key industry group favoring the existing regulatory and oversight framework). We also conducted four site visits to air ambulance providers that reflected differing geographic locations, business models, and opinions about regulatory structure. Specifically, we observed operations at a government-provided air ambulance service operated by the Maryland State Police, a hospital-based air ambulance service in the mid-Atlantic region (MedStar Transport), and independent providers headquartered in Missouri (Air Evac Lifeteam) and Maine (LifeFlight of Maine). Air Evac Lifeteam’s management favors the existing regulatory structure, while LifeFlight of Maine’s management advocates increased state regulation of the air ambulance industry. We also met with representatives of the Dartmouth-Hitchcock Advanced Response Team, which is a hospital-based provider, and Boston MedFlight, which is consortium-owned. 
To determine the relationship between federal and state oversight and regulation of the air ambulance industry, we reviewed federal aviation laws, including the Airline Deregulation Act (ADA) of 1978, and challenges to state authority to regulate in matters that are federally preempted under these laws. We also reviewed Department of Transportation (DOT) General Counsel letters and state attorneys general opinion letters to state officials or attorneys. We discussed these letters, which interpret provisions of the ADA, with DOT General Counsel officials. We also discussed the implications of industry trends and federal and state regulatory authority with the key industry stakeholders mentioned above, as well as with officials at the Federal Aviation Administration (FAA), the National Highway Traffic Safety Administration, and NTSB, and with representatives of the National Association of State Emergency Medical Services Officials. We also received briefings and reviewed documents provided by proponents and opponents of increased state regulation. To identify and evaluate literature and studies that contain empirical data related to the air ambulance industry, we conducted a literature synthesis. Our objective was to identify any studies with empirical data related to air ambulance availability, services provided in the air ambulance, competition, and cost. We initially searched for articles published in the preceding 5 years, from January 2005 to January 2010. The search focused on the safety, cost, quality, and oversight of air ambulance services, including studies and articles that addressed the issues of helicopter shopping and call jumping. The search statements included a variety of terms to capture materials that examined these issues. 
We queried various bibliographic research databases, including ProQuest, AcademicOneFile, MEDLINE, Dialog Transportation and Transportation Business, Electronic Collections Online, and Nexis for scholarly and trade literature; Congressional Research Service, Congressional Budget Office, GAO, Government Printing Office, and National Technical Information Service databases for publications produced by or funded by the federal government; and PolicyFile and WorldCat for government publications and literature that is not published commercially or is not generally accessible. The results of this search, combined with articles obtained through discussions with stakeholders in the air ambulance industry, Internet searches, and our review of air ambulance-related Web sites, yielded 36 relevant studies. As the review progressed and the dearth of quantifiable data became evident, we expanded our search criteria to include all articles published between January 1, 2000, and May 2010, that contained empirically derived results. We determined that this time frame would include studies performed prior to the proliferation of helicopters in the air ambulance industry that started occurring around 2002-2003. In this search, we looked at air ambulances in a broader context and aimed to be more comprehensive than in previous searches. Search statements relied primarily on subject terms (when available) for air ambulances and similar concepts and did not include any other search terms as modifiers. The databases searched were Nexis Statistical Master File, ProQuest, Academic OneFile, GAO, MEDLINE, Biosis, SciSearch, Cumulative Index to Nursing and Allied Health Literature, EMBASE, PASCAL, Gale Group Health and Wellness Database, National Technical Information Service, TRIS, Government Printing Office, Electronic Collections Online, and Ovid. 
The librarian reviewed the search results and removed duplicate citations; foreign air ambulance service, military-based, or medical procedure studies; and nonrelevant articles. A total of 641 citations were sent to the team for review. The team reviewed all the titles sent by the librarian. Articles with no abstract were excluded because they were likely to lack empirical findings, to pertain to current events, or to be editorial commentary on current policy issues. For articles with abstracts, two team members independently reviewed the abstract to determine if the article addressed the previously identified topics and appeared to contain empirical data. If both reviewers agreed that the article was relevant or not relevant, the article was saved or rejected accordingly. When the reviewers disagreed, a third team member reviewed the abstract and made the final decision. The team requested that the librarians obtain complete copies of all saved, relevant articles. This process yielded 91 relevant studies. All relevant full-text studies underwent three reviews: first by an analyst who synthesized the study, then an initial review by a methodologist, and finally a review by a second methodologist. The methodologists determined whether the research was sufficiently rigorous to support the stated conclusions. Articles that were not based on U.S. populations or did not include empirical data were excluded. Relevant articles were summarized in a synthesis document that captured the title, authors, setting, sponsor of the research, methods, findings and conclusions, and limitations. The team reviewed the bibliographies of the relevant articles synthesized in step 3 to identify additional potentially relevant articles. The team then selected articles from the bibliographies that appeared relevant and were (1) in English, (2) not based on a foreign population, (3) not international studies, and (4) not military studies. 
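The abstract screening step described above follows a simple decision rule: two independent judgments, with agreement deciding the outcome and a third reviewer breaking ties. The following is a minimal illustrative sketch of that rule (the function and parameter names are ours, not part of the report's methodology):

```python
# Hypothetical sketch of the two-reviewer screening rule: two team members
# independently judge each abstract as relevant (True) or not (False);
# agreement decides, and a third reviewer's judgment breaks any tie.

def screen_abstract(reviewer_a: bool, reviewer_b: bool, tiebreaker: bool = None) -> bool:
    """Return True if the article is saved as relevant, False if rejected."""
    if reviewer_a == reviewer_b:
        return reviewer_a  # both agree: save or reject accordingly
    if tiebreaker is None:
        raise ValueError("disagreement requires a third reviewer's decision")
    return tiebreaker  # the third team member makes the final decision

# Example: the reviewers disagree, so the third reviewer's call stands.
print(screen_abstract(True, False, tiebreaker=True))   # -> True
print(screen_abstract(False, False))                   # -> False
```

The rule guarantees that no single reviewer's judgment alone can admit or exclude an article when the two initial reviews conflict.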
For articles that met these criteria, the team attempted to obtain the abstracts from the National Institutes of Health’s National Library of Medicine PubMed database (http://www.ncbi.nlm.nih.gov/pubmed). The team then repeated the abstract review, synthesis, and bibliography review process one additional time (see fig. 7). With a methodologist’s help, the team analyzed and aggregated the synthesized articles to develop narratives describing the findings of the literature. Table 4 summarizes key court cases related to the air ambulance industry. Table 5 summarizes DOT and state attorneys general opinions related to the air ambulance industry. In addition to the contact named above, Maria Edelstein, Assistant Director; Edmond Menoche, Senior Analyst; Amy Abramowitz; Heather Bartholomew; Owen Bruce; Christine Brudevold; Leia Dickerson; Leslie Gordon; David Hooper; Karla Lopez; Ashley McCall; Sara Ann Moessbauer; Cynthia Saunders; and Kristin VanWychen made key contributions to this report. Arfken, C.L., M.J. Shapiro, P.Q. Bessey, and B. Littenberg. “Effectiveness of Helicopter Versus Ground Ambulance Services for Interfacility Transfer.” The Journal of Trauma, Injury, Infection, and Critical Care vol. 45 no. 4 (1998): 785. Baack, B.R., E.C. Smoot, J.O. Kucan, L. Riseman, and J.F. Noak. “Helicopter Transport of the Patient with Acute Burns.” The Journal of Burn Care & Rehabilitation vol. 12 no. 3 (1991): 229-233, http://www.ncbi.nlm.nih.gov/pubmed/1885639. Baxt, W.G. and P. Moody. “The Impact of Advanced Prehospital Emergency Care on the Mortality of Severely Brain-Injured Patients.” The Journal of Trauma vol. 27 no. 4 (1987): 365-369, http://www.ncbi.nlm.nih.gov/pubmed/3573084. Baxt, W.G. and P. Moody. “The Impact of Rotorcraft Aeromedical Emergency Care Service on Mortality.” JAMA: The Journal of the American Medical Association vol. 249 no. 22 (1983): 3,047-3,051. Baxt, W.G. and P. Moody. 
“The Impact of a Physician as Part of the Aeromedical Prehospital Team in Patients with Blunt Trauma.” JAMA: The Journal of the American Medical Association vol. 257 no. 23 (1987): 3,246-3,250. Benson, N.H., R.L. Alson, E.G. Norton, A.P. Beauchamp, R. Weber, and J.L. Carreras. “Air Medical Transport Utilization Review in North Carolina.” Prehospital and Disaster Medicine vol. 8 no. 2 (1993): 133-137. Berns, K.S., J.J. Caniglia, D.G. Hankins, and S.P. Zietlow. “Use of the Autolaunch Method of Dispatching a Helicopter.” Air Medical Journal vol. 22 no. 3 (2003): 35-41. Bledsoe, B.E., A.K. Wesley, M. Eckstein, T.M. Dunn, and M.F. O’Keefe. “Helicopter Scene Transport of Trauma Patients with Nonlife-Threatening Injuries: A Meta-Analysis.” Journal of Trauma-Injury Infection and Critical Care vol. 60 no. 6 (2006): 1,257-1,265. Boyd, C.R., K. Corse, and R.C. Campbell. “Emergency Interhospital Transport of the Major Trauma Patient: Air Versus Ground.” Journal of Trauma vol. 29 no. 6 (1989): 789-793. Branas, C.C., E.J. MacKenzie, J.C. Williams, H.M. Teeter, M.C. Flanigan, A.J. Blatt, and C.S. ReVelle. “Access to Trauma Centers in the United States.” JAMA: The Journal of the American Medical Association vol. 293 no. 21 (2005): 2,626-2,633. Brathwaite, C.E., M. Rosko, R. McDowell, J. Gallegher, J. Proenca, and M.A. Spott. “A Critical Analysis of On-Scene Helicopter Transport on Survival in a Statewide Trauma System.” Journal of Trauma vol. 45 no. 1 (1998): 140-144. Burney, R.E., D. Hubert, L. Passini, and R. Maio. “Variation in Air Medical Outcomes by Crew Composition: A Two-Year Follow-Up.” Annals of Emergency Medicine vol. 25 no. 2 (1995): 187-192. Burney, R.E., L. Passini, D. Hubert, and R. Maio. “Comparison of Aeromedical Crew Performance by Patient Severity and Outcome.” Annals of Emergency Medicine vol. 21 no. 4 (1992): 375-378. Burney, R.E., K.J. Rhee, R.G. Cornell, M. Bowman, D. Storer, and J. Moylan. 
“Evaluation of Hospital-Based Aeromedical Transport Programs using Therapeutic Intervention Scoring.” Aviation, Space, and Environmental Medicine vol. 59 no. 6 (1988): 563-566, http://www.ncbi.nlm.nih.gov/pubmed/3390116. Carr, B.G., J.M. Caplan, J.P. Pryor, and C.C. Branas. “A Meta-Analysis of Prehospital Care Times for Trauma.” Prehospital Emergency Care vol. 10 no. 2 (2006): 198-206. Celli, P., A. Fruin, and L. Cervoni. “Severe Head Trauma: Review of the Factors Influencing the Prognosis.” Minerva Chirurgica vol. 52 no. 12 (1997): 1,467-1,480. Chappell, V.L., W.J. Mileski, S.E. Wolf, and D.C. Gore. “Impact of Discontinuing a Hospital-Based Air Ambulance Service on Trauma Patient Outcomes.” Journal of Trauma vol. 52 no. 3 (2002): 486-491. Cocanour, C.S., R.P. Fischer, and C.M. Ursic. “Are Scene Flights for Penetrating Trauma Justified?” The Journal of Trauma vol. 43 no. 1 (1997): 83-86. Cook, C.H., P. Muscarella, A.C. Praba, W.S. Melvin, and L.C. Martin. “Reducing Overtriage without Compromising Outcomes in Trauma Patients.” Archives of Surgery (Chicago, Ill.: 1960) vol. 136 no. 7 (2001): 752-756, http://www.ncbi.nlm.nih.gov/pubmed/11448384. Cudnik, M.T., C.D. Newgard, H. Wang, C. Bangs, and R. Herrington. “Distance Impacts Mortality in Trauma Patients with an Intubation Attempt.” Prehospital Emergency Care vol. 12 no. 4 (2008): 459-466. Cunningham, P., R. Rutledge, C. Baker, and T. Clancy. “A Comparison of the Association of Helicopter and Ground Ambulance Transport with the Outcome of Injury in Trauma Patients Transported from the Scene.” The Journal of Trauma, Injury, Infection, and Critical Care vol. 43 no. 6 (1997): 940. Davis, D.P., J. Peay, B. Good, M.J. Sise, F. Kennedy, A.B. Eastman, T. Velky, and D.B. Hoyt. “Air Medical Response to Traumatic Brain Injury: A Computer Learning Algorithm Analysis.” Journal of Trauma vol. 64 no. 4 (2008): 889-897. Davis, D.P., J. Peay, J.A. Serrano, C. Buono, G.M. Vilke, M.J. Sise, F. Kennedy, A.B. Eastman, T. Velky, and D.B. 
Hoyt. “The Impact of Aeromedical Response to Patients with Moderate to Severe Traumatic Brain Injury.” Annals of Emergency Medicine vol. 46 no. 2 (2005): 115-122. Davis, D.P., J. Stern, M. Ochs, M.J. Sise, and D.B. Hoyt. “A Follow-Up Analysis of Factors Associated with Head-Injury Mortality After Paramedic Rapid Sequence Intubation.” Journal of Trauma-Injury Infection & Critical Care vol. 59 no. 2 (2005): 484-488. Eckstein, M., T. Jantos, N. Kelly, and A. Cardillo. “Helicopter Transport of Pediatric Trauma Patients in an Urban Emergency Medical Services System: A Critical Analysis.” Journal of Trauma vol. 53 no. 2 (2002): 340-344. Emerson, C. and D.L. Funk. “Automatic Helicopter Standby Policy for Seriously Injured Patients.” Air Medical Journal vol. 22 no. 4 (2003): 32-35. Falcone, R.E., R. Johnson, and R. Janczak. “Is Air Medical Scene Response for Illness Appropriate?” Air Medical Journal vol. 12 no. 6 (1993): 191, 193-195, http://www.ncbi.nlm.nih.gov/pubmed/10128289. Fromm, R.E., Jr., E. Hoskins, L. Cronin, C.M. Pratt, W.H. Spencer III, and R. Roberts. “Bleeding Complications Following Initiation of Thrombolytic Therapy for Acute Myocardial Infarction: A Comparison of Helicopter-Transported and Nontransported Patients.” Annals of Emergency Medicine vol. 20 no. 8 (1991): 892-895, http://www.sciencedirect.com/science/article/B6WB0-4FR609K-1J6/2/0e35eca494a8869353393ec49cfca80. Fromm, R.E., R. Haider, P. Schlieter, and L.A. Cronin. “Utilization of Specialized Services by Air Transported Cardiac Patients: An Indicator of Appropriate Use.” Aviation, Space, and Environmental Medicine vol. 63 no. 1 (1992): 52-55, http://www.ncbi.nlm.nih.gov/pubmed/1550534. Gabram, S.G., S. Stohler, R.K. Sargent, R.J. Schwartz, and L.M. Jacobs. “Interhospital Transport Audit Criteria for Helicopter Emergency Medical Services.” Connecticut Medicine vol. 55 no. 7 (1991): 387-392, http://www.ncbi.nlm.nih.gov/pubmed/1935060. Hamman, B.L., J.I. Cue, F.B. Miller, D.A. O’Brien, T. House, H.C. 
Polk Jr, and J.D. Richardson. “Helicopter Transport of Trauma Victims: Does a Physician Make a Difference?” The Journal of Trauma vol. 31 no. 4 (1991): 490-494. Härtl, R., L.M. Gerber, L. Iacono, Q. Ni, K. Lyons, and J. Ghajar. “Direct Transport within an Organized State Trauma System Reduces Mortality in Patients with Severe Traumatic Brain Injury.” The Journal of Trauma vol. 60 no. 6 (2006): 1,250-1,256, http://www.ncbi.nlm.nih.gov/pubmed/16766968. Housel, F.B., D. Pearson, K.J. Rhee, and J. Yamada. “Does the Substitution of a Resident for a Flight Nurse Alter Scene Time?” The Journal of Emergency Medicine vol. 13 no. 2 (1995): 151-153, http://www.ncbi.nlm.nih.gov/pubmed/7775784. Jacobs, L.M., R.J. Schwartz, B.B. Jacobs, D. Gonsalves, and S.G. Gabram. “A Three-Year Report of the Medical Helicopter Transportation System of Connecticut.” Connecticut Medicine vol. 53 no. 12 (1989): 703-710. Johnson, R. and R.E. Falcone. “Air Medical Response for Illness Revisited.” Air Medical Journal vol. 14 no. 1 (1995): 11-14, http://www.ncbi.nlm.nih.gov/pubmed/10140972. Kerr, W.A., T.J. Kerns, and R.A. Bissell. “Differences in Mortality Rates among Trauma Patients Transported by Helicopter and Ambulance in Maryland.” Prehospital and Disaster Medicine vol. 14 no. 3 (1999): 159-164. King, D.R., M.P. Ogilvie, M.T. Pereira Bruno, Y. Chang, R.J. Manning, J.A. Conner, C.I. Schulman, M.G. McKenney, and K.G. Proctor. “Heart Rate Variability as a Triage Tool in Patients with Trauma during Prehospital Helicopter Transport.” Journal of Trauma vol. 67 no. 3 (2009): 436-440. Klauber, M.R., L.F. Marshall, B.M. Toole, S.L. Knowlton, and S.A. Bowers. “Cause of Decline in Head-Injury Mortality Rate in San Diego County, California.” Journal of Neurosurgery vol. 62 no. 4 (1985): 528-531, http://www.ncbi.nlm.nih.gov/pubmed/3973722. Koury, S.I., L. Moorer, C.K. Stone, J.S. Stapczynski, and S.H. Thomas. 
“Air Vs Ground Transport and Outcome in Trauma Patients Requiring Urgent Operative Interventions.” Prehospital Emergency Care vol. 2 no. 4 (1998): 289-292, http://www.ncbi.nlm.nih.gov/pubmed/9799016. Lerner, E.B., A.J. Billittier, J.M. Dorn, and Y.W. Wu. “Is Total Out-of-Hospital Time a Significant Predictor of Trauma Patient Mortality?” Academic Emergency Medicine vol. 10 no. 9 (2003): 949-954. Mango, N. and E. Garthe. “Statewide Tracking of Crash Victims’ Medical System Utilization and Outcomes.” Journal of Trauma–Injury, Infection and Critical Care vol. 62 no. 2 (2007): 436-460. Mann, N.C., K.A. Pinkney, D.D. Price, and D. Rowland. “Injury Mortality Following the Loss of Air Medical Support for Rural Interhospital Transport.” Academic Emergency Medicine vol. 9 no. 7 (2002): 694, http://proquest.umi.com/pqdweb?did=140733811&Fmt=7&clientId=20485&RQT=309&VName=PQD. McCowan, C.L., E.R. Swanson, F. Thomas, and D.L. Handrahan. “Outcomes of Pediatric Trauma Patients Transported from Rural and Urban Scenes.” Air Medical Journal vol. 27 no. 2 (2008): 78-83. McCowan, C.L., E.R. Swanson, F. Thomas, and S. Hartsell. “Scene Transport of Pediatric Patients Injured at Winter Resorts.” Prehospital Emergency Care vol. 10 no. 1 (2006): 35, http://proquest.umi.com/pqdweb?did=1107232921&Fmt=7&clientId=20485&RQT=309&VName=PQD. McCowan, C.L., F. Thomas, E.R. Swanson, S. Hartsell, J. Cortez, S. Day, and D.L. Handrahan. “Transport of Winter Resort Injuries to Regional Trauma Centers.” Air Medical Journal vol. 25 no. 1 (2006): 26-34. Moront, M.L., C.S. Gotschall, and M.R. Eichelberger. “Helicopter Transport of Injured Children: System Effectiveness and Triage Criteria.” Journal of Pediatric Surgery vol. 31 no. 8 (1996): 1,183-1,186. Moylan, J.A., K.T. Fitzpatrick, A.J. Beyer, and G.S. Georgiade. “Factors Improving Survival in Multisystem Trauma Patients.” Annals of Surgery vol. 207 no. 6 (1988): 679-685. Murphy, M.S., S.H. Thomas, P. Borczuk, and S.K. Wedel. 
“Reduced Emergency Department Stabilization Time before Cranial Computed Tomography in Patients Undergoing Air Medical Transport.” Air Medical Journal vol. 16 no. 3 (1997): 73-75, http://www.sciencedirect.com/science/article/B75B6-4CC83H0-N/2/440a7d7b5c72c40fb042d45a5d73e738. Norton, R., E. Wortman, L. Eastes, M. Daya, J. Hedges, and J. Hoyt. “Appropriate Helicopter Transport of Urban Trauma Patients.” The Journal of Trauma vol. 41 no. 5 (1996): 886-891. O’Malley, R.J. and M. Watson-Hopkins. “Monitoring the Appropriateness of Air Medical Transports.” Air Medical Journal vol. 13 no. 8 (1994): 323-325. Owen, J.L., R.T. Phillips, C. Conaway, and D. Mullarkey. “One Year’s Trauma Mortality Experience at Brooke Army Medical Center: Is Aeromedical Transportation of Trauma Patients Necessary?” Military Medicine vol. 164 no. 5 (1999): 361-365. Pettett, G., G.B. Merenstein, F.C. Battaglia, L.J. Butterfield, and R. Efird. “An Analysis of Air Transport Results in the Sick Newborn Infant: Part I. The Transport Team.” Pediatrics vol. 55 no. 6 (1975): 774-782, http://pediatrics.aappublications.org/cgi/content/abstract/55/6/774. Poste, J.C., D.P. Davis, M. Ochs, G.M. Vilke, E.M. Castillo, J. Stern, and D.B. Hoyt. “Air Medical Transport of Severely Head-Injured Patients Undergoing Paramedic Rapid Sequence Intubation.” Air Medical Journal vol. 23 no. 4 (2004): 36-40, http://www.ncbi.nlm.nih.gov/pubmed/15224081. Purtill, M., K. Benedict, T. Hernandez-Boussard, S.I. Brundage, K. Kritayakirana, J.P. Sherck, A. Garland, and D.A. Spain. “Validation of a Prehospital Trauma Triage Tool: A 10-Year Perspective.” Journal of Trauma vol. 65 no. 6 (2008): 1,253-1,257. Rhee, K.J., M. Strozeski, R.E. Burney, J.R. Mackenzie, and K. LaGreca-Reibling. “Is the Flight Physician Needed for Helicopter Emergency Medical Services?” Annals of Emergency Medicine vol. 15 no. 2 (1986): 174-177. Rodenberg, H. 
“The Revised Trauma Score: A Means to Evaluate Aeromedical Staffing Patterns.” Aviation, Space, and Environmental Medicine vol. 63 no. 4 (1992): 308-313, http://www.ncbi.nlm.nih.gov/pubmed/1610343. Saffle, J.R., L. Edelman, and S.E. Morris. “Regional Air Transport of Burn Patients: A Case for Telemedicine?” Journal of Trauma vol. 57 no. 1 (2004): 57-64. Safford, S.D., T.Z. Hayward, K.M. Safford, G.S. Georgiade, H.E. Rice, and M.A. Skinner. “A Cost and Outcomes Comparison of a Novel Integrated Pediatric Air and Ground Transportation System.” Journal of the American College of Surgeons vol. 195 no. 6 (2002): 790-795. Savitsky, E. and H. Rodenberg. “Prediction of the Intensity of Patient Care in Prehospital Helicopter Transport: Use of the Revised Trauma Score.” Aviation, Space, and Environmental Medicine vol. 66 no. 1 (1995): 11-14, http://www.ncbi.nlm.nih.gov/pubmed/7695544. Schiller, W.R., R. Knox, H. Zinnecker, M. Jeevanandam, M. Sayre, J. Burke, and D.H. Young. “Effect of Helicopter Transport of Trauma Victims on Survival in an Urban Trauma Center.” The Journal of Trauma vol. 28 no. 8 (1988): 1,127-1,134. Schwartz, R.J., L.M. Jacobs, and R.J. Juda. “A Comparison of Ground Paramedics and Aeromedical Treatment of Severe Blunt Trauma Patients.” Connecticut Medicine vol. 54 no. 12 (1990): 660-662. Snow, N., C. Hull, and J. Severns. “Physician Presence on a Helicopter Emergency Medical Service: Necessary Or Desirable?” Aviation, Space, and Environmental Medicine vol. 57 no. 12 (1986): 1,176-1,178, http://www.ncbi.nlm.nih.gov/pubmed/3800817. Stohler, S.A., R.J. Schwartz, R. Kent Sargent, and L.M. Jacobs. “Quality Assurance in the Connecticut Helicopter Emergency Medical Service.” Journal of Air Medical Transport vol. 10 no. 8 (1991): 7-11. Strong, C., R. Hunt, and J. Sousa. “Interhospital Transfer of Cardiac Patients: Does Air Transport make a Difference?” Air Medical Journal vol. 13 no. 5 (1994): 159. Talving, P., P.G.R. Teixeira, G. Barmparas, J. Dubose, K. Inaba, L. 
Lam, and D. Demetriades. “Helicopter Evacuation of Trauma Victims in Los Angeles: Does it Improve Survival?” World Journal of Surgery vol. 33 no. 11 (2009): 2,469-2,476. Thomas, S.H., T.H. Harrison, W.R. Buras, W. Ahmed, F. Cheema, and S.K. Wedel. “Helicopter Transport and Blunt Trauma Mortality: A Multicenter Trial.” Journal of Trauma vol. 52 no. 1 (2002): 136-145. Thomas, S.H., C.K. Stone, and D. Bryan-Berge. “The Ability to Perform Closed Chest Compressions in Helicopters.” The American Journal of Emergency Medicine vol. 12 no. 3 (1994): 296-295. Tiamfook-Morgan, T., C. Kociszewski, C. Browne, D. Barclay, S. Wedel, and S.H. Thomas. “Helicopter Scene Response: Regional Variation in Compliance with Air Medical Triage Guidelines.” Prehospital Emergency Care vol. 12 no. 4 (2008): 443, http://proquest.umi.com/pqdweb?did=1594822161&Fmt=7&clientId=20485 &RQT=309&VName=PQD. Urdaneta, L.F., M.K. Sandberg, A.E. Cram, T. Vargish, P.R. Jochimsen, D.H. Scott, and T.J. Blommers. “Evaluation of an Emergency Air Transport Service as a Component of a Rural EMS System.” The American Surgeon vol. 50 no. 4 (1984): 183-188. Urdaneta, L.F., B.K. Miller, B.J. Ringenberg, A.E. Cram, and D.H. Scott. “Role of an Emergency Helicopter Transport Service in Rural Trauma.” Archives of Surgery vol. 122 no. 9 (1987): 992-996. Williams, K.A., R. Aghababian, and M. Shaughnessy. “Statewide Helicopter Utilization Review: The Massachusetts Experience.” The Journal of Air Medical Transport vol. 9 no. 9 (1990): 14-16, 18-21, 23, http://www.ncbi.nlm.nih/gov/pubmed/10106233. Wirtz, M.H., C.G. Cayten, D.A. Kohrs, R. Atwater, and E.A. Larsen. “Paramedic Versus Nurse Crews in the Helicopter Transport of Trauma Patients.” Air Medical Journal vol. 21 no. 1 (2002): 17-21. Wuerz, R., J. Taylor, and J.S. Smith. “Accuracy of Trauma Triage in Patients Transported by Helicopter.” Air Medical Journal vol. 15 no. 4 (1996): 168-170.
|
Changes in the air ambulance industry's size and structure have led to differences of opinion about the implications for air ambulance use, safety, and services. Some industry stakeholders believe that greater state regulation would be good for consumers. While states can regulate the medical aspects of air ambulances, the Airline Deregulation Act (ADA) preempts states from economic regulation--i.e., regulating rates, routes, and services--of air ambulances. Other stakeholders view the industry changes as having been beneficial to consumers and see no need for a regulatory change. Asked to review the U.S. air ambulance industry, GAO examined (1) changes in the industry in the last decade and the implications of these changes on the availability of air ambulances and patient services and (2) the relationship between federal and state oversight and regulation of the industry. GAO analyzed available data about the industry; synthesized empirically based literature on the industry; visited four air ambulance providers with differing views on the industry changes; and interviewed federal and industry officials. From 1999 through 2008, the number of patients transported by helicopter air ambulance increased from just over 200,000 to over 270,000, or by about 35 percent, and the number of dedicated air ambulance helicopters increased from 360 to 677, or by about 88 percent. During the same period, the structure of the industry changed from a preponderance of providers affiliated with a specific hospital to a fairly even split between hospital-based and independent providers, often located outside hospitals, in suburban or rural communities. Perspectives on the implications of these changes vary. Supporters of the existing regulatory framework say that the growth in the number of helicopters provides, among other things, flexibility to perform aircraft maintenance on some helicopters while keeping others available to respond as needed. 
Proponents of a change in the regulatory framework maintain that the growth in helicopters has led to medically unnecessary flights. These stakeholders assert that high fixed costs create economic pressure to fly in unsafe weather and use less costly small helicopters that limit some patient services. GAO found few data that support either perspective. Court cases and advisory opinions from the Department of Transportation (DOT) have helped to clarify the relationship between federal and state oversight and regulation of the air ambulance industry, but DOT has acknowledged a continuing lack of clarity in some areas. Generally, the federal government has authority and oversight concerning the economic and safety aspects of the industry; states--which are preempted from regulating matters related to prices, routes, and services--have authority over the medical aspects. However, when both economic and medical or safety and medical issues are involved, questions about jurisdiction may arise. To resolve such questions, states have sought DOT's opinion and, in response, DOT has issued eight opinion letters since 1986. Some state officials have expressed concerns, particularly in relation to a DOT opinion letter on Hawaii laws, that the open-ended nature of the opinion could allow any medical regulation to be challenged as an economic regulation and thus be preempted under the ADA. States can continue to seek DOT's opinion on a case-by-case basis, as further questions surface. States can also contract directly with air ambulance providers, which would allow states to control specific services as the customer. GAO is not making recommendations in this report. GAO incorporated comments on a draft of this report from the appropriate federal agencies and key industry and emergency medical services stakeholders.
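The growth percentages cited above follow directly from the reported counts; a minimal arithmetic check (illustrative only, not part of GAO's analysis):

```python
# Illustrative check of the growth figures cited for 1999 through 2008.
def pct_growth(start, end):
    """Percentage growth from a starting count to an ending count."""
    return (end - start) / start * 100

# Reported counts: patients transported and dedicated helicopters.
print(round(pct_growth(200_000, 270_000)))  # 35 (percent)
print(round(pct_growth(360, 677)))          # 88 (percent)
```

The report rounds these to "about 35 percent" and "about 88 percent."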
|
Over the last 15 years, the federal government’s increasing demand for IT has led to a dramatic rise in the number of federal data centers and a corresponding increase in operational costs. According to OMB, the federal government had 432 data centers in 1998 and more than 1,100 in 2009. Operating such a large number of centers is a significant cost to the federal government, including costs for hardware, software, real estate, and cooling. For example, according to the Environmental Protection Agency, the electricity cost to operate federal servers and data centers across the government is about $450 million annually. According to the Department of Energy, data center spaces can consume 100 to 200 times more electricity than a standard office space. According to OMB, reported server utilization rates as low as 5 percent and limited reuse of data centers within or across agencies lend further credence to the need to restructure federal data center operations to improve efficiency and reduce costs. Concerned about the size of the federal data center inventory and the potential to improve the efficiency, performance, and the environmental footprint of federal data center activities, OMB, under the direction of the Federal CIO, established FDCCI in February 2010. This initiative’s four high-level goals are to promote the use of “green IT” by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. As part of FDCCI, OMB required the 24 agencies to identify a senior, dedicated data center consolidation program manager to lead their agency’s consolidation efforts. In addition, agencies were required to submit an asset inventory baseline and other documents that would result in a plan for consolidating their data centers.
The asset inventory baseline was to contain detailed information on each data center and identify the consolidation approach to be taken for each one. It would serve as the foundation for developing the final data center consolidation plan. The data center consolidation plan would serve as a technical road map and approach for achieving the targets for infrastructure utilization, energy efficiency, and cost efficiency and was to be incorporated into the agency’s fiscal year 2012 budget. In October 2010, OMB reported that all of the agencies had submitted an inventory and plan. In addition, in a series of memoranda, OMB described plans to monitor agencies’ consolidation activities on an ongoing basis. Starting in fiscal year 2011, OMB required agencies to provide an updated data center asset inventory at the end of every third quarter and an updated consolidation plan (including any missing elements) at the end of every fourth quarter. Further, starting in fiscal year 2012, OMB required agencies to provide a consolidation progress report at the end of every quarter. While OMB is primarily responsible for FDCCI, the agency designated two agency CIOs to be executive sponsors to lead the effort within the Federal CIO Council, the principal interagency forum to improve IT-related practices across the federal government. In addition, OMB identified two additional organizations to assist in managing and overseeing FDCCI: The GSA FDCCI Program Management Office is to support OMB in the planning, execution, management, and communications for FDCCI. The Data Center Consolidation Task Force is composed of the data center consolidation program managers from each agency. According to its charter, the Task Force is critical to supporting collaboration across the FDCCI agencies, including identifying and disseminating key pieces of information, solutions, and processes that will help agencies in their consolidation efforts.
“…a data center is…a closet, room, floor or building for the storage, management, and dissemination of data and information and computer systems and associated components, such as database, application, and storage systems and data stores [excluding facilities exclusively devoted to communications and network equipment (e.g., telephone exchanges and telecommunications rooms)]. A data center generally includes redundant or backup power supplies, redundant data communications connections, environmental controls…and special security devices housed in leased, owned, collocated, or stand-alone facilities.” Under the first definition, OMB identified 2,094 data centers in July 2010. Using the new definition from October 2011, OMB estimated that there were a total of 3,133 federal data centers in December 2011. OMB’s goal was to consolidate approximately 40 percent of these centers for a savings of approximately $3 billion by the end of 2015 (OMB, Implementation Guidance for the Federal Data Center Consolidation Initiative, Washington, D.C.: Mar. 19, 2012). The number changes as agencies identify new centers, but agencies are only required to provide updated inventories once a year, by the end of June. In March 2012, OMB launched the PortfolioStat initiative, which requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with their missions and business functions. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT portfolio management process, making decisions on eliminating duplication, and moving to shared solutions in order to maximize the return on IT investments across the portfolio.
To support this initiative, agencies were required to take several actions, including designating a lead for PortfolioStat by April 9, 2012, holding their first PortfolioStat session by July 31, 2012, and submitting a final plan to consolidate commodity IT by August 31, 2012. In September 2012, the Federal CIO wrote in an e-mail to agencies that OMB was planning to integrate FDCCI with the PortfolioStat initiative to allow agencies to focus on an enterprisewide approach to addressing all commodity IT, including data centers, in an integrated, comprehensive plan. The e-mail stated that agencies should continue to focus on optimizing those data centers that are essential to delivering taxpayer services, while continuing to close those that are duplicative. In addition, the e-mail directed agencies to delay their October 1, 2012, submissions of updated consolidation plans until further guidance could be provided. However, agencies were still to report quarterly updates on their data center closures. Going forward, the Federal CIO wrote that OMB plans to require agencies to submit and publish updated consolidation plans that reflect these new points of emphasis and integrate data center consolidation with enterprisewide plans to reduce commodity IT and decrease duplicative applications as part of overall portfolio management. More recently, in March 2013, OMB issued a memorandum documenting the integration of FDCCI with PortfolioStat. Among other things, the memorandum discusses OMB’s efforts to further the PortfolioStat initiative by incorporating several changes, such as consolidating previously collected IT-related plans, reports, and data submissions. The memorandum also establishes new agency reporting requirements and related time frames. Specifically, agencies are no longer required to submit the data center consolidation plans previously required under FDCCI.
Rather, agencies are to submit information to OMB via three primary means—an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. Agencies’ draft versions of their strategic plans and enterprise road maps are due to OMB in May 2013, as are their first integrated data collections. The integrated data collections are to be updated quarterly beginning in August 2013, and the strategic plans and road maps are to be updated after Congress receives the President’s budget for fiscal year 2015. Agencies are still required to update their data center inventories by the end of June 2013 and to report quarterly on consolidation progress. The memorandum and its implications for FDCCI are discussed in more detail later in this report. We have previously reported on OMB’s efforts to consolidate federal data centers. In March 2011, we identified data center consolidation as one of the 81 areas within the federal government with the opportunity to reduce potential duplication, overlap, and fragmentation. In this regard, we reported on the status of FDCCI and noted that data center consolidation made sense economically and was a way to achieve more efficient IT operations, but that challenges existed. For example, agencies reported facing challenges in ensuring the accuracy of their inventories and plans, providing upfront funding for the consolidation effort before any cost savings accrued, and overcoming cultural resistance to major organizational changes, among other things. In July 2011, we issued a report on the status of FDCCI and found that only 1 of the 24 agencies had submitted a complete inventory and no agency had submitted complete plans. We also found that OMB did not require agencies to document the steps they had taken, if any, to verify the inventory data. We concluded that until these inventories and plans were complete, agencies would not be able to implement their consolidation activities and realize expected cost savings.
Moreover, without an understanding of the validity of agencies’ consolidation data, OMB could not be assured that agencies were providing a sound baseline for estimating consolidation savings and measuring progress against those goals. Accordingly, we made several recommendations to OMB, including that the Federal CIO require agencies, when updating their data center inventory, to state what actions were taken to verify the information in the inventory, identify any associated limitations on the data, and complete the missing elements in their inventories and consolidation plans. OMB generally agreed with our report (GAO-11-565) and has since taken actions to address our recommendations. For example, in July 2011, OMB required agency CIOs to submit a letter that identified steps taken to verify their data center inventory information and attested to the completeness of their consolidation plan. In addition, in March 2012, OMB required that all agencies, by the end of the fourth quarter of every fiscal year, complete all elements missing from their consolidation plans. In July 2012, we reported that most agencies’ inventories and plans remained incomplete and that OMB had not publicly posted its revised guidance. Notwithstanding these weaknesses, we found that 19 agencies reported anticipating about $2.4 billion in cost savings between 2011 and 2015. We also reported that none of five selected agencies had a master program schedule or cost-benefit analysis that was fully consistent with best practices. To assist agencies with their data center consolidation efforts, OMB had sponsored the development of a FDCCI total cost of ownership model that was intended to help agencies refine their estimated costs for consolidation; however, agencies were not required to use the cost model as part of their cost estimating efforts. Accordingly, we reiterated our prior recommendation that agencies complete missing plan and inventory elements and made new recommendations to OMB to publicly post guidance updates on the FDCCI website and to require agencies to use its cost model.
OMB generally agreed with our recommendations and has since taken steps to address them. More specifically, OMB posted its 2012 guidance for updating data center inventories and plans, as well as guidance for reporting consolidation progress, to the FDCCI public website. Further, the website has been updated to provide prior guidance documents and OMB memoranda. In addition, OMB’s 2012 consolidation plan guidance requires agencies to use the cost model as they develop their 2014 budget request. The 24 agencies have collectively made progress towards OMB’s data center consolidation goal to close 40 percent, or approximately 1,253 of the 3,133 data centers, by the end of 2015. To track their progress, OMB requires agencies to report quarterly on their completed and planned performance against that goal via an online portal. After the data are reviewed for quality and security concerns, the GSA FDCCI Program Management Office makes the performance information available on the federal website dedicated to providing the public with access to datasets developed by federal agencies, http://data.gov. We have previously reported that oversight and governance of major IT initiatives help to ensure that the initiatives meet their objectives and performance goals. When an initiative is governed by multiple entities, the roles and responsibilities of those entities should be clearly defined and documented, including the responsibilities for coordination among those entities. We have further reported, and OMB requires, that an executive-level body be responsible for overseeing major IT initiatives. Among other things, we have reported that this body should have documented policies and procedures for management oversight of the initiative, regularly track progress against established performance goals, and take corrective actions as needed.
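The closure goal cited above is simple arithmetic on the inventory count; a minimal check (illustrative only, not part of the report):

```python
# Illustrative arithmetic: 40 percent of the 3,133 data centers
# identified as of December 2011, OMB's closure goal through 2015.
total_centers = 3_133
closure_goal = round(total_centers * 0.40)
print(closure_goal)  # 1253, matching the "approximately 1,253" cited
```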
As the FDCCI executive-level body, OMB has put in place a governance hierarchy that is responsible for FDCCI oversight activities; while most of these activities are being performed, several weaknesses exist. The weaknesses in oversight are due, in part, to OMB not ensuring that assigned responsibilities are being executed. Improved oversight could better position OMB to assess progress against its cost savings goal and minimize agencies’ risk of not realizing anticipated cost savings. OMB’s March 2013 memorandum made significant changes to FDCCI; however, the impact on several key FDCCI oversight responsibilities is not addressed. Oversight and governance of FDCCI is the responsibility of several organizations—the Task Force, the GSA FDCCI Program Management Office, and OMB. Roles and responsibilities for these organizations are documented in the Task Force charter and OMB memoranda, while others are described in OMB’s January 2013 quarterly report to Congress or have been communicated by agency officials. For example, through its charter, the Task Force is given responsibility for assisting agencies with the development of their consolidation plans and assisting OMB with launching an electronic public dashboard for tracking the progress of agencies’ consolidation efforts. OMB memoranda assign the GSA FDCCI Program Management Office the responsibility of collecting agencies’ data center inventories in June and consolidation plans in September of each year. OMB memoranda also establish OMB’s responsibility for providing executive-level oversight and approving agencies’ data center consolidation plans. Other responsibilities were not documented in official policy, but were shared with us by agency staff and confirmed in OMB’s January 2013 quarterly report to Congress. 
For example, agency staff described additional GSA FDCCI Program Management Office responsibilities, including analyzing agencies’ inventories and plans and providing OMB with quarterly and ad-hoc reports on consolidation progress. In addition, the Task Force Chairperson stated that the Task Force has been responsible for developing and administering the peer review process for agencies’ consolidation plans. See table 1 for a listing of the FDCCI oversight and governance entities and their key responsibilities. The Task Force has been assigned and has executed a wide range of FDCCI responsibilities, many of which are related to assisting agencies in their consolidation efforts. For example, the Task Force holds monthly meetings to, among other things, communicate and coordinate consolidation best practices and to identify policy and implementation issues that could negatively impact the ability of agencies to meet their goals. Further, the Task Force has assisted agencies with the development of their consolidation plans by discussing lessons learned during its monthly meetings and disseminating new OMB guidance. In addition, the Task Force worked with the GSA FDCCI Program Management Office to develop a standard total cost of ownership model intended to help agencies in the development of cost savings figures in their consolidation plans, among other things, and has also assisted with its ongoing management by establishing a change control process. Lastly, the Task Force worked with GSA and OMB to launch an electronic governmentwide data center marketplace that is intended to match agencies that have extra data center capacity with agencies with increasing demand, thereby improving the utilization of existing facilities. However, the Task Force has not provided oversight of the agency consolidation peer review process. 
According to officials, the purpose of the peer review process was for agencies to get feedback on their consolidation plans and potential improvement suggestions from a partner agency with a data center environment of similar size and complexity. While the Task Force documented the agency pairings for 2011 and 2012 reviews, the Task Force Chairperson stated that they did not perform checks to ensure that all agencies had exchanged plans. As a result, the GSA FDCCI Program Manager acknowledged that there may have been cases where agencies did not exchange their plans, but was not completely sure because this had not been tracked. In addition, the Task Force did not provide agencies with guidance for executing their peer reviews, including information regarding the specific aspects of agency plans to be reviewed and the process for providing feedback. As a result, the peer review process did not ensure that significant weaknesses in agencies’ plans were identified. As previously mentioned, in July 2012, we reported that all of the agencies’ plans were incomplete except for one. In addition, we noted that three agencies had submitted their June 2011 inventory updates, a required component of consolidation documentation, in an incorrect format—an outdated template. At the time, agency officials told us that they did not realize they were relying on an outdated inventory template, an oversight that might have been identified through a robust peer review process. Several actions have been taken to address these communication issues. Specifically, OMB has posted updated inventory and plan guidance on the FDCCI website and committed to posting future guidance in a similar manner, and officials from GSA’s FDCCI Program Management Office stated that inventory data are now being collected via an electronic portal to help prevent format issues. 
However, well-documented peer review guidance and improved Task Force oversight of this process can help ensure that similar situations are avoided in the future. According to OMB staff from the Office of E-Government and Information Technology, guidance for the peer review process has not been documented because the expectation has been that agencies, as part of their review of another agency’s plan, would determine the most important areas of the plan on which to provide feedback. Until OMB ensures that the peer review process is documented, agencies will lack the necessary guidelines for carrying out their peer reviews, and the likelihood that errors or missing elements in agencies’ inventories and consolidation plans will go undetected is increased. In its supporting role, the GSA FDCCI Program Management Office has been assigned and has performed a broad set of responsibilities, including many that are related to assisting OMB in managing FDCCI. For example, GSA has collected responses to OMB-mandated document deliveries, including agencies’ consolidation inventories and plans, on an annual basis. Further, GSA has collected data related to FDCCI data center closure updates, disseminated the information publicly on the consolidation progress dashboard on http://data.gov, and provided ad hoc and quarterly updates to OMB regarding these data. GSA has also maintained and updated other FDCCI-related online portals, such as http://CIO.gov. In addition, GSA worked with the Task Force to develop the total cost of ownership model, assists with the ongoing management of the model, and provides ongoing technical assistance to agencies regarding the model, as well as inventory and plan requirements. Lastly, GSA worked with the Task Force and OMB to launch an electronic governmentwide marketplace for data center availability.
However, the GSA FDCCI Program Management Office has not executed its responsibilities related to analyzing agencies’ inventories and plans and reviewing these documents for errors. In July 2012, we reported on agencies’ progress toward completing their inventories and plans and found that only three agencies had submitted a complete inventory and only one agency had submitted a complete plan; most agencies did not fully report cost savings information, and eight agencies did not include any cost savings information at all. In addition, we noted that three agencies had submitted their inventory using an outdated template. According to OMB’s January 2013 quarterly report to Congress, GSA is responsible for analyzing agencies’ inventories and plans, but this report does not provide any information on specific analysis requirements. In contrast, officials from GSA’s FDCCI Program Management Office stated that their office does not do any significant validation of agencies’ consolidation plans, but typically checks to see if any major sections (such as cost savings) are missing. The lack of cost savings information is particularly important because OMB has not required agencies to submit updated plans since September 2011 and, as previously noted, initiativewide cost savings have not been determined—a shortcoming that could potentially be addressed if agencies had submitted complete plans that addressed cost savings realized, as required. A mechanism to help ensure that GSA’s review requirements are fully executed could provide OMB with reasonable assurance that any gaps between agency plans and OMB’s requirements are identified. After establishing FDCCI in 2010, OMB has taken several actions to manage the initiative and to facilitate and oversee agencies’ consolidation progress. As the FDCCI executive-level body, OMB is responsible for managing FDCCI and ensuring that roles and responsibilities are fully documented and carried out as intended.
For example, OMB issued FDCCI policies and guidance in a series of memoranda that, among other things, required agencies to provide an updated data center asset inventory at the end of every third quarter and an updated consolidation plan at the end of every fourth quarter. In addition, OMB has put in place mechanisms to track and report on progress against one of its key performance goals, the consolidation of approximately 40 percent of the total data centers by the end of 2015. In this regard, OMB launched a publicly available electronic dashboard to track and report on agencies’ consolidation progress and, starting in fiscal year 2012, required agencies to report quarterly via an online portal on their completed and planned data center consolidation efforts. Lastly, OMB worked with the Task Force and GSA to launch an electronic governmentwide marketplace for data center availability. However, although OMB is the approval authority for agencies’ consolidation plans, it has not approved agencies’ submissions on the basis of their completeness. In an October 2010 memorandum, OMB stated that its approval of agencies’ consolidation plans was in progress and would be completed by December 2010. However, OMB did not issue a subsequent memorandum indicating that it had approved agencies’ plans, or an updated time frame for completing its review. This is important because, in July 2011 and July 2012, we reported that agencies’ consolidation plans had significant weaknesses and that nearly all were incomplete. Staff from OMB’s Office of E-Government and Information Technology have since stated that OMB is not responsible for ensuring that agencies’ plans are complete; rather, the agencies are solely responsible for this effort. While these staff have also acknowledged that all FDCCI roles and responsibilities have not been formally documented, OMB’s October 2010 FDCCI memorandum documents OMB’s responsibility for approving agencies’ consolidation plans.
Further, recognizing OMB’s role in providing executive-level oversight of the initiative, we have previously reported that OMB is responsible for ensuring that agencies submit complete plans. Until OMB reviews and approves agencies’ consolidation plans on the basis of their completeness, OMB and the FDCCI agencies may remain unaware of any gaps between these plans and OMB’s requirements, furthering the risk that agencies will move ahead with consolidation efforts that do not fully support OMB’s anticipated cost savings goal.

Additionally, OMB has not reported on agencies’ progress against its key performance goal of achieving $3 billion in cost savings by the end of 2015. Although the 2012 Consolidated Appropriations Act included a provision directing OMB to submit quarterly progress reports to the Senate and House Appropriations Committees that identify savings achieved through governmentwide IT reform efforts, OMB has not reported on cost savings realized for FDCCI. Instead, the agency’s quarterly reports have only described planned FDCCI-related savings and stated that future reports will identify savings realized. As of the January 2013 report, no such savings have been reported. As previously mentioned, OMB has not yet started to track agencies’ progress against its cost savings goal because the agency is working to identify a consistent and repeatable method for tracking cost savings. OMB staff stated that they will begin reporting on cost savings after they determine the appropriate method for tracking progress against this goal, but did not know when this would occur. Until OMB fulfills its responsibility to track and report on consolidation cost savings, the agency cannot begin to measure progress against a key FDCCI performance goal, and stakeholders in the federal consolidation effort may not receive information critical for monitoring the progress of a key government IT reform effort.
OMB’s March 2013 memorandum integrating FDCCI with the PortfolioStat initiative documents several oversight responsibilities related to data center consolidation under the new combined initiative. As previously mentioned, the memorandum describes OMB’s responsibilities for collecting agencies’ information resources management strategic plans, enterprise road maps, and integrated data collections, as well as the Task Force’s responsibility for developing data center performance metrics for energy, facility, and labor, among other things. The memorandum also states that GSA is still responsible for collecting agencies’ data center inventories in June 2013. However, several other important oversight responsibilities related to data center consolidation are not addressed. For example, with the elimination of the requirement to submit separate data center consolidation plans under the new combined initiative, the memorandum does not discuss whether the Task Force or the GSA Program Management Office will retain their oversight roles for reviewing agencies’ documentation. In addition, while the memorandum discusses OMB’s responsibility for reviewing agencies’ draft strategic plans, it does not discuss the responsibility for approving these plans. In the absence of defined oversight assignments and responsibilities, it cannot be determined how OMB will have assurance that agencies’ plans meet the revised program requirements and, moving forward, whether these plans support the goals of the combined initiative.

More than 3 years into FDCCI, agencies have made progress in their efforts to close data centers. However, many key aspects of the integration of FDCCI and PortfolioStat, including new data center consolidation and cost savings goals, have not yet been defined.
Further compounding this lack of clarity, total cost savings to date from data center consolidation efforts have not been determined, creating uncertainty as to whether OMB will be able to meet its original cost savings goal of $3 billion by the end of 2015. Additionally, even though best practices promote the importance of establishing comprehensive performance measures, OMB is not reporting on a key component of consolidation progress, namely the size of the facilities being closed, and current agency consolidation progress indicates that additional time will be needed beyond the original 2015 target to realize anticipated cost savings. Without tracking and reporting on key performance measures, notably cost savings, and without additional time for agencies to achieve planned savings, OMB will be challenged in ensuring that the initiative, under this new direction, is meeting its established objectives.

Recognizing the importance of effective oversight of major IT initiatives, OMB directed that three oversight organizations—the Task Force, the GSA FDCCI Program Management Office, and OMB itself—be responsible for federal data center consolidation oversight activities. Within this oversight structure, these organizations have established and fulfilled responsibilities designed to better ensure that federal data center consolidation meets its planned goals, including facilitating collaboration among agencies and developing tools to assist agencies in their consolidation efforts. However, other key oversight activities have not been performed. Most notably, the lack of formal guidance for consolidation plan peer reviews and approval increases the risk that missing elements will continue to go undetected and that agencies’ efforts will not fully support OMB’s goals.
Further, while OMB has put in place initiatives to track consolidation progress, consolidation inventories and plans are not being reviewed for errors, and cost savings are not being tracked or reported. The collective importance of these activities to federal data center consolidation success reinforces the need for oversight responsibilities to be fulfilled in accordance with established requirements.

To better ensure that FDCCI achieves expected cost savings and to improve executive-level oversight of the initiative, we are making three recommendations to OMB. Specifically, we recommend that the Director of OMB direct the Federal CIO to track and annually report on key data center consolidation performance measures, such as the size of data centers being closed and cost savings to date; extend the time frame for achieving cost savings related to data center consolidation beyond the current 2015 horizon, to allow time to meet the initiative’s planned cost savings goal; and establish a mechanism to ensure that the established responsibilities of designated data center consolidation oversight organizations are fully executed, including responsibility for the documentation and oversight of the peer review process, the review of agencies’ updated consolidation inventories and plans, and approval of updated consolidation plans.

We received comments on a draft of our report from OMB and GSA. In written comments, the Federal CIO stated that the agency concurred with the first and third recommendations. Regarding the second recommendation, OMB neither agreed nor disagreed; however, the Federal CIO stated that, as the FDCCI and PortfolioStat initiatives proceed and continue to generate savings, OMB will consider whether updates to the current time frame are appropriate. OMB’s written comments are provided in appendix II.
GSA provided technical comments in which it stated that it disagreed with our finding that the agency has not executed its responsibilities related to analyzing agencies’ inventories and plans. Specifically, GSA stated that its FDCCI Program Management Office is not set up to perform audit or in-depth review activities, that GSA’s review requirements are not specifically defined by OMB, and that accountability for the accuracy and completeness of inventories and plans lies with agency CIOs. We agree that agencies have a responsibility for ensuring the completeness of their documentation, and we note in the report that agency CIOs are required to attest to the completeness of their consolidation plans. However, as also mentioned in this report, OMB documentation specifically states that the GSA FDCCI Program Management Office is responsible for analyzing agencies’ inventories and plans. In addition, the report includes information from GSA officials about their efforts to check these documents for missing sections. Based on these facts, we stand by our assessment that GSA has not fully executed its oversight responsibilities.

Our draft report provided to OMB for comment included a recommendation that OMB provide the FDCCI agencies with guidance on how the initiative was to be integrated with PortfolioStat, including direction on reporting requirements and the measurement of key performance indicators. OMB’s March 2013 memorandum provided guidance on how FDCCI will be integrated with PortfolioStat. Among other things, the memorandum included new reporting requirements and provided initial direction on the measurement of key performance indicators. As a result of OMB’s action, we have removed this recommendation from our final report. We also modified the language of our other recommendations as appropriate, taking into consideration OMB’s memorandum.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the Administrator of the General Services Administration, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objectives were to (1) evaluate agencies’ reported progress against the Office of Management and Budget’s (OMB) planned consolidation and cost savings goals and (2) assess the extent to which the oversight organizations put in place by OMB for the Federal Data Center Consolidation Initiative (FDCCI) are adequately performing oversight of agencies’ efforts to meet these goals. To evaluate agencies’ reported progress, we compared the 24 departments and agencies’ data center facility reductions as reported on http://data.gov with OMB’s original consolidation goals. To assess the reliability of these data, we reviewed related documentation, such as OMB’s reports to Congress on the status of information technology reform efforts, checked for missing data or obvious errors, and interviewed OMB staff from the Office of E-Government and Information Technology regarding actions taken to verify the data. We determined that the data were sufficiently reliable to report on agencies’ progress towards OMB’s consolidation goals. In addition, we interviewed OMB staff about their efforts to track consolidation cost savings.
We also analyzed the agencies’ 2011 consolidation plans to extract estimated cost savings information. To assess the reliability of the data agencies provided in their data center consolidation plans, we relied on actions performed as part of our previous work to assess the reliability of the data, which included reviewing the letters agencies were required to submit attesting to the completeness and reliability of their plans, interviewing agency officials about actions taken to verify their data, and reviewing the past evaluations of agency plans. In the previous audit of FDCCI, we found that the data were sufficiently reliable for reporting on the completeness of agencies’ plans but, in our assessment of the plans, noted that most agencies did not fully report on their cost savings information. For this review, we reviewed the results of our previous data reliability assessment in the context of current objectives, and interviewed agency officials about any additional actions taken to verify the data. We concluded that the data were sufficiently reliable for our purposes, which were to report on agencies’ estimated cost savings information, but that limitations existed related to the completeness of agencies’ cost savings information. As such, we identify the limitations of these data in the finding sections of this report.

To assess the extent to which the oversight organizations put in place by OMB for FDCCI are adequately performing oversight of agency consolidation efforts, we analyzed OMB memoranda, the Data Center Consolidation Task Force Charter, and other related documentation to determine the roles and responsibilities of key oversight organizations, including OMB, the General Services Administration FDCCI Program Management Office, and the Data Center Consolidation Task Force.
We then compared supporting documentation provided by these organizations against their documented roles and responsibilities to determine the extent to which these organizations were overseeing agencies’ consolidation efforts in a manner consistent with their assigned responsibilities. In addition, we interviewed relevant OMB, General Services Administration, and Data Center Consolidation Task Force officials to determine the extent to which their oversight roles and responsibilities were being executed, as well as the extent to which they were providing oversight of agencies’ efforts to meet established consolidation and cost savings goals.

We conducted this performance audit from October 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, individuals making contributions to this report included Dave Hinchman (Assistant Director), Justin Booth, Nancy Glover, and Jonathan Ticehurst.
|
In 2010, as the focal point for information technology management across the government, OMB’s Federal Chief Information Officer launched the Federal Data Center Consolidation Initiative, an effort to consolidate the growing number of federal data centers. In July 2011 and July 2012, GAO evaluated 24 agencies’ progress and reported that nearly all of the agencies had not completed a data center inventory or consolidation plan, and recommended that they do so. As requested, GAO reviewed federal agencies’ continuing efforts to consolidate their data centers. This report (1) evaluates agencies’ reported progress against OMB’s planned consolidation and cost savings goals and (2) assesses the extent to which the oversight organizations put in place by OMB for the Federal Data Center Consolidation Initiative are adequately performing oversight of agencies’ efforts to meet these goals. GAO assessed agencies’ progress against OMB’s goals, analyzed the execution of oversight roles and responsibilities, and interviewed OMB, GSA, and Data Center Consolidation Task Force officials about their efforts to oversee agencies’ consolidation efforts.

The 24 agencies participating in the Federal Data Center Consolidation Initiative (FDCCI) made progress towards the Office of Management and Budget’s (OMB) goal to close 40 percent, or 1,253 of the 3,133 total federal data centers, by the end of 2015, but OMB has not measured agencies’ progress against its other goal of $3 billion in cost savings by the end of 2015. Agencies closed 420 data centers by the end of December 2012 and have plans to close an additional 548 to reach 968 by December 2015, which is 285 closures short of OMB’s goal. OMB has not determined agencies’ progress against its cost savings goal because, according to OMB staff, the agency has not determined a consistent and repeatable method for tracking cost savings. This lack of information makes it uncertain whether the $3 billion in savings is achievable by the end of 2015.
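The closure arithmetic cited above (40 percent of 3,133 data centers as the goal, 420 closures completed plus 548 planned) can be verified directly. The following is an illustrative sketch only; the figures come from the report, and the variable names are our own:

```python
# Verify the FDCCI closure figures cited in the report.
total_data_centers = 3133                          # total federal data centers
goal_closures = round(total_data_centers * 0.40)   # OMB goal: 40 percent -> 1,253

closed_by_dec_2012 = 420    # closures reported through December 2012
planned_additional = 548    # further closures planned by December 2015

projected_total = closed_by_dec_2012 + planned_additional   # 968
shortfall = goal_closures - projected_total                 # closures short of goal

print(goal_closures, projected_total, shortfall)  # prints: 1253 968 285
```

This confirms the report's figure of 285 closures short of the 1,253-closure goal.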
Until OMB begins tracking and reporting on performance measures such as cost savings, it will be limited in its ability to oversee agencies’ progress against key initiative goals. Additionally, extending the horizon for realizing planned cost savings could provide OMB and data center consolidation stakeholders with better information on the benefits of consolidation beyond OMB’s initial goal.

Pursuant to OMB direction, three organizations—the Data Center Consolidation Task Force, the General Services Administration (GSA) Program Management Office, and OMB—are responsible for federal data center consolidation oversight activities; while most activities are being performed, several weaknesses exist. Specifically:

- While the Data Center Consolidation Task Force has established several initiatives to assist agencies in their consolidation efforts, such as holding monthly meetings to facilitate communication among agencies, it has not adequately overseen its peer review process for improving the quality of agencies’ consolidation plans. For example, the Task Force did not provide agencies with guidance for conducting peer reviews and did not provide oversight to ensure that all agencies exchanged plans.
- The GSA Program Management Office has collected agencies’ quarterly data center closure updates and made the information publicly available on an electronic dashboard for tracking consolidation progress, but it has not fully performed other oversight activities, such as conducting analyses of agencies’ inventories and plans.
- OMB has implemented several initiatives to track agencies’ consolidation progress, such as establishing requirements for agencies to update their plans and inventories yearly and to report quarterly on their consolidation progress. However, the agency has not approved the plans on the basis of their completeness or reported on progress against its goal of $3 billion in cost savings.
The weaknesses in oversight of the data center consolidation initiative are due, in part, to OMB not ensuring that assigned responsibilities are being executed. Improved oversight could better position OMB to assess progress against its cost savings goal and minimize agencies’ risk of not realizing anticipated cost savings. GAO is recommending that OMB’s Federal Chief Information Officer track and report on key performance measures, extend the time frame for achieving planned cost savings, and improve the execution of important oversight responsibilities. OMB agreed with two of GAO’s recommendations and plans to evaluate the remaining recommendation related to extending the time frame.
|
In February 1997, we issued our third series of reports on the status of high-risk areas across the government. One report in the series discussed the four long-standing high-risk areas at IRS: (1) tax systems modernization—IRS’ development of the business and management strategies, software acquisition and development capabilities, and technical infrastructure and systems architecture needed to modernize its systems and processes; (2) financial management—IRS’ efforts to properly account for its tax revenues, obligations, and disbursements; (3) accounts receivable—IRS’ initiatives to better understand the composition of its tax debt inventory and to devise effective collection strategies and reliable programs to prevent future delinquencies; and (4) filing fraud—IRS’ efforts to gather sufficient information to determine the effectiveness of its attempts to deter the filing of fraudulent returns.

Our 1997 high-risk report series also designated five new high-risk areas, two of which have government-wide implications and directly affect IRS’ operations. One area is information security—IRS’ initiatives to better protect the confidentiality and accuracy of taxpayer data from unauthorized access and manipulation. The other area is the year 2000 problem—IRS’ plans to protect itself from the operational and financial impacts that could affect tax processing and revenue collection systems if its computer systems cannot accommodate the change of date to the year 2000.

Today, we will briefly discuss the problems IRS faces in these six high-risk areas, the progress IRS has made since our last series of high-risk reports in 1995, and the measures IRS must take to resolve the problems in its high-risk areas. This testimony is based on our prior reports and recent information obtained from IRS.
For years we have chronicled IRS’ struggle to modernize and manage its operations, especially in the high-risk areas, and have made scores of recommendations to improve IRS’ systems, processes, and procedures. It is clear that in order to achieve its stated goals of reducing the volume of paper tax returns, providing better customer service, and improving compliance with the nation’s tax laws, IRS must successfully modernize its systems and operations. To accomplish this modernization, however, IRS needs to develop comprehensive business strategies to ensure that its new and revised processes drive systems development and acquisition. Solving the problems in the high-risk areas is not an insurmountable task, but it requires sustained management commitment, accurate information systems, and reliable performance measures to track IRS’ progress and provide the data necessary to make informed management decisions.

Over the last decade, IRS has been attempting to overhaul its timeworn, paper-intensive approach to tax return processing. At stake is the over $3 billion that IRS has spent or obligated on this modernization since 1986, as well as any additional funds that IRS plans to spend on the modernization. In July 1995, we reported that IRS (1) did not have a comprehensive business strategy to cost-effectively reduce paper tax return filings; (2) had not yet fully developed and put in place the requisite management, software development, and technical infrastructure necessary to successfully implement its ambitious, world-class modernization; and (3) lacked an overall systems architecture, or blueprint, to guide the modernization’s development and evolution. At that time, we made over a dozen recommendations to the IRS Commissioner to address these weaknesses. Pursuant to subsequent congressional direction, we assessed IRS’ actions to correct its management and technical weaknesses.
We reported in June and September 1996 that IRS had initiated many activities to improve its modernization efforts but had not yet fully implemented any of our recommendations. We also suggested to Congress that it consider limiting modernization funding exclusively to cost-effective efforts that (1) support ongoing operations and maintenance; (2) correct IRS’ pervasive management and technical weaknesses; (3) are small, represent low technical risk, and can be delivered quickly; and (4) involve deploying already developed and fully tested systems that have proven business value and are not premature given the lack of a completed architecture. IRS has taken steps to address our recommendations and respond to congressional direction. For example, IRS hired a new Chief Information Officer. It also created an investment review board to select, control, and evaluate its information technology investments. Thus far, the board has reevaluated and terminated several major modernization development projects that were not found to be cost-effective. In addition, IRS provided a report to Congress in November 1996 that set forth IRS’ strategic plan and its schedule for shifting modernization development and deployment to contractors. IRS is also finalizing a comprehensive strategy to maximize electronic filing that is currently scheduled for completion in May 1997. It is also updating its system development life cycle methodology and is working across various IRS organizations to define disciplined processes for software requirements management, quality assurance, configuration management, and project planning and tracking. Additionally, IRS is developing a systems architecture and project sequencing plan for the modernization and intends to provide this to Congress by May 15, 1997. While we recognize IRS’ actions, we remain concerned because much remains to be done to fully implement essential improvements. 
Increasing the use of contractors, for example, will not automatically increase the likelihood of successful modernization because IRS does not have the technical capability needed to manage all of its current contractors. To be successful, IRS must also continue to make a concerted, sustained effort to fully implement our recommendations and respond effectively to the requirements outlined by Congress. It will take both management commitment and technical discipline for IRS to accomplish these tasks.

Our audits of IRS’ financial statements have outlined the substantial improvements needed in IRS’ accounting and reporting in order to comply fully with the requirements of the Chief Financial Officers Act of 1990 (CFO Act). The audits for fiscal years 1992 through 1995 have described IRS’ difficulties in (1) properly accounting for its tax revenues, in total and by reported type of tax; (2) reliably determining the amount of accounts receivable owed for unpaid taxes; (3) regularly reconciling its Fund Balance With Treasury accounts; and (4) either routinely providing support for receipt of the goods and services it purchases or, where supported, accurately recording the purchased item in the proper period. IRS has made progress in addressing problems in these areas and has developed an action plan, with specific timetables and deliverables, to address the issues our financial statement audits have identified. In the administrative accounting area, for example, IRS reported that it has identified substantially all of the reconciling items for its Fund Balance With Treasury accounts, except for certain amounts IRS has deemed not to be cost-beneficial to research further. It also has successfully transferred its payroll processing to the Department of Agriculture’s National Finance Center and has begun designing both a short-term and a long-term strategy to fix the problems that contribute to its nonpayroll expenses being unsupported or reported in the wrong period.
In the revenue accounting area, IRS’ problems are especially affected and complicated by automated data processing systems that were implemented many years ago and thus not designed to support the new financial reporting requirements imposed by the CFO Act. Therefore, IRS has designed an interim solution to capture the detailed support for revenue and accounts receivable until longer-term solutions can be identified and implemented. Some of the longer-term actions include (1) implementing software, hardware, and procedural changes needed to create reliable subsidiary accounts receivable and revenue records that are fully integrated with the general ledger; and (2) implementing software changes that allow the detailed taxes reported to be maintained separately from the results of compliance efforts that would not be valid financial reporting transactions in the masterfile, other related revenue accounting feeder systems, and the general ledger. Over the past 4 years, we have made numerous recommendations to improve IRS’ financial management systems and reporting, and IRS has been working to position itself to have more reliable financial statements for fiscal year 1997 and thereafter. To accomplish this, especially in accounting for revenue and the related accounts receivables, IRS will need to institute long-term solutions involving reprogramming software for IRS’ antiquated systems and developing new systems as required. Follow-through to complete necessary corrective measures is essential if IRS is to ensure that its corrective actions are carried out and effectively solve its financial management problems. Solving these problems is fundamental to providing reliable financial information and ensuring taxpayers that the government can properly account for their federal tax dollars. 
The accuracy of IRS’ financial statements is vital to both IRS and Congress for (1) ensuring adequate accountability for IRS programs; (2) assessing the impact of tax policies; and (3) measuring IRS’ performance and cost effectiveness in carrying out its numerous tax enforcement, customer service, and collection activities.

IRS routinely collects over a trillion dollars annually in taxes, but many taxpayers are unable or unwilling to pay their taxes when due. As a result, IRS estimates that its accounts receivable amounts to tens of billions of dollars. Unfortunately, IRS’ ability to effectively address its accounts receivable problems is seriously hampered by its outdated equipment and processes, incomplete information needed to better target collection efforts, and the absence of a comprehensive strategy and detailed plan to address the systemic nature of the underlying problems. IRS’ collection efforts have also been hampered by the age of the delinquent tax accounts. Because of the outdated equipment and processes used to match tax returns and related information documents, it can take IRS several years to identify potential delinquencies and then initiate collection actions. In addition, according to IRS, the 10-year statutory collection period generally precludes it from writing off uncollectible receivables until that period has expired. As a result, the receivables inventory includes many relatively old accounts that will never be collected because the taxpayers are deceased or the companies defunct. This is not to say, however, that IRS has not been trying to overcome its deficiencies. In the last 2 years, IRS has undertaken initiatives to correct errors in its masterfile records of tax receivables, develop profiles of delinquent taxpayers, and study the effectiveness of various collection techniques.
It has also streamlined its collection process, placed additional emphasis on contacting repeat delinquents, made its collection notices more readable, and targeted compliance-generated delinquencies for earlier intervention. IRS reported that, as a result of taking these actions, its collection employees took in more money than they classified as “currently not collectible” and that the amount of money collected immediately following the revision of its collection notices increased by almost 25 percent over a comparable period in 1995. In addition, IRS reported collecting more in delinquent taxes in fiscal year 1996 than it ever has, almost $30 billion.

Despite these positive results, IRS needs to continue the development of information databases and performance measures to afford its managers the data needed to determine which actions or improvements generate the desired changes in IRS’ programs and operations. Nor should this be viewed as a short-term commitment: it will still take a number of years to identify the root causes of delinquencies and to develop, test, and implement courses of action to deal with the causes. Furthermore, once the analyses and planning are completed, it will still be some time before full results of the new initiatives are realized. Therefore, IRS must take deliberate action to ensure that its problem-solving efforts are on the right track. Specifically, it needs to implement a comprehensive strategy that involves all aspects of IRS’ operations and that sets priorities; accelerates the modernization of outdated equipment and processes; and establishes realistic goals, specific timetables, and a system to measure progress.

When we first identified filing fraud as a high-risk area in February 1995, the amount of filing fraud being detected by IRS was on an upward spiral. Since then, IRS has introduced new controls and expanded existing controls in an attempt to reduce its exposure to filing fraud.
Those controls are directed toward either (1) preventing the filing of fraudulent returns or (2) identifying questionable returns after they have been filed. To deter the filing of fraudulent returns, IRS (1) expanded the number of up-front filters in the electronic filing system designed to screen electronic submissions for selected problems in order to prevent returns with those problems from being filed electronically and (2) strengthened the process for checking the suitability of persons applying to participate in the electronic filing program as return preparers or transmitters by requiring fingerprint and credit checks. To better identify fraudulent returns once they have been filed, IRS placed an increased emphasis in 1995 on validating social security numbers (SSN) on filed paper returns and delayed any related refunds to allow time to do those validations and to check for possible fraud. IRS also revised the computerized formulas it used to score all tax returns as to their fraud potential and upgraded the research capabilities of its fraud detection staff. IRS’ efforts produced some positive results. For example, the number of SSN problems identified by the electronic filing filters quadrupled between 1994 and 1995, and about 350 persons who applied to participate in the electronic filing program for 1995 were rejected because they failed the new fingerprint and credit checks. IRS’ efforts to validate SSNs on paper returns produced over $800 million in reduced refunds or additional taxes. Unfortunately, IRS identified many more SSN problems than it was able to deal with and released about 2 million refunds without resolving the problems. IRS was less successful in identifying fraudulent returns, identifying over 65 percent fewer fraudulent returns in 1996 than during a comparable period in 1995. 
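The up-front filters described above screen electronic submissions before they are accepted, rejecting returns with selected problems such as structurally invalid SSNs. As a rough illustration only (this statement does not describe the actual IRS filter criteria, and the field names and rules below are hypothetical), such a filter might look like:

```python
import re

def ssn_structurally_valid(ssn):
    """Basic structural checks on a Social Security number.

    These rules (no 000/666/9xx area, no 00 group, no 0000 serial)
    are standard SSN formatting constraints -- not the actual IRS
    filter criteria, which are not described in this statement.
    """
    m = re.fullmatch(r"(\d{3})-?(\d{2})-?(\d{4})", ssn)
    if not m:
        return False
    area, group, serial = m.groups()
    if area in ("000", "666") or area.startswith("9"):
        return False
    return group != "00" and serial != "0000"

def screen_submission(submission):
    """Apply up-front filters to a submission (a dict of fields);
    return a list of problems found. An empty list means the
    submission passes the filters and may be filed electronically."""
    problems = []
    for field in ("primary_ssn", "spouse_ssn"):
        ssn = submission.get(field)
        if ssn is not None and not ssn_structurally_valid(ssn):
            problems.append(f"invalid {field}")
    return problems
```

The point of screening at submission time, as the filters above do, is that a rejected return is never filed at all, so no refund can be issued before the problem is resolved.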
IRS believes this decrease is attributable to a 31-percent reduction in its fraud detection staff and the resulting underutilization of its Electronic Fraud Detection System, which enhances the identification of fraudulent returns and lessens the probability of improperly deleting accurate refunds. However, IRS does not have the information it needs to verify that the decline was the result of staff reductions or to determine the extent to which the downward trend may have been affected by changes in the program’s operating and reporting procedures or by a general decline in the incidence of fraud. Given the decrease in fraud detection staff, it is critically important for IRS to (1) optimize the electronic controls that are intended to prevent the filing of fraudulent returns and (2) maximize the effectiveness of available staff. Modernization is the key to achieving these objectives, and electronic filing is the cornerstone of that modernization. One solution, then, is to increase the percentage of returns filed electronically. To achieve this goal, IRS must first identify those groups of taxpayers who offer the greatest opportunity to reduce IRS’ paper-processing workload and operating costs if they were to file electronically. IRS must then develop strategies that focus its resources on eliminating or lessening impediments that inhibit those groups from participating in the program. Malicious attacks on computer systems are an increasing threat to our national welfare. The federal government now relies heavily on interconnected systems to control critical functions which, if compromised, place billions of dollars worth of assets at risk of loss and vast amounts of sensitive data at risk of unauthorized disclosure. Increasing reliance on networked systems and electronic records has elevated our concerns about the possibility of serious disruption to critical federal operations. 
As a result of our recent work at IRS, we believe that the vulnerabilities of IRS’ computer systems may affect the confidentiality and accuracy of taxpayer data and may allow unauthorized access, modification, or destruction of taxpayer information. The overriding problem at IRS is that information security issues are addressed on a reactive basis. IRS does not have a proactive, independent information security group that systematically reviews the adequacy and consistency of security over IRS’ computer operations. In addition, computer security management has not completed a formal risk assessment of its systems to determine system sensitivity and vulnerability. As a result, IRS cannot effectively prevent or detect unauthorized browsing of taxpayer information and cannot ensure that taxpayer data is not being improperly manipulated for personal gain. IRS needs to address its information security weaknesses on a continuing basis. More specifically, IRS needs to impress upon its senior managers the need to conduct regular systematic security reviews and risk assessments of IRS’ computer systems and operations. The weaknesses identified by these reviews and assessments then need to be corrected expeditiously by personnel who have the technical expertise to effectively implement, manage, and monitor the necessary security controls and measures. For the past several decades, computer systems have used two digits to represent the year, such as “97” for 1997, in order to conserve electronic data storage and reduce operating costs. In this format, however, the year 2000 is indistinguishable from the year 1900 because both are represented as “00.” As a result, if not modified, computer systems and applications that use dates or perform date- or time-sensitive calculations may generate incorrect results beyond 1999. For IRS, such a disruption of functions and services could jeopardize all of its tax processing systems and administration. 
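The two-digit year problem described above can be demonstrated in a few lines. In this sketch (the calculation is generic, not drawn from any actual IRS system), an elapsed-time computation over "YY" fields produces nonsense as soon as one date crosses into 2000:

```python
def years_elapsed_2digit(start_yy, end_yy):
    # Two-digit-year arithmetic: "00" is indistinguishable from
    # 1900, so any span that crosses 1999/2000 comes out wrong.
    return end_yy - start_yy

def years_elapsed_4digit(start_year, end_year):
    # The same calculation with four-digit years is unambiguous.
    return end_year - start_year

# A delinquent account aged from 1997 to 2000:
broken = years_elapsed_2digit(97, 0)        # yields -97, nonsense
correct = years_elapsed_4digit(1997, 2000)  # yields 3
```

A system that sorts, ages, or expires records on the two-digit result would treat the year 2000 as earlier than every year since 1900.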
It could effectively halt the processing of tax return and return-related information, the maintenance of taxpayer account information, the assessment and collection of taxes, the recording of obligations and expenditures, and the disbursement of refunds. At the very least, IRS’ core business functions and mission-critical processes are at risk of failure, as are numerous other administrative and management processes. To avoid the crippling effects of a multitude of computer systems simultaneously producing inaccurate and unreliable information, IRS must assign management and oversight responsibility within its senior executive corps, define the potential impact of such a systems failure, and develop appropriate renovation strategies and contingency plans for its critical systems. Modifying IRS’ critical computer systems is a massive undertaking whose success or failure will, in large part, be determined by the quality of IRS’ executive leadership and program management. For years, IRS has struggled to collect the nation’s tax revenue using outdated processes and technology. The result has often been inefficient and ineffective programs and operations that are vulnerable to waste, fraud, abuse, and mismanagement. Of particular concern to us have been IRS’ efforts to modernize its tax systems, manage its administrative and revenue accounting systems, identify and collect taxes owed the government, detect and prevent the filing of fraudulent tax returns, protect the confidentiality of taxpayer information, and prevent the future disruption of tax services due to computer malfunctions. These areas of concern share common characteristics that IRS must address in the very near future. At a minimum, IRS needs an implementation strategy that includes both performing cost-benefit analyses and developing reasonable estimates of the extent, time frames, and resources required to correct its high-risk vulnerabilities. 
IRS also needs to (1) better define, prioritize, implement, and manage new information systems; (2) ensure that its administrative and revenue accounting systems fully comply with government accounting standards; (3) design and implement both administrative and electronic controls to protect taxpayer data from unauthorized access; and (4) develop performance measures that will allow its managers, Congress, and us to track its progress. And, above all, IRS management needs to sustain an agencywide commitment to solving the agency’s high-risk problems. Madam Chairman, this concludes my prepared statement. We will be glad to answer any questions that you or the Members of the Subcommittee may have.
|
GAO discussed the Internal Revenue Service's (IRS) efforts to improve the efficiency and effectiveness of its program areas that GAO has designated as high risk because of their vulnerability to waste, fraud, abuse, and mismanagement. GAO noted that: (1) for years GAO has chronicled IRS' struggle to modernize and manage its operations, especially in the high-risk areas, and has made scores of recommendations to improve IRS' systems, processes, and procedures; (2) it is clear that in order to achieve its stated goals of reducing the volume of paper tax returns, providing better customer service, and improving compliance with the nation's tax laws, IRS must successfully modernize its systems and operations; (3) to accomplish this modernization, however, IRS needs to develop comprehensive business strategies to ensure that its new and revised processes drive systems development and acquisition; (4) solving the problems in the high-risk areas is not an insurmountable task, but it requires sustained management commitment, accurate information systems, and reliable performance measures to track IRS' progress and provide the data necessary to make informed management decisions; and (5) at a minimum, IRS needs an implementation strategy that includes both performing cost-benefit analyses and developing reasonable estimates of the extent, time frames, and resources required to correct its high-risk vulnerabilities.
|
Hundreds of tons of plutonium and highly enriched uranium (HEU) have accumulated worldwide, and inventories of plutonium are expected to continue to grow in years to come as a result of reprocessing or recovering activities. Tracking and accounting for these and other nuclear materials are important in order to (1) ensure that nuclear materials are used only for peaceful purposes; (2) help protect nuclear materials from loss, theft, or other diversion; (3) comply with international treaty obligations; and (4) provide data to policymakers and other government officials. The United States regulates and controls its exports of civilian-use nuclear materials through three mechanisms—agreements for cooperation, export licenses, and subsequent arrangements. Subsequent arrangements refer to the regulatory controls over certain cooperative arrangements for the supply, use, or retransfer of nuclear materials. Certain controls in the agreements for cooperation are designed to assure both the United States and the recipient nation or group of nations that materials transferred between parties will be used for authorized purposes only and will be properly safeguarded. (See app. I for a discussion of U.S. export license processes.) As of November 1994, the United States had 29 agreements for cooperation with other countries. In addition, the United States, as well as many members of the international community, relies on the International Atomic Energy Agency (IAEA) to develop and enforce effective international safeguards—technical measures designed to detect the diversion of significant quantities of nuclear materials from peaceful uses—for nuclear materials of U.S. and non-U.S. origin. The U.S. agreement with IAEA, as well as some of the U.S. agreements for cooperation, requires the United States to maintain a system of accounting and control over source and special nuclear materials. 
In addition, the United States reports data to IAEA on nuclear materials imported by and exported from the United States. DOE’s automated tracking system, the NMMSS, is used to fulfill these accounting, controlling, and reporting obligations for U.S.-supplied international nuclear materials. DOE and the Nuclear Regulatory Commission (NRC) cosponsor the NMMSS, and it is managed and operated by a DOE contractor—Martin Marietta Energy Systems, Incorporated. The NMMSS has been used to account for U.S. imports and exports of nuclear materials since 1977. The NMMSS data base contains data on U.S.-supplied international nuclear materials transactions, foreign contracts, import/export licenses, government-to-government approvals, and other DOE authorizations, such as authorizations to retransfer U.S.-supplied materials between foreign countries. The NMMSS also maintains and provides DOE with information on domestic production and materials management, safeguards, physical accountability, financial and cost accounting, and other information related to nuclear materials. In addition, the NMMSS provides NRC with data on nuclear materials accountability and safeguards for NRC licensees. The United States relies primarily on the NMMSS to track the nuclear materials that it exports to foreign countries. However, this system does not have all of the information needed to track the current location and status of all nuclear materials of U.S. origin that are supplied to foreign countries. The amounts, types, and reliability of the data contained in the NMMSS depend largely on data reported under the international agreements for cooperation, as well as on foreign countries’ and on U.S. and foreign facilities’ willingness to report complete and accurate data. The NMMSS’ international tracking capability is limited primarily because the agreements for cooperation do not require foreign countries to report data on the current locations of U.S.-supplied nuclear materials. 
For example, as we reported in 1982 and 1985, the U.S. agreement for cooperation with the European Atomic Energy Community (EURATOM) does not require most EURATOM countries to inform the United States of retransfers of U.S.-supplied materials from one EURATOM country to another EURATOM country, or to report alterations to U.S.-supplied nuclear materials in most of these countries. In addition, none of the existing agreements for cooperation require foreign countries to report intracountry transfers of U.S.-supplied materials from one facility to another. Thus, the NMMSS may not contain correct and current data on either which EURATOM country has U.S.-supplied nuclear materials or at what specific facilities these materials are located. The NMMSS’ international tracking capability also is limited because the data base does not contain certain data on the current status (i.e., whether the materials are irradiated, unirradiated, fabricated, burned up, or reprocessed) of all U.S. nuclear materials that have been exported to foreign countries, with the exception of Sweden, Australia, and Canada. The NMMSS contains status data about U.S.-supplied nuclear materials in these three countries because the United States performs annual reconciliations with them. The reconciliations compare the NMMSS’ data to the foreign countries’ records. The NMMSS’ data are then adjusted, where necessary, to reflect the current status of U.S.-supplied materials in those countries. However, for foreign countries that do not participate in reconciliations with the United States, the NMMSS contains data only on the export transactions and on transactions requiring U.S. approval (such as retransfers of the nuclear materials) that occurred subsequent to the export, as required by the agreements for cooperation. 
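The annual reconciliations described above compare NMMSS records against a foreign country's own records and adjust the NMMSS where the two differ. A minimal sketch of that comparison logic follows; the material identifiers and quantities are invented for illustration, and the actual NMMSS schema is not described in this report. Note that the sketch also tags each adjusted entry with its source, a distinction the actual NMMSS did not record:

```python
def reconcile(nmmss, country):
    """Compare NMMSS quantities (keyed by material ID) against a
    foreign country's records and return the adjusted inventory.

    Each entry is tagged with its source. The real NMMSS did not
    distinguish normal transactions from reconciliation-added ones,
    which is why the number of entries added during reconciliations
    could not be determined.
    """
    adjusted = {}
    for material_id in sorted(set(nmmss) | set(country)):
        ours = nmmss.get(material_id)
        theirs = country.get(material_id)
        if ours == theirs:
            adjusted[material_id] = {"kg": ours, "source": "nmmss"}
        else:
            # Where records disagree, the country-of-record data
            # governs the adjustment to the NMMSS.
            adjusted[material_id] = {"kg": theirs, "source": "reconciliation"}
    return adjusted

# A previously unreported retransfer appears only in the country's
# records and is added to the inventory during reconciliation:
nmmss = {"HEU-001": 25.0}
country = {"HEU-001": 25.0, "LEU-002": 100.0}
result = reconcile(nmmss, country)
```

For countries that do not participate in reconciliations, no such comparison occurs, so unreported transactions simply never reach the NMMSS.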
The United States has also started an initial nuclear materials reconciliation with Japan, which illustrates the potential for substantial differences between data recorded in the NMMSS and the current status of U.S.-supplied nuclear materials in a foreign country. According to the NMMSS’ data, Japan produced approximately 20.3 metric tons of plutonium from U.S.-supplied nuclear materials between 1978 and 1992. However, Japanese records indicated that Japan produced about 58.7 metric tons of plutonium from U.S. nuclear materials during that period. The DOE official who is performing the reconciliation cited two primary reasons for this difference. First, Japan was required to report to the United States only the amount of plutonium retransferred to other countries for reprocessing; thus, plutonium produced but not sent to other countries for reprocessing was not reported to the United States. Second, the current U.S.-Japanese agreement requires Japan to report certain retransferred-plutonium transactions under a unique quarterly reporting arrangement. The NMMSS was not modified to reflect this unique reporting arrangement and therefore did not contain data on the amount of plutonium that Japan reprocessed from U.S.-supplied nuclear materials after July 17, 1988—the date of the new agreement. A DOE official stated that the NMMSS was recently modified to accept this reporting arrangement, and Martin Marietta has begun entering these data in the system. The reliability of the NMMSS’ data is also contingent on the willingness of foreign countries and U.S. and foreign facilities to report complete and accurate data on nuclear materials imports, exports, and retransfers. Although the NMMSS users whom we interviewed, such as members of the NMMSS Steering Committee, were generally or very satisfied with the accuracy and completeness of information from the NMMSS, DOE occasionally has found instances of incomplete reporting while reconciling nuclear materials transactions. 
For example, in 1990 a reconciliation of the NMMSS’ data with a foreign country’s records identified several transactions, such as retransfers of low-enriched uranium, that had not been reported to the United States. These transactions were subsequently entered into the NMMSS. However, because the NMMSS does not distinguish between normal transactions and those added during the reconciliation process, we could not determine how many other NMMSS entries were added as a result of reconciliations with foreign countries. A DOE official stated that many transactions may be added to the NMMSS during the initial reconciliation with a foreign country, but in later years such entries are infrequent. The extent to which the NMMSS can provide data on nuclear materials is also affected by the accuracy and availability of historical records. We have previously reported on problems in this area. For example, in 1985 we reported numerous errors in the international data contained in the NMMSS. These errors resulted from inaccurate data entries as well as from missing documents of some historical transactions. A DOE official told us that DOE attempted to upgrade the accuracy of the NMMSS’ international data by searching for old records documenting historical transactions. This official stated that the current NMMSS data base contains the best available data on historical transactions, given the limitations of these records. Some NMMSS users also told us that although older NMMSS data are sometimes inaccurate, they are the best data available. Because the NMMSS was an older system, DOE decided to replace and modernize it. However, DOE decided to merely replicate the functions of the current NMMSS, and therefore its limitations will remain. In addition, DOE did not adequately plan the development effort for the new NMMSS. 
For instance, DOE did not identify and define users’ needs or adequately explore design alternatives that would best achieve these needs in the most economic fashion. DOE could have reduced the likelihood that these planning deficiencies would occur by following the software development requirements set forth in its own software management order. Martin Marietta’s NMMSS is housed on a mainframe using unstructured COBOL code. Performing modifications on the NMMSS and designing custom reports is difficult because of the volume and complexity of the code. As a result, DOE believed that the NMMSS’ operating costs could be reduced by modernizing the system’s hardware and software. In addition, NRC supported DOE’s decision to modernize the NMMSS’ hardware and software because it believed that the replacement NMMSS would be less costly than Martin Marietta’s existing system. Accordingly, DOE’s Office of Arms Control and Nonproliferation tasked the Lawrence Livermore National Laboratory with developing a new NMMSS data base that would replicate the functions of Martin Marietta’s NMMSS. Livermore hired a subcontractor to perform this task. Livermore’s subcontractor wrote new software, developed a PC-based data base, and will operate the new NMMSS at its facility. In planning for the development of the new NMMSS, DOE did not analyze the users’ requirements. Such an analysis documents the organization’s functional and informational needs, the current system and its effectiveness, and the organization’s future needs. Such information is important because the more knowledge that is generated about potential system users and their operational needs, the more likely it is that the resulting system will meet the users’ needs. In addition, identifying users’ needs at the beginning of a development effort can help to reduce the need for later systems modifications, which are typically more expensive, and to eliminate the need for separate development efforts. 
Since the NMMSS’ primary functions were developed during the late 1960s (for DOE facilities) and 1970s (for international reporting), it was particularly important that DOE, before the subcontractor’s development effort, determine whether the NMMSS was meeting users’ needs in the most effective manner, or whether changes in the design of the data base were needed to better serve its users. DOE could have assessed users’ needs by involving the NMMSS Steering Committee, which is composed of the major NMMSS users, in the new NMMSS planning process. Although the NMMSS Steering Committee is charged with reviewing and commenting on significant proposed changes to the NMMSS, it was not consulted about the conversion from Martin Marietta’s NMMSS to the subcontractor’s new NMMSS. Most of the Steering Committee members were unaware that DOE was even considering a new system until months after the decision to develop a new NMMSS was initiated. Some Steering Committee members told us they felt that they were deliberately kept in the dark about the new NMMSS. For example, one Steering Committee member said he believed written notification of the new NMMSS was not provided because DOE headquarters did not want to give users the opportunity to raise any objections to the program. Another member said the Committee members felt that they had been ignored and misled about the proposed changes in the NMMSS’ operations. Furthermore, several Committee members and other NMMSS users wrote to DOE’s Office of Nonproliferation and National Security to express dissatisfaction that no effort had been made to involve the Steering Committee in the departmental decision-making process. In explaining why users’ requirements were not assessed, DOE officials stated that since the new NMMSS data base will duplicate the existing NMMSS’ functions, a requirements analysis was unnecessary. They stated that users will be consulted on future enhancements to the data base. 
However, such an approach can result in a data base that perpetuates system weaknesses and leads to inefficiencies. For example, the current NMMSS’ financial module does not contain all of the inventory valuation data needed by DOE’s Office of the Chief Financial Officer (CFO). Since the new NMMSS is replicating the current NMMSS’ functions, it too will not contain these data. In addition, because the Office of the CFO was not aware that changes to the NMMSS were being considered, in August-September 1993 the Office of the CFO sponsored, and a programmer began developing, a new system to satisfy these needs. An official within the Office of the CFO told us that if the Office had known about the new NMMSS development effort, they would have considered working with the new NMMSS development team to enhance the NMMSS’ financial module, rather than developing a separate new system. The purpose of an alternatives analysis is to compare and evaluate the costs and benefits of various alternatives for meeting users’ requirements and to determine which alternative is most advantageous to the government. However, DOE did not perform such an analysis for the new NMMSS development effort. Instead, DOE’s analysis was limited to a cost comparison of two alternatives: (1) to have Martin Marietta modernize the NMMSS or (2) to have the Livermore subcontractor provide a new NMMSS data base. Furthermore, this analysis did not assess the benefits of the two alternatives and was not used to determine which alternative was most advantageous to the government because it was prepared after DOE had already chosen to implement the second alternative. In addition, because the new NMMSS will simply replicate the current NMMSS’ functions, it will be subject to the same nuclear materials tracking limitations that existed previously. 
Thus, the data contained in the new NMMSS on the status and location of U.S.-supplied nuclear materials internationally will continue to be limited by the data reported under the agreements for cooperation. In addition, the comparison of costs for the two alternatives cited in the analysis was not supported by adequate documentation and did not appropriately consider all relevant costs to ensure that DOE chose the most cost-effective alternative. Moreover, DOE had already decided to authorize the subcontractor to begin building the new NMMSS before this analysis was prepared. DOE’s cost analysis compared the estimated development cost and fiscal years 1994, 1995, and 1996 operating costs of the subcontractor’s new NMMSS data base with Martin Marietta’s upgrade proposal for the NMMSS. However, the documentation provided to support this analysis was inadequate. Specifically, the only documentation offered in support of the new NMMSS was a one-page document provided by Livermore’s subcontractor, which DOE did not independently verify. The cost analysis was also inadequate because it (1) did not include costs to develop the new NMMSS incurred by Livermore’s subcontractor before the analysis; (2) included fiscal year 1997 costs in Martin Marietta’s alternative but not in the subcontractor’s alternative; (3) did not reduce Martin Marietta’s estimated costs by the amount of indirect costs that will continue to be incurred by Martin Marietta (and paid by DOE) even if Martin Marietta no longer operates the NMMSS; and (4) included the NMMSS’ operating costs during development in the estimate for Martin Marietta’s alternative but did not include these costs in the subcontractor’s estimate. DOE’s cost comparison also did not take into account the considerable costs to transition from Martin Marietta’s NMMSS to the new NMMSS data base housed at the Livermore subcontractor’s location. 
Moreover, the analysis did not consider any costs that Livermore will incur managing and overseeing the subcontractor’s development of the new NMMSS. We analyzed the cost documentation that DOE provided, taking the above factors into consideration. Although we could not determine with certainty whether DOE chose the more cost-effective alternative, since some cost data were not available, our analysis did determine that any potential savings are, at best, questionable and that upgrading Martin Marietta’s NMMSS may have been a more cost-effective option. Because of the flaws in DOE’s initial cost analysis, we asked DOE to provide us with a total life cycle cost for the new NMMSS. As of November 21, 1994, DOE could not provide us with this information. Many of the new NMMSS’ planning deficiencies could possibly have been avoided if DOE’s Office of Information Resource Management Policy, Plans, and Oversight had been involved in the development effort. DOE’s Computer Software Management order (DOE 1330.1D) requires that this Office approve or disapprove all administrative or manufacturing-oriented software acquisition or development efforts that will have an external impact. An official in the Office of Information Resource Management Policy, Plans, and Oversight told us that both the current NMMSS and the new NMMSS fall under the software categories covered by this order. Another official in this Office stated that adequate requirements and alternatives analyses (including the costs and benefits of alternatives) are required before approval is granted. However, the Office of Arms Control and Nonproliferation neither sought nor received such approval for the new NMMSS development effort. DOE’s Program Manager told us that he believed the DOE order did not apply because the new NMMSS was duplicating an already existing system. However, the order does not exclude software development efforts that duplicate existing systems. 
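The flaws GAO identified in DOE's cost comparison are essentially bookkeeping errors: fiscal-year ranges that did not match between the two alternatives, and whole cost categories (transition, oversight, prior development) omitted from one side. The sketch below shows how aligning the years and including the omitted categories can reverse a comparison's outcome. All dollar figures are invented for illustration; the report states that DOE's actual estimates were not adequately documented:

```python
def total_cost(alternative, years):
    """Sum every cost category for a common set of fiscal years,
    so both alternatives are compared on the same basis."""
    return (sum(alternative["operating"].get(y, 0.0) for y in years)
            + alternative.get("development", 0.0)
            + alternative.get("transition", 0.0)
            + alternative.get("oversight", 0.0))

# Hypothetical figures in $ thousands.
martin_marietta = {
    "development": 500.0,
    "operating": {1994: 900.0, 1995: 900.0, 1996: 900.0, 1997: 900.0},
}
subcontractor = {
    "development": 800.0,
    "operating": {1994: 700.0, 1995: 700.0, 1996: 700.0, 1997: 700.0},
    "transition": 600.0,   # omitted from DOE's analysis
    "oversight": 300.0,    # Livermore's management costs, also omitted
}

years = (1994, 1995, 1996, 1997)        # same range for both alternatives
mm = total_cost(martin_marietta, years)  # 500 + 4*900 = 4,100
sub = total_cost(subcontractor, years)   # 800 + 4*700 + 600 + 300 = 4,500
```

With these numbers, the subcontractor looks cheaper on development and operating costs alone (3,600 versus 4,100), but including the omitted transition and oversight costs makes it the more expensive alternative, which is the sense in which GAO found any potential savings questionable.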
According to DOE, the NMMSS was not intended or designed to track foreign countries’ nuclear materials that were never imported to the United States. Accordingly, since the new NMMSS is replicating the functions of Martin Marietta’s NMMSS, the new system will also have this limitation. Recognizing that the NMMSS does not contain such data, and given the NMMSS’ other data limitations, the United States relies on other sources to obtain information on nuclear materials of both U.S. and foreign origin that are located in foreign countries. For example, the United States has relied on DOE and other agencies to help determine the quantity, location, origin, and characteristics of commercial plutonium in noncommunist countries. DOE also uses data provided by intelligence sources and technology to support nuclear materials nonproliferation programs. We did not assess the reliability of these information sources. However, according to the recent Rand study performed for the Under Secretary of the Department of Defense, no intelligence community can know of all of the major nuclear facilities and activities in certain countries. For example, according to an official from the Arms Control and Disarmament Agency, U.S. intelligence sources lacked reliable information on North Korea and Iraq. The Director of DOE’s International Safeguards Division told us that the need for an international nuclear materials tracking system is clear and that if the U.S. system for tracking materials had been more effective, the United States might have known more about Iraq’s nuclear program before Desert Storm. DOE has initiated efforts to improve the United States’ ability to track nuclear materials internationally. We are reporting to you classified information on these efforts and their limitations separately. 
To ensure the physical protection of exported U.S.-supplied civilian-use nuclear materials, the United States relies on the protection systems in recipient countries, these countries’ compliance with IAEA’s guidelines, and U.S. evaluations of the adequacy of their physical protection systems (e.g., security devices and guards). Once the United States exports nuclear materials, it is the responsibility of the recipient country to adequately protect them. While no international organization is responsible for establishing or enforcing physical protection standards, IAEA has developed guidelines that are broadly supported by its member states. These guidelines include protection measures such as the use of physical barriers along the perimeters of protected areas. The United States uses these guidelines to help evaluate whether foreign countries’ physical protection systems are adequate. As a result of these evaluations, the United States may make nonbinding physical protection recommendations. The international community, including the United States, has supported states’ sovereign rights and responsibilities to establish and operate physical protection systems for nuclear materials and facilities. It is also in the best interest of the sovereign states to ensure the physical protection of these materials to reduce the threat of theft or diversion. Concerns have been expressed about the physical protection of U.S.-supplied nuclear materials at the High Flux Petten Reactor in the Netherlands. Reportedly, Dutch Marines staged a mock attack on the facility and gained access to its HEU. During this review, we visited the High Flux Petten Reactor and met with Dutch officials, who confirmed that this incident, which was intended to test the facility’s physical security system, did occur. These officials also noted that physical security at the reactor has improved since the incident took place. 
Although the ultimate responsibility for the protection of nuclear materials resides with the sovereign state, according to IAEA the protection of these materials is a matter of international concern and cooperation. Nevertheless, no international organization is currently responsible for establishing physical protection standards or ensuring that nuclear materials are adequately protected from unauthorized removal and that facilities are protected from sabotage. However, beginning in 1972, IAEA convened international experts to establish and subsequently revise guidelines on the physical protection of civilian-use nuclear materials. These guidelines represent a broad consensus among IAEA’s member states on the requirements for physically protecting nuclear materials and facilities. IAEA also assists states that request guidance on physical protection by providing international physical protection experts as consultants. The United States supports these assistance efforts and provides experts when requested. The United States also evaluates foreign countries’ physical protection systems under the U.S. Bilateral Physical Protection Program. According to DOE, the primary objective of this program is to fulfill U.S. statutory obligations under the Atomic Energy Act of 1954, as amended by the Nuclear Non-Proliferation Act of 1978, and the provisions of specific U.S. agreements for cooperation. These obligations require that the United States ensure that U.S.-supplied nuclear materials are subject to a level of physical protection that meets or exceeds IAEA’s guidelines. In addition, other objectives of this program are to (1) address emerging nuclear proliferation threats and problems, (2) promote technical exchanges and cooperation for physical protection, and (3) strengthen international cooperation and the implementation of treaties and agreements.

According to DOE, the countries participating in the U.S. Bilateral Physical Protection Program do so principally because they have or expect to have an agreement for peaceful nuclear cooperation with the United States, or a trilateral supply arrangement with IAEA and the United States; U.S.-supplied nuclear materials; category I quantities of nuclear materials; and/or a pending U.S. nuclear export or supply arrangement. U.S. teams are led by a DOE representative and usually include officials from other agencies. The teams visit a variety of nuclear facilities, including research reactors, fuel cycle facilities, and nuclear power reactors. According to an NRC official, these visits have also been an important source of information when NRC assesses a country’s physical protection system as part of the process of reviewing export license applications. Since 1974, the United States has conducted bilateral consultations with approximately 46 nations, including site visits to review the physical protection of nuclear materials at fixed sites and during transport. (App. II identifies the countries that U.S. officials have visited.) More recently, program officials have started to explore possible technical cooperation and information exchanges with the newly formed states of the former Soviet Union and Eastern Europe. According to DOE, the U.S. site visit teams will make nonbinding recommendations for improvements to physical protection when such improvements are needed. In cases in which countries have been revisited, efforts are made to follow up on the previous team’s recommendations. However, according to a DOE official, DOE does not have a mechanism to follow up on previous recommendations between visits and has not always monitored the status of the sites visited. He said that a mechanism to follow up on recommendations between visits is important, since some countries may not be revisited for 4 to 5 years. 
DOE’s NMMSS has significant limitations in its ability to track nuclear materials internationally; these limitations will continue under DOE’s new NMMSS. In particular, the new NMMSS will not overcome previously existing nuclear materials tracking limitations that are often caused by non-system-related problems; for example, the system does not contain data that are not required to be reported under the U.S. agreements for cooperation. We believe DOE should have explored systems alternatives and queried its intended users to attempt to mitigate some of these limitations. In addition, because DOE has not followed good systems development practices, DOE cannot ensure that the system will be cost-effective or will even fulfill the needs of its major users. Before investing further resources in the new NMMSS, we recommend that the Secretary of Energy direct the Office of Arms Control and Nonproliferation to determine users’ requirements, investigate alternatives, conduct cost-benefit analyses, and develop a plan to meet any identified needs, either through enhancing the new NMMSS or designing a different system. We discussed the contents of this report with the Director of DOE’s Office of Export Controls and International Safeguards, officials in the State Department’s Office of Nuclear Energy Affairs, and the Director of NRC’s Division of Nonproliferation, Exports, and Multilateral Relations. However, as requested, we did not obtain written agency comments on a draft of this report. The DOE, State Department, and NRC officials that we spoke with generally agreed with the facts presented. DOE also provided the following comments, which we evaluated. DOE officials commented that the NMMSS’ size and complexity and its role in meeting U.S. treaty and statutory obligations led DOE to focus initially on duplicating NMMSS’ functions and not on upgrading the system; such an upgrade will be considered after the duplication effort has been successfully accomplished. 
We believe that the size and complexity of the NMMSS and its pivotal role in meeting U.S. treaty and statutory obligations should have compelled DOE to ensure that the system was planned and designed properly. As we point out in the report, DOE’s decision to duplicate the existing NMMSS’ functionality led to a system that may not meet users’ needs and that perpetuates the existing system’s weaknesses. Furthermore, program modifications to upgrade systems at a later time are typically more costly and more risky than initially programming the system to meet users’ needs. Our work was performed between October 1993 and November 1994, in accordance with generally accepted government auditing standards. Appendix III describes the scope and methodology of our review. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to appropriate congressional committees; the Secretaries of Energy and State; and the Chairman, Nuclear Regulatory Commission. We will make copies available to others upon request. Please call us at (202) 512-3841 and (202) 512-6222, respectively, if you or your staff have any questions. Major contributors to this report are listed in appendix IV. The United States regulates its exports of U.S.-supplied nuclear materials to countries with U.S. agreements for cooperation through the implementation of the U.S. nuclear materials export license process. The Nuclear Regulatory Commission (NRC) is responsible for issuing export licenses for nuclear materials. In accordance with the Nuclear Nonproliferation Act of 1978 and the Department of Energy’s (DOE) regulations, the executive branch agencies (DOE, the Departments of Commerce, Defense, and State and the Arms Control and Disarmament Agency), led by the Department of State, assist NRC in reviewing export license applications in certain cases. 
NRC generally grants export licenses if the following criteria are met:
- The International Atomic Energy Agency’s (IAEA) safeguards will be applied pursuant to the Treaty on the Nonproliferation of Nuclear Weapons and the Treaty of Tlatelolco.
- No material will be used for a nuclear explosive device or for research on or the development of a nuclear explosive.
- Adequate physical protection measures will be maintained for facilities and materials.
- No material will be retransferred without U.S. consent.
- The exported material will not seriously prejudice U.S. nonproliferation objectives or jeopardize the common defense and security.
- No material will be reprocessed or altered in form or content without previous approval from the United States.
- The material will be subject to the terms of the agreement for cooperation.

As figure I.1 outlines, to apply for a license to export special nuclear materials, an application must be submitted to NRC. NRC checks the application for completeness and accuracy and determines if an executive branch review (by DOE, the Departments of Commerce, Defense, and State and the Arms Control and Disarmament Agency) is required. Executive branch reviews are necessary if, among other things, the export (1) exceeds 1 effective kilogram of highly enriched uranium or 10 grams of plutonium or U-233 or (2) involves source materials (uranium, thorium, or any ores containing uranium or thorium) or special nuclear materials to be exported under the U.S.-IAEA Agreement for Cooperation. The executive branch review determines whether the export request meets U.S. export criteria, whether the proposed export would be inimical to the common defense and security of the United States, and, where applicable, whether the exported materials would be subject to the terms of an agreement for cooperation. NRC may request the executive branch to address specific concerns and to provide additional data and recommendations. 
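The quantity thresholds above amount to a simple screening check. The following sketch is purely illustrative (the function and parameter names are hypothetical and not drawn from NRC regulations), using the thresholds as stated in this report:

```python
# Illustrative sketch of the report's executive branch review thresholds.
# All names are hypothetical; "effective kilogram" nuances are ignored here.

HEU_THRESHOLD_G = 1000.0    # more than 1 effective kilogram of HEU
PU_U233_THRESHOLD_G = 10.0  # more than 10 grams of plutonium or U-233

def requires_executive_review(material: str, grams: float,
                              under_us_iaea_agreement: bool = False) -> bool:
    """Return True if a proposed export would need an executive branch review."""
    # Source or special nuclear materials exported under the U.S.-IAEA
    # Agreement for Cooperation require a review regardless of quantity.
    if under_us_iaea_agreement:
        return True
    if material == "HEU":
        return grams > HEU_THRESHOLD_G
    if material in ("plutonium", "U-233"):
        return grams > PU_U233_THRESHOLD_G
    return False
```

For instance, a proposed export of 1.5 kilograms of HEU would trigger a review under this sketch, while 5 grams of U-233 would not.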
If the executive branch and NRC determine that the request satisfies the above criteria, NRC will approve the export license. The export license establishes the amount of material that the applicant may export and the time frame in which that amount may be exported. The applicant may make multiple shipments of the material to reach the specified amount on the license. (Fig. I.1 depicts this process as a flowchart: the export application is submitted to NRC, which reviews it for completeness and accuracy and determines whether an executive branch review is necessary; if so, the executive branch agencies review the license application against the export criteria and, when a letter of assurance is required, NRC requests either DOE or the State Department to obtain it from the relevant country or countries. NRC's Office of International Programs refers significant cases to the Commission for review, as outlined in 10 C.F.R. 110.40.) In 1993, the United States received 89 export license applications for nuclear materials (source, special nuclear material, and by-product), of which 71 were granted in 1993 and 11 were subsequently granted by May 5, 1994. Of the remaining six applications, five were pending and one had been withdrawn by the applicant country as of May 5, 1994. According to an NRC official, for the five pending applications, the United States is awaiting letters of assurance, as required, from the applicants before making a decision. A letter of assurance is a statement from the government of the recipient country that the nuclear materials will be handled in accordance with the terms set forth in the relevant U.S. agreement for cooperation. Once nuclear materials are exported from the United States, they are subject to the controls contained in cooperative arrangements established in the terms of U.S. agreements for cooperation. The subsequent arrangements and retransfer process are regulatory mechanisms used to control the supply, use, or retransfer of exported U.S.-supplied nuclear materials and equipment. 
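The license mechanics described above (a fixed authorized amount, a time frame, and multiple shipments drawing against that amount) could be modeled as follows. This is a purely illustrative sketch; the class and its fields are hypothetical and not part of any NRC system:

```python
from datetime import date

class ExportLicense:
    """Toy model of an export license: a total authorized quantity, an
    expiration date, and multiple shipments drawn against the cap."""

    def __init__(self, authorized_grams: float, expires: date):
        self.authorized_grams = authorized_grams
        self.expires = expires
        self.shipped_grams = 0.0

    def ship(self, grams: float, on: date) -> None:
        """Record a shipment, rejecting any that falls outside the license terms."""
        if on > self.expires:
            raise ValueError("license time frame has elapsed")
        if self.shipped_grams + grams > self.authorized_grams:
            raise ValueError("shipment would exceed the licensed amount")
        self.shipped_grams += grams

    @property
    def remaining_grams(self) -> float:
        return self.authorized_grams - self.shipped_grams
```

For example, a license authorizing 1,000 grams could be filled by shipments of 400 and 600 grams, after which any further shipment would be rejected.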
Activities that can be subject to subsequent arrangements are the reprocessing of spent fuel or the retransfer of nuclear materials to a third country. Generally, these requirements enable the United States to determine that the arrangement or retransfer will not be inimical to the common defense and security of the United States. As figure I.2 outlines, DOE is generally the lead agency for processing subsequent arrangements and retransfer requests and coordinating the interagency review required for these requests. These interagency reviews provide the Departments of Commerce, Defense, and State and the Arms Control and Disarmament Agency and NRC the opportunity to review the request. For subsequent arrangements, the State Department must approve the arrangement in order for it to proceed, and the Arms Control and Disarmament Agency must determine whether or not the arrangement requires a nonproliferation assessment statement. After the interagency review, DOE will make a determination on the basis of its and the participating executive agencies’ views. If, during the interagency review, any agency believes the request raises issues requiring more extensive consideration or denial, the request may be submitted for further discussion and concurrence to the Subgroup on Nuclear Export Coordination. This interagency group examines dual-use export issues, retransfers, and related matters to determine that the proposed activity is consistent with U.S. foreign policy, national security, and nonproliferation objectives and that commercial and economic considerations can be established. (Fig. I.2 depicts this process as a flowchart: DOE reviews the request and forwards it to the executive branch agencies for their reviews; the agencies provide comments to DOE; DOE makes a preliminary decision on the basis of its review and the agencies’ comments, with the State Department's approval required first for subsequent arrangements; and, if the request is approved, the determination is published in the Federal Register and approval is given to the requesting country.) 
*For subsequent arrangements, the Arms Control and Disarmament Agency may determine that a nonproliferation assessment statement is needed as part of its review. To determine the tracking limitations of DOE’s Nuclear Materials Management and Safeguards System (NMMSS), we reviewed reports by NRC and DOE consultants and the U.S. agreements for cooperation. We also examined the NMMSS’ documentation and other documents pertaining to the system and interviewed DOE, NRC, and Martin Marietta officials. While we did not interview a statistical representation of NMMSS users, we did interview members of the NMMSS Steering Committee and other major users to obtain their views on the accuracy and completeness of the NMMSS’ data. To assess DOE’s new NMMSS, we interviewed DOE, Livermore, and Argonne National Laboratory program officials, NMMSS Steering Committee members, and other NMMSS users. We also reviewed the new NMMSS’ planning documentation, spoke with officials of Livermore’s subcontractor, and reviewed the subcontracts and the subcontractor’s technical and cost proposals. In addition, we interviewed officials from the State Department, the Arms Control and Disarmament Agency, the Central Intelligence Agency, DOE’s Pacific Northwest Laboratory, and the Department of Defense to determine whether other tracking systems exist. To determine the U.S. process for evaluating the physical protection of foreign facilities, we interviewed officials from DOE, NRC, and the State Department and reviewed program documentation, including the results of U.S. site visits. To understand the export license and subsequent arrangement process, we reviewed 10 C.F.R. Part 110 and interviewed DOE and NRC officials. 
We performed our review primarily at DOE’s headquarters at Washington, D.C., and Germantown, Maryland, locations; DOE’s Lawrence Livermore National Laboratory, Livermore, California; Oak Ridge Operations Office and Y-12 Plant in Oak Ridge, Tennessee; Pacific Northwest Laboratory in Richland, Washington; and NRC’s headquarters in Rockville, Maryland. We also visited the High Flux Petten Reactor in the Netherlands. John A. Carter, Senior Attorney
Pursuant to a congressional request, GAO reviewed how the United States tracks its exported civilian nuclear materials and ensures their physical protection, focusing on: (1) the capabilities of the Department of Energy's (DOE) computerized Nuclear Materials Management and Safeguards System (NMMSS) to track the international movement of nuclear materials; and (2) the adequacy of DOE's planned new NMMSS. GAO found that: (1) the United States relies primarily on NMMSS to track exported nuclear materials, but the system does not have enough information to track all nuclear materials that are supplied to foreign countries; (2) the reliability of NMMSS data depends on the data reported under international agreements, as well as foreign countries' willingness to report complete and accurate data; (3) the new NMMSS will replicate current NMMSS functions and contain the same tracking limitations that currently exist; (4) DOE has not adequately planned the development effort for the new NMMSS and cannot ensure that the new NMMSS will meet users' needs; (5) neither the current nor planned new NMMSS can provide data on nuclear materials of foreign origin; (6) DOE collects information on nuclear materials worldwide through other sources that may not always be accurate; (7) the U.S. government's ability to ensure that exported nuclear materials are adequately protected is contingent on foreign countries' cooperation; and (8) while the United States conducts on-site evaluations of foreign countries' physical protection systems, recommendations that may result from these visits are not binding on the country.
OAM resources are divided among 70 air and marine locations across three regions (southeast, southwest, and northern); the National Capital area; and National Air Security Operations Centers (NASOC) throughout the continental United States, Puerto Rico, and the U.S. Virgin Islands as shown in figure 1. OAM also has mission support facilities including those for maintenance, training, and radar-tracking to detect and direct interdiction of illegal aircraft and maritime vessels. OAM strategic assumptions in deploying its resources include the ability to provide a 24-hour, 7-day-a-week response to border penetrations anywhere along the U.S. border, with a 1-hour response time for areas designated as high priority. Considerations in OAM allocation decisions include historical location, congressional direction, and differences in geography and relative need for air and marine support to address threats. As of May 2011, OAM had placed about half of its air assets on the southwest border region and the remainder on the northern and southeast regions, while marine resources were distributed fairly evenly across the northern, southwest, and southeast regions. OAM has 23 branches and 6 NASOCs across these regions, and within the branches, OAM may have one or more air or marine units. OAM performs various missions in response to requests for air and marine support from other DHS components—primarily Border Patrol and ICE; as well as other federal, state, and local law enforcement agencies. In addition, OAM is a representative on the Joint Interagency Task Force-South, located in Key West, Florida, a unified command sponsored by the White House Office of National Drug Control Policy that facilitates transnational cooperative counter-narcotic and counterterrorism efforts throughout the South America source zone and the Caribbean, eastern Pacific, Central America, and Mexico transit zone. 
OAM’s NASOCs perform specialized missions nationwide and in the Caribbean, eastern Pacific, and Central America, using unmanned aircraft systems, long-range patrol aircraft, and other aircraft. Control of OAM resources to respond to these support requests differs by location. For the northern and southwest regions, OAM branches and units are under the tactical control of the local Border Patrol sector chief, who has authority to approve, deny, and prioritize requests for air and marine support. In contrast, OAM branch directors have the authority to control how air and marine resources are used in the southeast region—where there is less Border Patrol presence, as well as in the National Capital area and in NASOCs. The majority of OAM operations are in support of customer or self-initiated law enforcement missions. These missions include patrols to detect illegal activity; searches for illegal aliens; surveillance; and transport of Border Patrol, ICE, and other law enforcement officers and their equipment. OAM also performs non-enforcement missions including those to support maintenance, training, public relations, and to provide humanitarian aid. Over the last 3 years, the proportion of air and marine mission hours (flight hours or hours a vessel was on duty) for law enforcement related missions has increased, as shown in table 1. The percentage of OAM air and marine support requests met differed by location, customer, and mission type, with unmet air support requests primarily due to aircraft maintenance and unmet marine requests due to adverse weather in fiscal year 2010. In addition, OAM, Border Patrol, and ICE officials reported that OAM resources were constrained in some locations. Further, although OAM has taken actions to address challenges in providing air and marine support, its efforts to increase aircraft availability have not been fully realized. 
OAM met 73 percent of the 38,662 total air support requests that it received in fiscal year 2010, according to our analysis of AMOR data. OAM tracks its ability to meet air support requests by location, customer, and mission in its AMOR system. Our analysis of these data showed that the percentage of air support requests OAM met differed by region or branch location, and to a lesser extent, by customer and mission type. Specifically, the percentage of air support requests met ranged by 29 percentage points across regions (from 60 to 89 percent) and ranged by over 50 percentage points across branches (from 43 to 96 percent), while the percentage of requests met across customers ranged by about 14 percentage points (from 76 to 90 percent) and the percentage of requests met across mission types ranged by 24 percentage points (from 61 to 85 percent). OAM air support requests met differed by up to 29 percentage points across five different OAM regional areas of responsibility (i.e., regions). The highest percentage of support requests met was provided to OAM’s NASOCs and the lowest percentage of support requests met was provided to the U.S. southeast region, as shown in figure 2. The percentage of air support requests met across branches and NASOCs showed greater differences than across regions, particularly across branches in the southwest region, as shown in table 2. There were smaller differences in OAM’s ability to meet requests for air support across customers than across locations. The overall percentage of air support requests met across customers ranged from a low of 76 percent for Border Patrol and OAM to a high of 90 percent for all other federal agencies, as shown in figure 3. Border Patrol has control over OAM mission support priorities in the northern and southwest regions, and OAM has control over its priorities in the southeast region. 
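The percentage-point comparisons above amount to grouping requests by a category (region, branch, customer, or mission type) and computing the share met in each group. A minimal sketch of that computation, using made-up numbers rather than actual AMOR data:

```python
from collections import defaultdict

def percent_met_by_group(requests):
    """requests: iterable of (group, was_met) pairs.
    Returns {group: whole-number percent of requests met}."""
    met = defaultdict(int)
    total = defaultdict(int)
    for group, was_met in requests:
        total[group] += 1
        met[group] += int(was_met)
    return {g: round(100 * met[g] / total[g]) for g in total}

def spread_in_points(percents):
    """Spread between the best- and worst-supported groups, in percentage points."""
    return max(percents.values()) - min(percents.values())

# Hypothetical data: 10 requests in each of two areas.
data = ([("southeast", True)] * 6 + [("southeast", False)] * 4 +
        [("NASOC", True)] * 9 + [("NASOC", False)])
pcts = percent_met_by_group(data)   # {"southeast": 60, "NASOC": 90}
spread = spread_in_points(pcts)     # 30 percentage points
```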
To increase transparency of ICE support requests, OAM, Border Patrol, and ICE established a process requiring that ICE requests that are denied at the field level be elevated to management. Finally, our analysis of AMOR data showed that there were few concurrent support requests that resulted in denial of one agency’s request to support another agency. For example, of the 38,662 requests for air support in fiscal year 2010, 2 percent (915) could not be met due to a competing mission request from the same or another agency. OAM headquarters officials gave the following possible explanations as to why state and local, and all other federal agencies had higher support rates than Border Patrol or OAM. State and local support frequently involved OAM diverting a flight already in progress; in such cases, aircraft availability challenges were not an issue. As a result, OAM was able to provide the support to the state and local agency resulting in higher support rates. Federal agencies (as in the “all other federal agencies” category in figure 3) and state agencies (as in the “state and local agencies” category in figure 3) often require types of aircraft that have greater availability in general. Standing, daily requests—which were most common to Border Patrol—were more likely than ad hoc requests to be canceled as a result of adverse weather, maintenance, or aircraft and personnel restrictions. As a result, Border Patrol may have more unmet requests than other agencies. The difference in percentage of support requests met across mission categories ranged from 61 to 85 percent, with higher levels of support for miscellaneous enforcement activities such as reconnaissance, photography, or information. The percentage of air support was lower for mission activities classified as search, interdiction, or radar patrol, as shown in figure 4. 
OAM officials told us that there were too many variables, such as budget and resource constraints, weather, and conflicting mission priorities, to explain why there were differences in percentages of support requests met for different mission types. OAM was unable to meet 27 percent, or 10,530 of the 38,662 air support requests it received from customers in fiscal year 2010. The primary reason for unmet requests was the unavailability of aircraft in maintenance, but adverse weather and unavailable aircrew were also factors, as shown in figure 5. OAM survey respondents were generally satisfied with the type and number of air assets they had to perform various missions; however, some survey respondents and field officials we interviewed identified capability gaps, such as the lack of maritime patrol aircraft. In addition, survey respondents and field officials reported general dissatisfaction with the number of personnel to perform air operations. Finally, OAM has taken actions to increase aircraft availability—including creating an aircraft modernization plan and conducting an aged-aircraft investigation—but these efforts have not been fully realized. The majority of officials that responded to our survey questions from 18 OAM air locations across the southwest, northern, southeast, and National Capital regions, and NASOCs generally reported that they were either satisfied with, or neutral—neither satisfied nor dissatisfied—toward the type and number of OAM aircraft they had at their locations to perform various mission activities. For example, 16 of 18 respondents reported satisfaction with the type of aircraft available for surveillance; and 12 of 18 respondents reported satisfaction with the number of aircraft they have to perform information gathering. A majority of respondents also expressed satisfaction or neutrality toward the type and number of aircraft they have to perform 12 other mission activities. 
Some respondents, however, identified capability gaps and resource limitations for certain mission activities. For example, officials from 7 of the 14 air locations that perform air-to-water radar patrols reported that they were very dissatisfied with the type of aircraft available to conduct these missions. Respondents from 7 of the 17 air locations that perform interdictions expressed dissatisfaction with the number of aircraft available to conduct these missions. One respondent reported that his/her location had no maritime or air radar interdiction capabilities, despite having a border that was entirely water. See appendix IV for a summary of survey results by location for respondents’ satisfaction with the type and number of assets for various mission activities. The Northern Border Regional Director said, among other things, he would like to see an additional interceptor aircraft placed in one branch location, but that the runway is too short: the current runway is 4,000 feet and a Citation needs at least 7,000 feet. OAM headquarters officials said that the branch is routinely required to get additional support from neighboring branches. Officials also cited the demands of the maritime environment, saying that two branches needed more maritime patrol aircraft. The Southwest Regional Director said he did not have information regarding what the southwest region’s needs were in terms of air assets because the southwest region had not performed an assessment in 2 years. OAM, Border Patrol, and ICE officials at field locations we visited in the northern, southeast, and southwest regions expressed various levels of satisfaction with OAM’s air support and capabilities. For example, Border Patrol and ICE officials in one northern border location said they were generally satisfied with OAM’s air support. 
Similarly, the Acting Special Agent in Charge for the ICE office in the southeast region said he was generally satisfied with OAM’s air support; however, a Border Patrol Assistant Chief for a southeast region sector said OAM had not been responsive to their air support requests. In one southwest region location, branch officials said the air assets at their location were barely sufficient to meet support requests for its various missions, and ICE officials said they would like to see OAM procure better aircraft for their surveillance needs. In addition, Border Patrol officials in the same southwest location said that while the sector receives substantial OAM air support, OAM as an agency is not adequately resourced in budget, facilities, air frames, or technology to meet operational requirements. Similarly, Border Patrol, OAM, and ICE field officials in another southwest region location said OAM lacked the capability to perform effective maritime (air-to-water) patrols, and ICE officials in that southwest region location said that helicopters were often not available on short notice. A Border Patrol Assistant Chief for one southeast sector said that in some instances, Border Patrol agents may not have asked for air support in fiscal year 2010 because they thought they might not receive it. He said that agents are currently encouraged to ask for support whether or not they believe they will receive it. Lastly, officials from the Joint Interagency Task Force-South (JIATF-S) said they were pleased with the support they received from OAM, but they would like higher levels of support. According to OAM officials, OAM provided aircraft support to JIATF-S primarily for long-range patrols in the source zones of South America and the transit zones of the Caribbean, eastern Pacific, Central America, and Mexico. JIATF-S officials said that OAM had specialized aircraft that were instrumental to their operations. 
While OAM provided more than its committed 7,200 flight hours in fiscal year 2010 to support the anti-drug mission in this area, JIATF-S officials said they would like to receive higher levels of OAM support, particularly as support from Department of Defense and other partners had been decreasing. Our survey of 18 OAM air locations found that the majority of respondents (11 of 18) were either somewhat or very dissatisfied with the extent to which they had adequate air personnel to effectively meet mission needs. In addition, field officials we interviewed in the southwest and southeast regions reported shortages in air personnel. Although the Northern Border Regional Director told us most air branches along the northern border were staffed sufficiently to meet mission needs, the Southeast and Southwest Regional Directors cited shortfalls in the level of air personnel. The Southeast Regional Director said air staff were frequently assigned to temporary duty in support of UAS and surge operations in the higher priority southwest region; and the Southwest Regional Director said they did not have adequate personnel to be able to respond 24-hours a day at each of its locations. OAM officials at the field locations we visited reported shortages in air personnel. For example, the Director of Air Operations at a northern border branch said that the branch was originally slated to have 60 pilots, but instead had 20 pilots. In addition, officials from two branches in the southwest region told us they lacked personnel due to staff being away for such reasons as temporary duty assignments, military leave, sick leave, and training, among other reasons; they said these shortages were negatively affecting their ability to meet air support requests. 
Further, the Deputy Director of Air Operations for one southeast region branch told us that when they received the new DASH-8 maritime patrol aircraft, they did not receive the necessary increases in personnel to operate them, and as a result, the branch could not fully utilize the capabilities of these technologically advanced aircraft. According to the branch officials, personnel problems were further exacerbated by budget constraints. OAM reported that it had taken actions to increase aircraft availability, but the results of these efforts have not yet been fully realized. OAM created an aircraft modernization plan in 2006 to replace aging aircraft, and updated this plan in 2007 with a model of projected investments over the next 10 years. OAM officials told us that due to changes in mission needs and changes in the aviation market, as well as limited funding, they have had to modify the plan and continue to maintain older and less supportable aircraft, which require more maintenance. OAM officials reported that because they have not been able to replace aircraft as planned, they have not been able to standardize their fleet by reducing aircraft types—which would reduce costs associated with training materials and equipment, parts and spares inventories, and personnel qualifications. Due to the slow pace of aged aircraft replacement and the prospect of a constrained resource environment, OAM conducted an aged aircraft investigation in fiscal year 2010 to determine the operating life limitations of aircraft most at risk. Based on the results of this investigation, OAM plans to either retire aircraft or create sustainment regimens for certain aircraft to lengthen their service lives. Finally, OAM headquarters officials said they still plan to acquire new aircraft and reduce the number of older aircraft to eventually achieve the needed type reductions, consistent with available funding. 
In its 2006 aircraft modernization plan, OAM planned to reduce the number of aircraft types from 18 to 8, but as of September 2011, OAM had 20 aircraft types (including unmanned aircraft systems). OAM headquarters officials said they have deployed all-weather aircraft to locations where their capabilities will yield the highest operational dividends. They also said they would like to acquire additional all-weather aircraft, but current funding structures preclude the acquisition of more all-weather assets beyond what is currently approved. OAM officials said they are exploring additional technology and instrumentation solutions to increase their ability to conduct missions in adverse weather conditions, and that this is an ongoing process. OAM headquarters officials stated that they were also limited in their ability to increase the availability of aircrew due to staff reductions and budgetary constraints. OAM conducted a re-evaluation of its staffing in 2009, but it was never approved, as OAM had significant reductions to its work force in fiscal year 2010. Headquarters officials said the effort to redefine their work force is on hold since future funding projections prohibit program growth. OAM officials told us they have not increased staff over the past 2 fiscal years. OAM met 88 percent of the 9,913 total marine support requests that it received in fiscal year 2010, according to our analysis of AMOR data. Similar to our analysis of air support data, our analysis of marine data showed that the percentage of requests OAM supported differed by location; specifically, the percentage of marine support requests met ranged by 9 percentage points across regions (from 84 to 93 percent), and by as much as 28 percentage points across branches (from 71 to 99 percent). 
AMOR tracks OAM’s ability to meet marine support requests by location, customer, and mission; but data by customer were not reliable for our reporting purposes due to inconsistencies in OAM data entry practices. The percentage of marine support requests met ranged from 84 to 93 percent across three OAM regional areas of responsibility. The percentage of support requests met was fairly similar for the northern and southwest regions, exceeding 90 percent; however, support was lower (84 percent) for the southeast region, as shown in figure 6. OAM officials said possible reasons for the differences in support rates could include the fact that OAM has placed higher priority on the northern and southwest regions, and that since 2008 OAM has added assets to these regions in response to congressional direction. Within each region, the percentage of marine support requests met across branches showed disparities, particularly across branches in the southwest region. Marine support requests met ranged by 15 percentage points across branches in the southeast region (from 80 to 95 percent), by about 10 percentage points across branches in the northern region (from 89 to 99 percent), and by about 28 percentage points across branches in the southwest region (from 71 to 99 percent). Our analysis of AMOR data indicated that 94 percent of all support requests in fiscal year 2010 were for radar patrol missions, while the remaining 6 percent of requests involved interdiction, surveillance, and other miscellaneous enforcement missions. The percentage of support requests met for the remaining 6 percent of requests varied but was 86 percent overall, while the support rate for radar patrol missions was 88 percent. We were unable to report on the percentage of marine support by customer due to reliability concerns associated with data in AMOR. 
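The location-level analysis described above (grouping requests by region or branch, computing the percentage met, and comparing the highest and lowest rates) can be sketched in a few lines. This is a minimal illustration with hypothetical records, not OAM code or actual AMOR data; the region and branch names are placeholders.

```python
# Minimal sketch of a support-rate analysis over request records.
# Records and names are hypothetical, not actual AMOR data.
from collections import defaultdict

requests = [
    # (region, branch, request_met)
    ("northern",  "Branch A", True),
    ("northern",  "Branch A", True),
    ("southwest", "Branch B", True),
    ("southwest", "Branch B", False),
    ("southeast", "Branch C", True),
    ("southeast", "Branch C", False),
]

def support_rates(records, key_index):
    """Percentage of requests met, grouped by the field at key_index."""
    met, total = defaultdict(int), defaultdict(int)
    for rec in records:
        total[rec[key_index]] += 1
        if rec[2]:  # the request was supported
            met[rec[key_index]] += 1
    return {k: 100.0 * met[k] / total[k] for k in total}

by_region = support_rates(requests, 0)
# Percentage-point spread between the best- and worst-supported regions
spread = max(by_region.values()) - min(by_region.values())
```

The same function grouped on the branch field (`key_index=1`) would yield the branch-level disparities the report describes.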
Specifically, when inputting data into the AMOR system for unmet marine requests, OAM staff left the data field blank that identified the customer making the request in over 90 percent of the cases in fiscal year 2010. OAM reported that they are replacing the AMOR system with a web-based system, which officials said will not allow users to leave important fields blank. Officials also said they are strengthening other internal controls—such as training and supervisory review of data entry—to ensure complete and accurate reporting. Such actions, if implemented effectively, should help improve the reliability of marine customer data—as well as other air and marine operations data—maintained in OAM’s system. OAM was unable to meet 12 percent, or 1,176, of the 9,913 marine support requests they received in fiscal year 2010. OAM officials said one reason that the percentage of support requests met was higher for marine support than for air support is because the requirements for launching aircraft are more stringent than for launching marine vessels, due to the relative risk of failure. The primary reason for unmet marine requests was adverse weather (6 percent of total requests), with an additional 4 percent due to other mission priorities and crew unavailability, as shown in figure 7. According to our survey of 27 OAM marine units, respondents reported they were generally satisfied with the type and number of vessels at their location. However, OAM Regional Directors and field location officials cited limitations, such as the lack of platform class vessels to perform undercover operations and funding for fuel. In addition, survey respondents and field officials cited shortages in personnel. Lastly, OAM has taken actions to increase its ability to meet marine requests, including purchasing “all-weather” vessels and cold-weather marine gear. 
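The data-entry control described above (a system that refuses to save a record with a blank required field) can be sketched as follows. This is a hypothetical illustration of the technique, not OAM’s actual replacement system; the field names are assumptions.

```python
# Illustrative required-field check for an incoming request record.
# Field names are hypothetical placeholders.
REQUIRED_FIELDS = ("customer", "mission_type", "request_date")

def missing_fields(record):
    """Return the required fields that are absent or blank."""
    return [f for f in REQUIRED_FIELDS
            if not str(record.get(f, "")).strip()]

# A record with a blank customer field is flagged for correction
# instead of being saved, preventing the gaps seen in the fiscal
# year 2010 data.
errors = missing_fields({"customer": "",
                         "mission_type": "radar patrol",
                         "request_date": "2010-06-01"})
```

In a web-based form, the submit handler would loop until `missing_fields` returns an empty list, so no record reaches the database with an unidentified customer.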
Our survey of 27 OAM marine locations across the northern, southwest, and southeast regions found that respondents were generally satisfied with the type and number of OAM marine vessels they had at their locations to perform various mission activities. For example, more than 21 of 27 respondents reported that they were satisfied with both the type and number of vessels they had to perform radar patrol and interdiction missions. For 7 of the remaining 10 activities we asked about, the majority of respondents expressed satisfaction with the type and number of vessels they had to perform the activity. The activity where respondents expressed the greatest dissatisfaction with the type and number of vessels they had was undercover support—with 12 of the 24 marine units that perform undercover support expressing dissatisfaction with the type of vessels, and 10 of the 24 units expressing dissatisfaction with the number of vessels. See appendix IV for a summary of survey results by location for satisfaction with the type and number of assets provided by mission activity. OAM Regional Directors expressed differing levels of satisfaction with the type and number of marine vessels in their regions. The OAM Northern Regional Director said the northern region had the appropriate number and type of vessels to meet mission needs. Although the Southeast Regional Director said the southeast region had the appropriate number of interceptor vessels to meet mission needs, he also said the southeast region needed two other types of vessels to increase mission capability. The Southwest Regional Director said that given the region’s distribution of personnel, it had the appropriate number of assets; however, he said the region did not have the appropriate number of qualified marine personnel to meet mission needs. Field officials at locations we visited in the northern, southeast, and southwest regions expressed varied levels of satisfaction with OAM’s marine support and capabilities. 
For example, while Border Patrol and ICE officials in a northern border location said they were satisfied with the marine support they received from OAM, the Director of Marine Operations for an OAM branch in the northern region said that it was not feasible to provide a sufficient number of vessels and crew to ensure full coverage of the maritime border, and that the greatest need was for marine radar to cue marine assets to perform interdictions. An OAM branch official from the southeast region said that while the number and type of vessels met their needs, for a period of time, they could use their vessels only about half of each month due to budget constraints limiting fuel. Finally, officials at an OAM branch in the southwest region told us one of their chief resource needs was platform vessels to perform undercover operations. Our survey of 27 OAM marine units found that the majority of respondents (18 of 27) reported they were either somewhat or very dissatisfied with the extent to which they had adequate personnel to effectively meet mission needs. The OAM Regional Director for the Northern Region said that marine personnel levels across his region were adequate; however, Regional Directors for the Southeast and Southwest Regions cited shortages in marine personnel. Specifically, the Southeast Regional Director said that one southeast branch did not have an adequate number of marine personnel to address increasing threat, and the Southwest Regional Director said one location in the southwest region did not have an appropriate number of personnel to meet mission needs. OAM officials at field locations reported shortages of personnel. For example, an official at one OAM marine unit in the northern region said that the lack of marine personnel sometimes affects operational readiness and that allowing for training and leave is a consistent concern. 
Similarly, OAM officials from a southwest branch said that sufficient numbers of personnel were not always available due to training, sick days, annual leave, and reservists being called to active duty; and an ICE official in a southwest border location agreed that OAM needed additional marine interdiction agents. Lastly, an OAM survey respondent from a marine unit in the southeast region said that although marine staffing was increased in the past few years for new locations, the pre-existing locations were short on manpower and a realignment of personnel was needed. OAM headquarters officials reported that they have taken actions to address capability gaps due to adverse weather. For example, OAM officials told us that they purchased “all-weather” vessels with enclosed cabins, and that along with additional vessels acquired from USCG, they will have sufficient assets to meet mission needs. Officials said that while enclosed cabins do not enable OAM to launch in rough sea states, they do enable marine agents to operate in cold weather. They said that while larger vessels could reduce the impact of adverse weather on marine operations, these vessels would not be capable of achieving sufficient speeds to conduct interdictions or, if they were capable of maintaining sufficient speeds, would be cost prohibitive. In addition, OAM officials said they purchased marine dry suits and cold-weather gear to further address their ability to operate in adverse weather. In regards to personnel, OAM officials told us that with the rapid growth in the marine program during fiscal years 2008 and 2009, OAM will be able to meet its immediate needs for marine agents, but some of those hired were still in the process of being trained and certified. 
OAM headquarters officials said unmet requests due to other mission priorities are often the result of exigent and unanticipated requests for marine support that are outside of the normal mission-tasking process, and that they continually evaluate the need to re-assign marine assets to meet evolving mission needs. OAM has not documented its analyses to support its resource mix and placement decisions across locations, and challenges in providing higher rates of support to high priority sectors indicate that a reassessment of its asset mix and placement may provide benefits. OAM action to document analyses behind its deployment decisions and reassess where its assets are deployed using performance results could better ensure transparency and help provide reasonable assurance that OAM is most effectively allocating its scarce resources to respond to mission needs and threats. OAM could also improve public accountability by disclosing data limitations that hinder the accuracy of OAM’s reported performance results for fiscal year 2011. OAM has not documented significant events, such as its analyses to support its asset mix and placement across locations, and as a result, lacks a record to help demonstrate that its decisions to allocate resources are the most effective ones in fulfilling customer needs and addressing threats. To help ensure accountability over an agency’s resource decisions, Standards for Internal Control in the Federal Government call for agencies to ensure that all significant events be clearly documented and readily available for examination. 
OAM issued a National Strategic Plan in 2007 that included a 10-year plan for national asset acquisitions, and a strategic plan briefing the same year that outlined strategic end-states for air assets and personnel across OAM branches. While these documents included strategic goals, mission responsibilities, and threat information, we could not identify the underlying analyses used to link these factors to the mix and placement of resources across locations. The 2010 update to the strategic plan stated that OAM utilized its forces in areas where they would pay the “highest operational dividends,” but OAM did not have documentation of how operational dividends were determined or analyzed to support deployment decisions. Furthermore, while OAM’s Fiscal Year 2010 Aircraft Deployment Plan stated that OAM deployed aircraft and maritime vessels to ensure its forces were positioned to best meet the needs of CBP field commanders and respond to the latest intelligence on emerging threats, OAM did not have documentation that clearly linked the deployment decisions in the plan to mission needs or threats. Similarly, OAM did not document analyses supporting the current mix and placement of marine assets across locations. In addition, DHS’s 2005 aviation management directive requires operating entities to use their aircraft in the most cost-effective way to meet requirements. Although OAM officials stated that they factored cost-effectiveness considerations, such as efforts to move similar types of aircraft to the same locations to help reduce maintenance and training costs, into their deployment decisions, OAM does not have documentation of analyses it performed to make these decisions. OAM headquarters officials stated that they made deployment decisions during formal discussions and ongoing meetings in close collaboration with Border Patrol, and considered a range of factors such as operational capability, mission priorities, and threats. 
OAM officials said that while they generally documented final decisions affecting the mix and placement of resources, they did not have the resources to document assessments and analyses to support these decisions. However, such documentation of significant events could help OAM improve the transparency of its resource allocation decisions to help demonstrate the effectiveness of these resource decisions in fulfilling its mission needs and addressing threats. OAM did not meet its national air support goal and did not provide higher rates of support to locations Border Patrol identified as high priority, which indicates that a reassessment of OAM’s resource mix and placement could help ensure that it meets mission needs, addresses threats, and mitigates risk. According to DHS’s Annual Performance Report for fiscal years 2008 through 2010, the primary and most important measure for OAM is its capability to launch an aircraft when a request is made for aerial support. In addition, DHS’s May 2010 policy for integrated risk management stated that components should use risk information and analysis to inform decision making, and a key component of risk management is measuring and reassessing effectiveness. OAM assessed its effectiveness through a performance goal to meet greater than 95 percent of Border Patrol requests for air support in fiscal year 2010, excluding unmet requests due to adverse weather or other factors OAM considered outside of its control. Our analysis showed that OAM met 82 percent of the 22,877 Border Patrol air support requests in fiscal year 2010. While OAM officials stated that this goal does not apply to specific locations, we used their stated performance measure methodology to determine support rates across Border Patrol sectors and found that they ranged from 54 to 100 percent in fiscal year 2010, and that OAM did not provide higher rates of support to locations Border Patrol identified as high priority (see table 3). 
This occurred at both the regional and sector levels. For example, while the southwest border was Border Patrol’s highest priority for resources in fiscal year 2010, it did not receive a higher rate of OAM air support (80 percent) than the northern border (85 percent). At the sector level, while Border Patrol officials stated that one sector was a high priority based on the relative threat of cross-border smuggling, our analysis showed that the sector had the fifth highest support rate across all nine sectors on the southwest border. Findings were similar on the northern border, where the Border Patrol’s and OAM’s 2007 Northern Border Resource Deployment Implementation Plan prioritized four sectors based on potential terrorist threats. Our analysis found that two high-priority northern border sectors had lower support rates than three other sectors in the region that were not designated as high priority. OAM headquarters officials said that they did not use support rate performance results to assess whether the mix and placement of resources is appropriate. OAM officials stated that they managed operations by allocating assets, personnel, and flight hours across locations, but these factors do not assess the outcomes of their operations, specifically the extent to which OAM provided air and marine support when requested to meet mission needs and address threats. OAM officials stated that they will begin to replace the AMOR system in March 2012. OAM headquarters officials expect that the new information system will be more reliable, user-friendly, and have more robust reporting capabilities; however, officials stated that they did not have plans to change how they will use these capabilities to inform resource mix and placement decisions. 
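The performance-measure methodology discussed earlier (a support rate that excludes unmet requests attributed to factors such as adverse weather) can be contrasted with a raw rate in a short sketch. The counts below are illustrative only, not OAM’s actual figures.

```python
# Two ways to compute a support rate from the same request counts.
# All numbers here are hypothetical, chosen only to show the effect.

def raw_rate(met, total):
    """Requests met as a percentage of all requests received."""
    return 100.0 * met / total

def adjusted_rate(met, total, excluded_unmet):
    """Rate after dropping unmet requests deemed outside the agency's
    control (e.g., weather) from the denominator; the smaller
    denominator raises the reported rate."""
    return 100.0 * met / (total - excluded_unmet)

met, total, weather_unmet = 8_200, 10_000, 1_500
print(round(raw_rate(met, total), 1))                      # 82.0
print(round(adjusted_rate(met, total, weather_unmet), 1))  # 96.5
```

The sketch shows why the choice of denominator matters for transparency: the same underlying data can sit well below or near a 95 percent goal depending on which unmet requests are excluded, which is why disclosing the calculation's limitations is important.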
OAM officials stated that while they deployed a majority of resources to high-priority sectors, budgetary constraints, other national priorities, and the need to maintain presence across border locations limited overall increases in resources or the amount of resources they could redeploy from lower-priority sectors. For example, in fiscal year 2010, 50 percent of OAM’s assets and 59 percent of OAM’s flight hours were on the southwest border, Border Patrol’s highest-priority region. While we recognize OAM’s resource constraints, the agency does not have documentation of analyses assessing the impact of these constraints and whether actions could be taken to improve the mix and placement of resources within them. Thus, it is unclear to what extent the current deployment of OAM assets and personnel, including those assigned to the southwest border as cited above, most effectively utilizes its constrained resources to meet mission needs and address threats. Looking toward the future, Border Patrol, CBP, and DHS have strategic and technological initiatives under way that will likely affect customer requirements for air and marine support and the mix and placement of resources across locations. Border Patrol officials stated that they are transitioning to a new risk-based approach and Border Patrol National Strategy in fiscal year 2012 that would likely affect the type and level of OAM support across locations. Border Patrol officials said that the new strategy would likely rely more heavily on intelligence, surveillance, and reconnaissance capabilities to detect illegal activity and increased rapid mobility capabilities to respond to changing threats along the border. OAM headquarters officials said that they received a high-level briefing on the anticipated changes in June 2011, but have not yet received the information necessary to incorporate these changes into OAM’s current mix and placement of air and marine resources. 
CBP and DHS also have interagency efforts under way to increase air and marine domain awareness across U.S. borders through deployment of technology that may decrease Border Patrol’s use of OAM assets for air and marine domain awareness. Border Patrol officials in one sector, for example, stated that they prefer deployment of technology to detect illegal air and marine activity; OAM officials there said that air patrols are used due to the lack of ground-based radar technology. OAM officials stated that they will consider how technology capabilities affect the mix and placement of air and marine resources once such technology has been deployed. OAM’s fiscal year 2010 aircraft deployment plan stated that OAM deployed aircraft and maritime vessels to ensure its forces were positioned to best meet the needs of CBP field commanders and respond to emerging threats; however, our analysis indicates that OAM did not provide higher rates of air support in response to customer need in locations designated as high priority based on threats. In addition, as discussed, OAM did not use performance results to assess the mix and placement of resources. Standards for Internal Control in the Federal Government stresses the need for agencies to provide reasonable assurance of the effectiveness and efficiency of operations, including the use of the entity’s resources. As such, to the extent that the benefits outweigh the costs, reassessing the mix and placement of its assets and personnel, and using performance results to inform these decisions, could help provide OAM with reasonable assurance that it is most effectively allocating its scarce resources and aligning them to fulfill its mission needs and related threats. 
OAM officials continue to use performance data from its AMOR system to meet requirements of the Government Performance and Results Act (GPRA), but have not disclosed limitations affecting the accuracy of these data reported to Congress and the public in CBP’s Performance and Accountability Report. OAM inaccurately reported its performance results from fiscal years 2007 to 2010. OAM headquarters officials stated that they were not aware that they had calculated their performance results inaccurately, due to limitations with AMOR reporting functions, before we brought it to their attention in July 2010. In fiscal year 2010, for example, OAM reported that it exceeded its performance goal and met Border Patrol support requests greater than 98 percent of the time, but the actual rate of support based on our subsequent analysis was 82 percent. After we informed them of the error, OAM officials stated they plan to use the same methodology for calculating GPRA performance results in fiscal year 2011 because they plan to continue to generate the results from the AMOR system. Thus, OAM’s performance results will continue to be calculated and reported inaccurately. The GPRA Modernization Act of 2010 requires that agencies identify (1) the level of accuracy required for the intended use of the data that measures progress toward performance goals and (2) any limitations to the data at the required level of accuracy. Disclosure of the data limitations relating to the accuracy of OAM’s reported performance results for fiscal year 2011 could help improve transparency for achieving program results and provide more objective information on the relative effectiveness of the program, as intended by GPRA. 
This is also important because, if a performance goal is not met, GPRA, as amended, requires agencies to explain why the goal was not met and present plans and schedules for achieving the goal. OAM headquarters officials initially stated that its new information system will allow OAM to calculate and analyze performance results starting in fiscal year 2012; however, this may not be possible due to the technical problems that have delayed its implementation to March 2012. OAM and USCG officials we surveyed across proximately located air and marine units reported varying levels of coordination across missions, activities, or resources, and that to different extents, the coordination that occurred between the agencies was effective and resulted in reduced duplication and cost savings. However, OAM and USCG officials identified one or more areas where improved coordination was needed, and several officials identified opportunities to colocate facilities that, if implemented, could achieve cost savings. DHS oversight to maximize interagency coordination across locations could better ensure the most efficient use of resources for mission accomplishment. Our survey showed that the extent of coordination between OAM and USCG air and marine units varied by mission activity. We surveyed officials from 86 OAM and USCG air and marine units that were proximately located about the frequency of interagency coordination across five mission-related and four mission support activities. CBP has cited a multilayered approach to border security, which relies on close coordination with partner agencies to reduce reliance on any single point or program that could be compromised and extends the zone of security. Across mission-related activities, 54 percent of responding units reported sharing intelligence on a frequent basis and 43 percent reported sharing schedules on a frequent basis. 
For example, personnel from USCG, the Department of Defense, and the Federal Aviation Administration are assigned to OAM’s Air and Marine Operations Center to facilitate interagency coordination. Units reported less frequent coordination across other mission activities, such as prioritizing missions (22 percent) and dividing up mission assignments (20 percent), as shown in figure 8. OAM and USCG headquarters officials told us that a number of factors may affect the opportunities and frequency of interagency coordination, including the extent to which there is overlap between agency missions and geographic areas of responsibility. For detailed survey results, see appendix II. The limited resources that OAM has to provide support to OBP, ICE, and other customers highlight the importance of effectively assessing the extent to which the mix and placement of OAM resources best meets competing needs and addresses threats across locations, and of documenting analyses to support those decisions. While OAM has developed strategic and deployment plans, it did not document analyses that clearly linked such factors as threats and mission needs to its resource deployment decisions. Further, while OAM has taken actions that could increase its ability to meet support requests, our analysis indicates potential issues with the mix and placement of resources, such as challenges in meeting its support goal and lower support rates in locations identified as high priority based on threats. As such, documenting analyses to support decisions regarding the mix and placement of OAM assets and personnel could help improve the transparency of OAM’s resource decisions. Moreover, to the extent that the benefits outweigh the costs, taking action to ensure reassessment of the mix and placement of its assets could help provide OAM with reasonable assurance that it is most effectively allocating its scarce resources and aligning them to fulfill its mission needs and related threats. 
Furthermore, while OAM has established a performance measure to assess support provided to its customers, OAM did not disclose data limitations relating to the accuracy of its reported performance results for support provided. Such disclosure could help improve transparency for achieving program results and provide more objective information on the relative effectiveness of the program. With regard to coordination, survey respondents reported that coordination that occurred between OAM and USCG, such as intelligence sharing, was effective and resulted in reduced duplication and cost savings. However, our survey and interviews also highlighted activities where additional coordination could help leverage existing resources, eliminate unnecessary duplication, and enhance operational efficiencies, including an assessment of whether proximate OAM and USCG units should be colocated. Thus, DHS could benefit from assessing actions it could take to improve coordination across a range of air and marine activities, including reconstituting the DHS Aviation Management Council and Marine Vessel Management Council. To help ensure that OAM assets and personnel are best positioned to effectively meet mission needs and address threats, and improve transparency in allocating scarce resources, we recommend that the Commissioner of U.S. Customs and Border Protection take the following three actions: document analyses, including mission requirements and threats, that support decisions on the mix and placement of OAM’s air and marine resources; to the extent that benefits outweigh the costs, reassess the mix and placement of OAM’s air and marine resources to include mission requirements, performance results, and anticipated CBP strategic and technological changes; and disclose data limitations relating to the accuracy of OAM’s reported performance results for support provided. 
To help DHS to better leverage existing resources, eliminate unnecessary duplication and enhance efficiencies, we further recommend that the DHS Deputy Secretary assess the feasibility of actions that could be taken to improve coordination across a range of air and marine activities, including reconstituting the DHS Aviation Management Council and Marine Vessel Management Council. Areas under consideration for increased coordination could include the colocation of proximate OAM and USCG units and the five activities identified by officials as resulting in cost savings, including sharing intelligence, dividing up responsibilities for missions, advance sharing of mission schedules, joint training, and logistics. We provided a draft of this report to DHS and DOD for their review and comment. DOD did not comment on the report, but DHS provided written comments which are reprinted in Appendix V. In commenting on the draft report, DHS concurred with the recommendations and described actions underway or planned to address them. While DHS did not take issue with the recommendations, DHS provided details in its response that merit additional discussion in two areas. In its letter, DHS states that additional context regarding CBP’s processes and documentation was necessary to provide a more balanced assessment of the manner in which OAM allocates scarce resources in support of its air and marine asset deployment and describes the historical development of OAM as well as its processes for allocating resources. We believe that the report presents appropriate context, balanced and fair analyses of the allocation of OAM personnel and flight hours using OAM’s data, and measures OAM’s performance results using its primary and most important performance measure for fiscal year 2010—OAM’s capability to launch an aircraft when a request is made for support. 
In addition, DHS states in its comments that CBP was unable to verify or duplicate GAO’s analysis of fiscal year 2010 data from TECS but was taking steps to confirm actual figures. As the report states, we worked closely with OAM system officials to extract the underlying data from the AMOR system and discussed our preliminary analyses with OAM officials, along with the methodology we used in calculating OAM’s performance results. OAM officials stated that they could not duplicate our analyses due to limitations with AMOR’s reporting capabilities. DHS states that OAM has coordinated with the Office of Information and Technology to develop and test a TECS report following a methodology that will accurately report performance results within 60 days. DHS concurred with the recommendation that CBP document analyses, including mission requirements and threats, that support decisions on the mix and placement of OAM’s air and marine assets. DHS stated that CBP is finalizing its Fiscal Year 2012-2013 Aircraft Deployment Plan and that in the next iteration of this plan, which CBP plans to initiate in the third quarter of fiscal year 2013, CBP will provide additional documentation of its analysis supporting decisions on the mix and placement of air and marine resources, including mission requirements and threats. Such actions should increase transparency and demonstrate that resource deployment decisions are responsive to customer needs and threats. DHS also concurred with the recommendation to reassess the mix and placement of OAM’s air and marine resources to include mission requirements, performance results, and anticipated CBP strategic and technological changes to the extent that the benefits outweigh the costs, stating that it planned to complete such actions as part of the next iteration of the Aircraft Deployment Plan. 
Further, DHS states that, based on budgetary forecasts, OAM expects its budget to continue to decrease and that, as a result, OAM will meet a lower percentage of requests for air support in coming years. We acknowledge these concerns and believe that a reassessment of the right mix and placement of resources is particularly important in a constrained budgetary environment and should provide OAM with reasonable assurance that it is most effectively allocating its scarce resources and aligning them to fulfill its mission needs and related threats. Regarding the recommendation to disclose data limitations relating to the accuracy of OAM’s reported performance results for support provided, DHS concurred. It also reported that CBP is modifying its performance measure beginning with the reporting of fiscal year 2011 results and plans to disclose applicable data limitations related to performance results. Such actions should improve transparency for achieving program results and provide more objective information on the relative effectiveness of the program. In regard to the recommendation that DHS assess the feasibility of actions it could take to improve coordination across a range of air and marine activities, including reconstituting the DHS Aviation Management Council and Marine Vessel Management Council, DHS concurred and described multiple initiatives it had underway to improve coordination across air and marine activities. Such initiatives included DHS meetings between CBP and USCG aviation officials to explore options for joint acquisitions, colocation, air operations, and aviation governance, and a cost-benefit assessment analyzing potential efficiencies within DHS aviation activities, including maintenance, training, and ground handling equipment. DHS also identified coordination efforts of its component-level Boat Commodity Council to transfer used vessels from USCG to CBP. 
DHS also noted its attendance at a January 2012 interagency meeting hosted by CBP that covered helicopter and marine vessel acquisitions, the P-3 aircraft Service Life Extension Program, potential opportunities for consolidation of facilities and locations of new support units, and the Fiscal Year 2012-2013 Aircraft Deployment Plan. While these are positive initial steps that could help improve coordination, we continue to believe that it will be important for DHS to assess the feasibility of actions to further improve coordination of air and marine activities on a more permanent basis, such as reconstituting the DHS Aviation Management Council and Marine Vessel Management Council, among other possible actions. DHS also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Secretary of Defense, and interested congressional committees as appropriate. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix VI. This report addresses the extent to which U.S. Customs and Border Protection (CBP) has the right mix of air and marine assets in the right locations to meet customer needs and has effectively coordinated with the U.S. Coast Guard (USCG). 
Specifically, we reviewed the extent to which the Office of Air and Marine (OAM) (1) met air and marine support requests across locations, customers, and missions; (2) has taken steps to ensure that its mix and placement of resources met its mission needs and addressed threats; and (3) coordinated the operational use of its air and marine assets and personnel with the USCG. For all three objectives, we collected and analyzed relevant operational documents; annual reports; cooperation agreements and memoranda among federal agencies; budget information; and other relevant information issued by the Department of Homeland Security (DHS), DHS’s Program Analysis and Evaluation office, CBP’s Office of Border Patrol and OAM, U.S. Immigration and Customs Enforcement (ICE), USCG, and the Department of Defense (DOD). We also collected relevant information, data, and documentation, such as cooperative agreements between local agencies, at each of the site visits. We also interviewed officials from DHS’s Program Analysis and Evaluation Division of the Office of the Chief Financial Officer, as well as headquarters officials from CBP, OAM, Border Patrol, ICE, and USCG. In addition, we met with DOD officials responsible for programs intended to enhance maritime and air domain awareness and obtained relevant reports and documents on these efforts. We also reviewed past GAO reports and DHS studies discussing opportunities for increased coordination and discussed ongoing DHS efforts to increase oversight over air and marine assets with officials from the office of DHS’s Chief Administrative Officer. We also conducted a site visit to OAM’s Air and Marine Operations Center in Riverside, California, where we interviewed officials and received a briefing on the center’s operations, including a tour of the center. We conducted site visits to 4 of the 23 OAM branch offices, including air and marine units associated with those branches. 
At the site visits, we conducted semi-structured interviews with personnel from OAM operational air and marine units, USCG, ICE, and the Border Patrol, as well as some local law enforcement officials (OAM marine units and the USCG are not present at one location we visited). We selected these 4 locations because they illustrated OAM operations at both the northern and southern U.S. borders; a mix of threats (terrorism, drug smuggling, and illegal immigration); operating environments for air (desert, forest, urban, and rural); and marine operations along the coasts, on the Great Lakes, and, in the case of a southeast location, interactions with the Joint Interagency Task Force-South (JIATF-S) at Key West, Florida. All 4 also provide support for ICE and Border Patrol operations in the interior of the country. In addition, the 4 sites provided coverage in terms of the three geographic regions into which OAM units are divided administratively (southwest, southeast, northern). Three of the 4 sites include both OAM and USCG entities with air and/or marine assets in close geographic proximity, and the agencies use an array of air and marine assets under varying operational conditions. We also interviewed officials from JIATF-S to obtain information on that location’s coordinated operations covering parts of the Gulf of Mexico, the Straits of Florida, the Caribbean, and the Central and South America transit zone for illegal smuggling of persons and contraband. To address objectives 1 and 2, we obtained performance data covering the period of October 1, 2007, through September 30, 2010, from OAM’s system of record—the Air and Marine Operations Reporting System (AMOR)—which is a module in ICE’s Case Management System, which is in turn part of TECS, a legacy DHS system. These performance data primarily included the number of air and marine support requests that were met and not met, and the reasons why the requests were not met. 
Due to the lack of (1) documentation as to the number and identity of the AMOR tables, (2) the keys required to join them, (3) the business rules required to use the data correctly, and (4) AMOR subject matter experts, we were unable to obtain copies of the AMOR data files. Instead, we obtained copies of the temporary data extract files produced when individual reports are requested and produced by the AMOR system for the following reports:
Enforcement Support Report 02: Support Requests by Agency
Miscellaneous Report 01: No Launch Activities by Branch
Flight Hours Report 06: Flight Hours by Type of Aircraft
Flight Hours Report 09: Flight Hours by Mission
Service Hours Report 03: Service Hours by Type of Vessel
We found that data on unmet air and marine support requests prior to fiscal year 2010 may not have been entered consistently, so we used only data from fiscal year 2010 in our analysis. For example, at two of the four locations we visited, we found that a number of unmet air support requests were not entered properly prior to fiscal year 2010. We also found that many of the data entries for unmet support requests identifying which agency an activity (e.g., a flight) supported were left blank for fiscal year 2010, including 16 percent of the entries for air enforcement activities and 93 percent of the entries for marine enforcement activities. In interviews, OAM officials said these blank entries most likely represented unmet support requests in support of OAM. Based on these limitations, we did not report unmet support requests by customer for marine activities. We used the fiscal year 2010 air data from Enforcement Support Report 02 and Miscellaneous Report 01 to replicate OAM’s performance measure calculation by branch. 
First, we determined which Miscellaneous Report 01 no launches were in support of Border Patrol (BPL) as follows:
Include only no launches where BPL is listed in any of the five “in support of” codes.
Exclude the following no launch categories:
39: Canceled by requester
01: Target Legal
03: Lost Target- prior to launch
07: Visual sighting
08: Locate only
11: Insufficient/Inadequate
16: Weather
17: Information not timely
27: Target return to foreign
40: Request did not meet GSA requirements
41: Suspect no show
42: Geographic limitation/Distance too
44: No launch/Ground
45: No launch/NAV violation
46: No country clearance
56: Static display—not operated for display
57: Certificate of Authorization Restrictions
We then determined the number of BPL launches from Enforcement Report 02 and calculated the OAM performance measure for BPL support as follows:
Total requests = launches + no launches
Percentage of support requests met = launches / total requests
Finally, we mapped the Border Patrol sectors to the OAM branches. As part of our data reliability assessment, we performed electronic data testing for the data elements in the report extract files that we used; reviewed available system and user documentation, including user guides and data dictionaries; compared totals for the same time periods between similar variables from different reports; and reviewed our preliminary analyses with knowledgeable OAM officials, including the TECS Systems Control Officer. We determined that the AMOR data used in the report were sufficiently reliable for the purposes of this report. To address objectives 1 and 3, we conducted a web-based, self-administered questionnaire survey about coordination and related issues with all OAM air, OAM marine, USCG air, and USCG marine units nationwide and in the Caribbean identified as being likely to coordinate with each other by OAM and USCG headquarters. 
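The branch-level calculation described above can be sketched in Python. This is an illustrative sketch of the measure, not OAM's or GAO's actual code; the non-excluded category codes and branch counts in the example are hypothetical.

```python
# No-launch category codes excluded from the performance measure,
# per the exclusion list above (codes are reproduced as two-digit strings).
EXCLUDED_CATEGORIES = {"39", "01", "03", "07", "08", "11", "16", "17",
                       "27", "40", "41", "42", "44", "45", "46", "56", "57"}

def percent_requests_met(launches, no_launch_codes):
    """Percentage of support requests met for one branch.

    launches:        number of launches in support of Border Patrol
    no_launch_codes: category codes of the no launches in support of
                     Border Patrol (excluded categories are dropped)
    """
    counted = [c for c in no_launch_codes if c not in EXCLUDED_CATEGORIES]
    total_requests = launches + len(counted)  # Total requests = launches + no launches
    if total_requests == 0:
        return None  # no requests received by this branch
    return 100.0 * launches / total_requests

# Hypothetical branch: 90 launches; five no launches, of which two
# ("39" canceled by requester, "57" COA restrictions) are excluded.
print(percent_requests_met(90, ["02", "39", "05", "57", "06"]))  # ≈ 96.8
```

In this sketch, excluded no launches (for example, requests canceled by the requester) do not count against the branch, matching the exclusion rule described above.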
We asked OAM and USCG headquarters points of contact to identify the USCG units that were most likely to be coordinating their operations in some regard with proximately located OAM air and marine units. A total of 86 OAM and USCG units were identified by the headquarters points of contact, and senior officers from these units were asked to respond. The survey questions, although nearly identical, were tailored specifically to each type of unit—OAM air, OAM marine, USCG air, and USCG marine. OAM air and OAM marine units were asked about the sufficiency of their assets to perform certain types of missions; this was not included in the USCG questionnaires, as it was considered outside the scope of the engagement. The survey questions and summary results are included in appendix II. The questionnaire was pre-tested with two OAM air units and two OAM marine units. In addition, draft versions were reviewed by cognizant OAM and USCG headquarters personnel and by a survey methodologist at GAO. We made adjustments to question wording and order based on pre-test results and review comments we received. The survey was conducted using a self-administered questionnaire posted on the web. We contacted intended recipients via e-mail before the survey to establish that the correct respondent had been identified, and later with passwords and links to the questionnaire. We made follow-up contacts with nonrespondents by e-mail and phone throughout the field period. Headquarters (USCG and OAM) points of contact also sent e-mail reminders to those not yet responding. The survey data were collected from May 4 through May 24, 2011. We received completed questionnaires from all the recipients, for a 100 percent unit-level response rate, although not all units answered each question in the survey. Table 5 below shows the proximately located OAM and USCG air and marine units to which the survey was sent. 
We conducted this performance audit from June 2010 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The questions we asked in our survey of OAM and USCG air and marine units are shown below. Our survey consisted of closed-ended and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide responses to the open-ended questions for ease of reporting. The tables of aggregated response totals to each question are broken down by branch and type of unit. Not all eligible respondents answered each question. Questions 16, 17, and 18 were included only in the OAM surveys. For a more detailed discussion of our survey methodology, see appendix I.
Survey of Coordination of Air Operations and Assets at OAM/USCG Locations
U.S. Government Accountability Office
The U.S. Government Accountability Office (GAO) is reviewing the assets and operations of CBP's Office of Air and Marine (OAM). As part of this effort, GAO is reviewing the coordination between OAM and the U.S. Coast Guard (USCG). This questionnaire gathers information on coordination-related issues regarding air missions (including air patrols, interdiction of contraband or other illegal activities, surveillance, etc.), air-related training, determining air asset requirements, and the extent to which you have the appropriate resources for mission activities. If you would like to see or print the questionnaire before completing it online, click here to open. You will need Adobe Acrobat Reader to view this. 
If you do not have this program, click here to download this software.
About You and Your Location
Question 1: Who is the person primarily responsible for completing this questionnaire whom we can contact in case we need to clarify a response? Enter text or numbers in each of the spaces below. (e.g., Great Lakes Air and Marine Branch)
Coordination of Air/Maritime Mission Activities
Question 2: We realize that different OAM locations may have varying needs for coordination with the USCG unit there or nearby, and may not need to coordinate if operating areas and activities do not overlap. The next two questions ask whether your unit participates in any formal or informal entities intended to enhance or promote coordination, and in what specific ways, if any, it coordinates with the USCG. IF OTHER:
Question 4: IF ANY AIR/MARITIME MISSION COORDINATION TAKES PLACE: What is the one USCG/OAM unit with which your unit has the most coordination? Please enter approximate distance between your unit and the coordinating unit as a whole number of miles.
IF NO AIR/MARITIME MISSION COORDINATION IN QUESTIONS 2 AND 3: Click the link below to skip to question 13, the next applicable question. (If you do coordinate, continue with next page.) Click here to skip to Question 13
Question 6A: Are any of the following types of written guidance (including policies, agreements, MOUs) used to govern, guide or carry out any coordination prior to air/maritime missions between OAM and USCG at or near your location? Please click yes or no for each type.
agreements - Used?
USCG guidance - Used?
MOU - Used?
Other guidance - Used?
Question 6B: If yes, how helpful are they to furthering coordination on air/maritime missions? For those used, please additionally click one "helpfulness" button. 
[Table II.6 Answers to Survey Question 6]
Question 7: IF YES TO ANY GUIDANCE: If an electronic copy of the guidance is available, please upload that file(s) by browsing to its location on your computer, using the box below. Please only upload files under 2Mb in size. [Open-ended answers not displayed]
Question 13: In your opinion, should there be more, less, or about the same amount or frequency of coordination on air/maritime missions, activities, or resources between OAM and USCG at or near your location in each of the following ways? If there is currently no coordination in a particular way, and that is the appropriate level, click "About the same" for that row. IF OTHER: [Open-ended answers not displayed]
Question 18: Overall, considering the number, availability, and qualifications of personnel at your location, how satisfied or dissatisfied are you with the extent to which you have adequate personnel to effectively meet mission needs? [Table II.15 Answers to Survey Question 18]
Question 19: Do you have any additional explanations of your answers or comments on any of the issues in this questionnaire? [Open-ended answers not displayed]
Question 20: Are you done with this questionnaire? Clicking "Yes" below tells GAO that your answers are final. We will not use your answers unless the "Yes" button is checked when you last exit the questionnaire.
Figure 14 displays the number of air and marine assets assigned to OAM’s regions, which include its 23 branches and 6 National Air Security Operations Centers (NASOCs). In this appendix, survey responses from questions 16 and 17 are presented. Only Office of Air and Marine (OAM) air and marine units were surveyed about their satisfaction with aircraft and marine vessels, respectively (USCG units were not). Not all eligible respondents answered all parts of each question. 
Respondents who did not report performing a specific type of mission or who answered “don’t know” to a question about that type of mission were not included in the response counts. For a more detailed discussion of our survey methodology, see appendix I; for complete survey responses, see appendix II. Rebecca Gambler, (202) 512-8777 or [email protected]. In addition to the contact named above, Cindy Ayers (Assistant Director), Chuck Bausell, Alexander Beata, Richard D. Brown, Frances A. Cook, Jeff R. Jensen, Nancy Kawahara, Stanley Kostyla, Linda S. Miller, Carl M. Ramirez, Richard M. Stana, Clarence Tull, Jonathan Tumin, and Johanna Wong made significant contributions to this report.
|
Within DHS, U.S. Customs and Border Protection’s (CBP) OAM deploys the largest law enforcement air force in the world. In support of homeland security missions, OAM provides aircraft, vessels, and crew at the request of its customers, primarily Border Patrol, which is responsible for enforcing border security, and tracks its ability to meet requests. GAO was asked to determine the extent to which OAM (1) met its customers’ requests; (2) has taken steps to ensure its mix and placement of resources effectively met mission needs and addressed threats; and (3) coordinated the use of its assets with the USCG, which executes its maritime security mission using its own assets. GAO reviewed DHS policies and interviewed OAM, Border Patrol, U.S. Immigration and Customs Enforcement, and USCG officials in headquarters and in 4 field locations selected based on factors such as threats and operating environments. Results from these field visits are not generalizable. GAO analyzed OAM support request data for fiscal year 2010 and surveyed OAM and USCG officials at 86 proximately located units to determine the extent of cooperation between the two agencies. This report is a public version of a law enforcement sensitive report GAO issued in February 2012. Information deemed sensitive has been redacted. GAO’s analysis of the Office of Air and Marine (OAM) data found that OAM met 73 percent of the 38,662 air support requests and 88 percent of the 9,913 marine support requests received in fiscal year 2010. The level of support differed by location, customer, and type of mission. For example, in its northern region OAM met air support requests 77 percent of the time, and in its southeast region it met these requests 60 percent of the time. The main reasons for unmet air and marine support requests were maintenance and adverse weather, respectively. OAM has taken actions, such as developing an aircraft modernization plan and purchasing all-weather vessels, to address these issues. 
OAM could benefit from taking additional steps to better ensure that its mix and placement of resources meets mission needs and addresses threats. GAO’s analysis of OAM’s fiscal year 2010 performance results indicates that OAM did not meet its national performance goal to fulfill greater than 95 percent of Border Patrol air support requests and did not provide higher rates of support in locations designated as high priority based on threats. For example, one high-priority Border Patrol sector had the fifth highest support rate across all nine sectors on the southwest border. OAM could benefit from reassessing the mix and placement of its assets and personnel, using performance results to inform these decisions. Such a reassessment could help provide OAM with reasonable assurance that it is most effectively allocating scarce resources and aligning them to fulfill mission needs and related threats. Additionally, OAM has not documented its analyses to support its asset mix and placement across locations. For example, OAM’s fiscal year 2010 deployment plan stated that OAM deployed aircraft and maritime vessels to ensure that its forces were positioned to best meet field commanders’ needs and respond to emerging threats, but OAM did not have documentation that clearly linked the deployment decisions in the plan to these goals. Such documentation could improve transparency to help demonstrate the effectiveness of its decisions in meeting mission needs and addressing threats. GAO’s analysis of OAM and U.S. Coast Guard (USCG) air and marine survey responses indicated that they coordinated with their proximately located counterparts more frequently for activities directly related to carrying out their respective agencies’ missions (mission-related activities) than for mission support activities. 
For example, within mission-related activities, 54 percent of the 86 respondents reported sharing intelligence on a frequent basis, and, within mission-support activities, about 15 percent reported that they frequently coordinated for maintenance requests. Survey respondents, Department of Homeland Security (DHS) analyses, and GAO site visits confirm that opportunities exist to improve certain types of coordination, such as colocating proximate OAM and USCG units, which currently share some marine and no aviation facilities. In addition, DHS does not have an active program office dedicated to the coordination of aviation or maritime issues. DHS could benefit from assessing actions it could take to improve coordination across a range of air and marine activities, including reconstituting departmental oversight councils, to better leverage existing resources, eliminate unnecessary duplication, and enhance efficiencies. GAO recommends, among other things, that CBP reassess decisions and document its analyses for its asset mix and placement and that DHS enhance oversight to ensure effective coordination of OAM and USCG resources. DHS concurred with the recommendations.
|
TEA-21 authorized a total of $36 billion in “guaranteed” funding through fiscal year 2003 for a variety of transit programs, including financial assistance to states and localities to develop, operate, and maintain transit systems. Under one of these programs, the New Starts program, FTA identifies and funds worthy fixed guideway transit projects, including heavy, light, and commuter rail, ferry, and certain bus projects (such as bus rapid transit). FTA funds New Starts projects through full funding grant agreements (FFGA), which establish the terms and conditions for federal participation in a project. By statute, the federal funding share of a New Starts project cannot exceed 80 percent of its net cost. To obtain an FFGA, a project must progress through a regional review of alternatives and meet a number of federal requirements, including providing data for the New Starts evaluation and ratings process. Projects presented to FTA for evaluation go through a lengthy process from planning to preliminary engineering and final design, which may culminate in an FFGA and the actual construction phase. FTA conducts management oversight of projects from the preliminary engineering stage through construction. All projects that do not have an existing or pending FFGA and are in preliminary engineering or final design are considered to be in the New Starts pipeline. There are currently 52 projects in the pipeline. Figure 1 illustrates the overall planning and project development process for New Starts projects. To determine whether a project should receive federal funds, FTA’s New Starts evaluation process assigns ratings based on a variety of financial and project justification criteria and then assigns an overall rating. These criteria are identified in TEA-21 and reflect a broad range of benefits and effects of the proposed projects, such as capital and operating finance plans, mobility improvements, and cost-effectiveness. 
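The statutory cap on the federal funding share described above is simple arithmetic; a minimal illustration follows, with a hypothetical project cost (the dollar figure is invented for the example, not drawn from the report).

```python
# Statutory cap: the federal share of a New Starts project's net cost
# cannot exceed 80 percent (grant recipients may request less).
FEDERAL_SHARE_CAP = 0.80

def max_federal_grant(net_project_cost):
    """Largest federal New Starts grant allowed for a given net project cost."""
    return FEDERAL_SHARE_CAP * net_project_cost

# A hypothetical $500 million project: the federal grant is capped at
# $400 million, so the locality must fund at least the remaining 20 percent.
print(max_federal_grant(500_000_000))  # 400000000.0
```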
FTA assigns proposed projects a rating of high, medium-high, medium, low-medium, or low for each criterion. The individual criterion ratings are combined into the summary financial and project justification ratings. On the basis of these two summary ratings, FTA develops the overall project rating using the following decision rules:
Highly Recommended requires at least a medium-high for both the financial and project justification summary ratings.
Recommended requires at least a medium for both the financial and project justification summary ratings.
Not Recommended is assigned to projects not rated at least medium for both the financial and project justification summary ratings.
Not Rated indicates that FTA has serious concerns about the information submitted for the mobility improvements and cost-effectiveness criteria because the underlying assumptions used by the project sponsor may have inaccurately represented the benefits of the project.
Not Available is the rating given to projects that did not submit complete data to FTA for evaluation for the fiscal year 2004 cycle.
Although many projects receive an overall rating of “recommended” or “highly recommended,” only a few are proposed for FFGAs in a given fiscal year. FTA proposes “recommended” or “highly recommended” projects for FFGAs when it believes that the projects will be able to meet certain conditions during the fiscal year that the proposals are made. These conditions include the following:
The local contribution to funding for the project must be made available for distribution.
The project must be in the final design phase and have progressed to the point where uncertainties about costs, benefits, and impacts (e.g., environmental or financial) are minimized.
The project must meet FTA’s tests for readiness and technical capacity. These tests confirm that there are no cost, project scope, or local financial commitment issues remaining. 
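The overall-rating decision rules described above can be sketched as a small function. This is an illustration of the stated rules only, not FTA's actual implementation; the "Not Rated" and "Not Available" outcomes are assigned separately for data problems and are not modeled here.

```python
# Criterion ratings ordered from lowest to highest, so they can be compared.
RATING_ORDER = ["low", "low-medium", "medium", "medium-high", "high"]

def at_least(rating, floor):
    """True if `rating` is at or above `floor` on the FTA rating scale."""
    return RATING_ORDER.index(rating) >= RATING_ORDER.index(floor)

def overall_rating(financial, project_justification):
    """Combine the two summary ratings into the overall project rating."""
    if at_least(financial, "medium-high") and at_least(project_justification, "medium-high"):
        return "Highly Recommended"
    if at_least(financial, "medium") and at_least(project_justification, "medium"):
        return "Recommended"
    return "Not Recommended"

print(overall_rating("medium-high", "high"))    # Highly Recommended
print(overall_rating("medium", "medium-high"))  # Recommended
print(overall_rating("low", "medium"))          # Not Recommended
```

Note that a single "low" summary rating, on either side, is enough to produce "Not Recommended" under these rules, which is why the preference policy described below operates through the financial rating.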
FTA implemented two changes to the New Starts process for fiscal year 2004. First, in response to language contained in a conference report prepared by the House Appropriations Committee, FTA instituted a preference policy in its ratings process favoring current and future projects that do not request more than a 60 percent federal share. Second, FTA revised its cost-effectiveness and mobility improvements criteria by adopting a Transportation System User Benefits (TSUB) measure that gives equal weight to benefits for both new and existing transit system riders. Project sponsors we interviewed endorsed the TSUB measure, but implementing it has been difficult for both FTA and the project sponsors because of the variety of local travel forecasting models that exist and problems with those models. These difficulties resulted in some projects not being rated for the fiscal year 2004 cycle. The New Starts evaluation and ratings process for fiscal year 2004 was generally similar to that of fiscal year 2003, but FTA implemented two changes that are described in its Annual Report on New Starts for Fiscal Year 2004. First, in response to language contained in a conference report prepared by the House Appropriations Committee, FTA instituted a preference policy in its ratings process favoring current and future projects that do not request more than a 60 percent federal share. To achieve this, FTA changed its criterion related to capital finance plans to give projects seeking a federal share greater than 60 percent a “low” financial rating. A “low” financial rating is likely to result in a “not recommended” overall rating. Second, FTA changed the calculation of the cost-effectiveness and mobility improvements criteria by adopting the TSUB measure. The TSUB measure replaced the “cost per new rider” measure that had been used in past ratings cycles. 
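The fiscal year 2004 preference policy described above, under which a requested federal share greater than 60 percent forces a "low" rating on the capital finance criterion, can be sketched as follows. The helper function and ratings are illustrative assumptions, not FTA's actual code.

```python
def financial_rating(underlying_rating, requested_federal_share):
    """Apply the 60 percent preference policy to a capital finance rating.

    underlying_rating:      the rating the finance plan would otherwise earn
    requested_federal_share: requested federal share as a fraction (0.0-1.0)
    """
    if requested_federal_share > 0.60:
        return "low"  # likely to yield a "not recommended" overall rating
    return underlying_rating

print(financial_rating("medium-high", 0.80))  # low
print(financial_rating("medium-high", 0.60))  # medium-high
```

As the sketch shows, the policy overrides an otherwise strong finance plan whenever the requested share exceeds 60 percent, which is how the 4 projects discussed below received "low" financial ratings.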
According to FTA, the new TSUB measure reflects an important goal of any major transportation investment—reducing the amount of travel time and out-of-pocket costs that people incur for taking a trip (i.e., the cost of mobility). In contrast to the previous “cost per new rider” measure, the TSUB measure gives equal weight to both new and existing transit system riders by measuring not only the benefits to people who change transportation modes (e.g., highways to transit) but also benefits to existing transit riders and highway users. Figure 2 illustrates the New Starts evaluation and ratings process, including the changes made to the process for fiscal year 2004. The TEA-21 legislation that authorizes the New Starts program states that federal grants are to be made “for 80 percent of the net project cost, unless the grant recipient requests a lower grant percentage.” The legislation further provides that, in evaluating grant applications, FTA shall consider the degree of local financial commitment and the extent to which the local commitment exceeds the minimum nonfederal share of 20 percent. For the fiscal year 2004 cycle, FTA instituted a 60 percent preference policy that ultimately is likely to result in an overall rating of “not recommended” for projects that seek more than a 60 percent federal share. Although TEA-21 authorized FTA to consider local financial commitments that increase the local share of net project cost, and it vested FTA with discretion as to how to achieve this, the Secretary of Transportation is required by law to issue regulations defining the manner in which projects will be evaluated and rated. In December 2000, FTA finalized a regulation that stated that the evaluation and ratings process would consider, among other things, the extent to which projects have a local financial commitment that exceeds the 20 percent minimum. Essentially, this regulation merely restated the TEA-21 statutory criteria. 
Also, when FTA implemented its 60 percent preference policy, it did not amend its regulations to support the change in policy or its current procedures. By not amending its regulations, which have the full force and effect of law, to reflect this change, FTA has not provided an opportunity for public comment on its new policy. Furthermore, explicitly stating all of FTA’s criteria and procedures in regulations would help to ensure that project sponsors, Metropolitan Planning Organizations, and others involved in considering potential New Starts projects were fully aware of FTA’s preference policy and could make their investment decisions on the basis of a transparent evaluation and ratings process. FTA has stated that in instituting the 60 percent preference policy, it was following congressional direction as expressed in a conference report prepared by the House Appropriations Committee. That report states “the conferees direct FTA not to sign any new full funding grant agreements after September 30, 2002, that have a maximum federal share of higher than 60 percent.” As stated previously, TEA-21 provides FTA with discretion to give priority to projects that have a federal share lower than 80 percent. FTA officials told us that favoring projects with a federal share that does not exceed 60 percent would allow more projects to receive New Starts funding and would help ensure that local governments play a major role in funding such projects. Of the 32 projects that were rated for the fiscal year 2004 cycle, 4 received a “low” financial rating and a “not recommended” overall rating because, among other reasons, they proposed a federal share above 60 percent. According to FTA, since the release of FTA’s Annual Report in February 2003, one of these projects—the San Juan Tren Urbano Minillas Extension project—was withdrawn and the three remaining projects are continuing to address their financial issues. 
FTA officials expressed the view that reducing the level of federal share to 60 percent has a minimal impact because, over the last 10 years, the federal share for New Starts projects’ grant agreements has averaged around 50 percent and has been trending lower. However, many of the project sponsors we interviewed (7 of the 11) noted that the reduced federal share did, in fact, have an impact on their projects’ schedule and financing, which had to be revised prior to or during the ratings process. FTA’s decision to institute its preference policy for projects that seek no more than a 60 percent federal share may also adversely affect future projects, according to project sponsors that we interviewed, as the following examples illustrate:

Six of the 11 project sponsors said that continuing a 60 percent preference policy for the amount of the federal share for projects might reduce the number of future projects because of difficulties faced by local and state governments in providing an increased local share. Transit industry officials we interviewed agreed with this statement.

Nine of the 11 project sponsors said that the unequal federal share for highway and transit projects could bias the local decision-making process in favor of highway projects. Highway projects generally receive a federal share of 80 percent or more, in contrast to the current preference policy of a 60 percent federal share for New Starts transit projects.

The nine project sponsors we interviewed who were affected by the TSUB measure believed it was an improvement over the previous “cost per new rider” measure because the TSUB measure takes into account a broader set of costs and benefits to the overall transit system. For example, the measure considers mobility benefits related to improved travel time for all users of a transportation corridor, rather than benefits accruing from only new riders.
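The distinction between the two measures can be sketched in simplified form. The report does not give FTA’s actual TSUB formula, so the functions below are illustrative assumptions only; what they show is the conceptual shift from counting only new riders to crediting travel-time savings to all travelers in a corridor:

```python
# Simplified, assumed contrast between the previous "cost per new rider"
# measure and a user-benefits measure that counts all travelers. These
# formulas are illustrative, not FTA's actual TSUB computation.

def cost_per_new_rider(annualized_cost, new_riders):
    # Old measure: only riders newly attracted to transit count.
    return annualized_cost / new_riders

def user_benefits_hours(travelers):
    # travelers: (baseline_minutes, build_minutes) pairs covering new
    # riders, existing transit riders, and highway users alike.
    return sum(base - build for base, build in travelers) / 60.0

def cost_per_user_benefit_hour(annualized_cost, travelers):
    # New-style measure: cost per hour of travel time saved system-wide.
    return annualized_cost / user_benefits_hours(travelers)
```

In this framing, a project that mostly speeds up trips for existing riders scores zero under the old measure but registers real benefits under the new one.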
However, many project sponsors encountered difficulties in providing accurate data needed to calculate the new TSUB measure. To implement the TSUB measure, FTA developed a software package, called Summit, to extract certain data from local travel forecasting models that are used in planning transit projects. FTA hired contractors to assist project sponsors in using the Summit software to calculate the TSUB value. During the implementation process, FTA discovered that many of the local travel forecasting models had underlying errors. Some of these errors were significant due to faulty design and assumptions made in some of the local travel forecasting models; others were simple coding errors in the models. As a result, many projects experienced difficulties that prevented them from calculating an acceptable value for the TSUB measure. According to FTA’s Annual Report, 11 of the 32 projects rated for the fiscal year 2004 cycle were identified as being unable to calculate a valid TSUB value. As a result, these projects were “not rated” for the cost-effectiveness criterion. Additionally, 7 of the 9 project sponsors we interviewed who were affected by the TSUB measure encountered difficulties in the measure’s implementation: 5 had difficulty getting their local transit forecasting models to generate the data needed for FTA’s software to calculate the measure, 3 did not have adequate data to develop the measure, and 2 said that FTA did not provide enough documentation about the measure and the software used to calculate the TSUB. As described above, FTA officials told us that they believe the major problem in implementing the TSUB measure stemmed from problems with the underlying local travel forecasting models, not FTA’s software or guidance on the measure. Nonetheless, FTA is taking some steps to address the problems raised in the implementation of the TSUB measure.
For example, FTA hired contractors to work with transit sponsors to correct problems with the local travel forecasting models and the software used to calculate the TSUB measure. These contractors provided technical support to all affected project sponsors and assisted some sponsors in correcting the underlying problems identified in their local travel forecasting models. FTA officials also told us that they are continuing to work closely with the 11 project sponsors who were unable to calculate values for the TSUB measure. When the problems in the projects’ local travel forecasting models are corrected and data are resubmitted to FTA for evaluation, FTA plans to re-rate these projects. As soon as a project receives a revised rating, FTA officials told us that they would inform Congress and other appropriate parties. Project sponsors we interviewed told us that they would have benefited from additional guidance and other technical support, such as documentation for the software used to calculate the TSUB measure. They also requested additional opportunities to discuss their concerns and provide input to FTA officials about the measure. FTA officials told us that they are developing software documentation for the TSUB measure and plan to release it in June 2003. Furthermore, FTA has held a series of four roundtable discussions with project sponsors and transit industry officials, specifically on the TSUB measure and its implementation. FTA plans to hold two additional roundtable discussions during fiscal year 2004. FTA officials and an FTA consultant told us that they anticipate that fewer projects will have difficulties calculating accurate TSUB values in future New Starts evaluation and ratings cycles. FTA plans to continue addressing technical problems related to inaccurate local travel forecasting models on a case-by-case basis. FTA officials also acknowledged the need to develop a more systematic approach for dealing with these problems.
Of the 52 projects FTA evaluated for the fiscal year 2004 cycle, 32 were rated and 20 were statutorily exempt from the ratings process because they requested less than $25 million in New Starts funding. Figure 3 shows the results of the process for the fiscal year 2004 cycle and how they compare with those of fiscal year 2003, when 50 projects were evaluated. From fiscal years 2003 to 2004, the number of “recommended” projects decreased from 25 to 12, while the number of projects that received a rating of “not recommended” rose from 4 to 11. The primary reasons for these changes were (1) lower financial ratings, which resulted from the inability of some projects to conform to the reduced federal share, and (2) “low” ratings received on the cost-effectiveness and mobility improvements criteria resulting from implementation of the new TSUB measure. In addition, the number of projects that were “not rated” or “not available” rose from 2 to 7, largely due to difficulties project sponsors had in determining a value for the TSUB measure. Following the fiscal year 2004 New Starts evaluation and ratings process, FTA proposed four projects for new federal funding commitments. Inclusion of one of them—the Chicago Ravenswood Line Expansion project—is unusual because FTA assigned it an overall project rating of “not rated” even though, on the basis of FTA’s New Starts regulations, a project must have an overall rating of at least “recommended” to receive a grant agreement. According to FTA officials, this project could not be rated because its local travel forecasting data and models did not support calculation of the new benefits measure. However, the officials told us that they decided to select this project for a proposed grant agreement because they believed that the data problems would be corrected, and the project would be able to achieve a “recommended” rating. 
FTA officials believe that the Chicago Ravenswood Line Expansion project, along with the other three proposed projects, will be ready for a grant agreement by the end of fiscal year 2004. Officials said that other projects that received overall ratings of “recommended” or “highly recommended” would not be ready at that time. Figure 4 summarizes the ratings of the four proposed projects, which are further described in appendix I. The administration’s fiscal year 2004 budget proposal requests that $1.5 billion be made available for New Starts, a $0.3 billion increase over the fiscal year 2003 level. The budget proposal also contains three initiatives—reducing the federal share to 50 percent, allowing nonfixed guideway projects to be funded through New Starts, and replacing the “exempt” classification with a streamlined ratings process for projects requesting less than $75 million in New Starts funding. The administration’s budget proposal for fiscal year 2004 requests that $1.5 billion be made available for the construction of new transit systems and expansion of existing systems through the New Starts program—an increase of $0.3 billion, or 25 percent over the $1.2 billion appropriated for fiscal year 2003. The commitment authority for fiscal year 2004 and beyond will be addressed in the next surface transportation authorization legislation. Because FTA’s fiscal year 2004 budget proposes that $1.5 billion in commitments be made available for the New Starts program, FTA expects that the new commitment authority adopted in the authorization legislation will, at a minimum, be sufficient to cover this amount. Figure 5 illustrates the specific allocations FTA has requested for fiscal year 2004.
It shows that:

$1.08 billion would be allocated among 21 projects with existing grant agreements;

$235 million would be allocated among the 4 projects proposed for new FFGAs;

$121.2 million would be allocated among other projects in final design and preliminary engineering that do not have existing, pending, or proposed FFGAs (these projects may include those designated by Congress);

$55 million would be allocated to 1 project with a pending grant agreement (i.e., the FFGA was proposed in an earlier year, but has not yet been completed); and

the remainder of the funds would be allocated to other mandated projects and oversight activities.

The administration has proposed that the federal share of New Starts project costs be reduced from the current statutory maximum level of 80 percent to a statutory maximum of 50 percent. The possible advantages of this proposed reduction would be similar to those cited by FTA officials as justification for the 60 percent preference policy—that is, the change may allow FTA to fund additional projects and the local governments sponsoring the projects would be encouraged to provide a greater degree of financial commitment. However, a reduction in the federal share may adversely affect some future projects. Nine of the 11 project sponsors we interviewed were opposed to a reduction of the federal share for projects from the current statutory level of 80 percent to 50 percent. These sponsors said that a reduced federal share may make it more difficult for communities to participate in the New Starts program because they will have to provide an increased local share. It may also affect local decision making because it would make the federal share for transit projects lower than that provided for most highway projects, which generally receive a federal share of 80 percent or more.
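The itemized allocations can be cross-checked against the $1.5 billion request with simple arithmetic. A quick sketch (the residual category’s exact amount is not itemized in the report, so it is derived here as the remainder):

```python
# Cross-check of the itemized fiscal year 2004 New Starts allocations
# against the $1.5 billion total requested (amounts in millions of
# dollars). The residual is not itemized in the report; it is derived
# here as the remainder.
total_requested = 1500.0
itemized = {
    "21 projects with existing grant agreements": 1080.0,
    "4 projects proposed for new FFGAs": 235.0,
    "other final design / preliminary engineering projects": 121.2,
    "1 project with a pending grant agreement": 55.0,
}
# Remainder goes to other mandated projects and oversight activities.
remainder = total_requested - sum(itemized.values())

# The request is a 25 percent increase over the $1.2 billion
# appropriated for fiscal year 2003.
increase_over_fy2003 = (total_requested - 1200.0) / 1200.0
```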
We reported in 2002 that a number of the nation’s leading transportation experts had suggested that federal matching requirements should be equal for all transportation modes to avoid creating incentives for local decision makers to pursue projects in one mode that might be less effective than projects in other modes. However, as we noted earlier, over the past 10 years requests for federal assistance for New Starts projects have averaged around 50 percent and have been trending lower. Another initiative proposed in the administration’s fiscal year 2004 budget proposal would allow certain nonfixed guideway transit projects (e.g., regular or express bus service) to be eligible for New Starts funding. Currently, New Starts projects are exclusively on fixed guideways and occupy a separate right-of-way. According to FTA, the proposal would allow project sponsors to choose the most appropriate mode to serve specific corridors. Three of the 11 project sponsors we interviewed supported the initiative because they believed that it gives local communities greater flexibility when choosing types of transit projects. Seven of the 11 project sponsors we interviewed questioned the need for allowing nonfixed guideway projects into the New Starts process. They were concerned that there would be less emphasis on traditional fixed guideway New Starts projects. Transit industry officials we interviewed shared this concern. Finally, the administration has proposed replacing the “exempt” classification with a streamlined ratings process for projects requesting less than $75 million in New Starts funding. Currently, projects seeking less than $25 million in New Starts funding are exempt from the ratings process and are not evaluated on the same project justification criteria as projects requesting more than $25 million. 
By eliminating the “exempt” classification and replacing it with a streamlined ratings process for projects requesting less than $75 million, FTA would ensure that all projects receive a rating and are evaluated on the basis of the same criteria. This is a hallmark of performance-oriented evaluation. However, 6 of 11 project sponsors we interviewed opposed eliminating the “exempt” classification. These project sponsors believed that elimination of the “exempt” classification would reduce the number of funding applications from smaller cities because of the cost and time involved in providing the full evaluation data. Figure 6 summarizes the advantages and disadvantages of the three proposed initiatives in the administration’s fiscal year 2004 budget proposal, as expressed by FTA officials and project sponsors we interviewed. Although FTA has the authority to favorably rate proposed projects that request a lower federal share, it also has a responsibility to fully inform all transit agencies of changes that are made to the evaluation and ratings process. Because FTA has not revised its regulations to reflect its 60 percent preference policy, transit sponsors, other members of the transit community, and the public may not be fully aware of FTA’s preference policy and have not had the opportunity to formally comment on it. By revising its regulations to reflect its current policy, FTA would have the opportunity to obtain public comments on its proposed rulemaking, thus increasing the transparency of the agency’s decision-making process and ensuring that the views of affected transit agencies and other interested parties are considered in that process. In its implementation of the Transportation System User Benefits measure, FTA discovered that many local travel forecasting models used by project sponsors in planning New Starts projects were flawed or had difficulty generating the required data. 
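The current and proposed thresholds amount to a simple classification rule over the requested funding amount. A sketch (the function name and track labels are illustrative assumptions; the $25 million and $75 million thresholds come from the report):

```python
# Which review track applies to a project, given its New Starts funding
# request in millions of dollars. Labels are illustrative assumptions;
# the thresholds are those described in the report.

def review_track(request_millions: float, proposed_rules: bool = False) -> str:
    if proposed_rules:
        # FY2004 budget proposal: streamlined ratings below $75 million,
        # so every project receives a rating under common criteria.
        return ("streamlined ratings" if request_millions < 75
                else "full evaluation and ratings")
    # Current rules: requests below $25 million are exempt from ratings.
    return ("exempt" if request_millions < 25
            else "full evaluation and ratings")
```

Under the proposal, a project requesting, say, $50 million would move from the full process into the streamlined track, while one requesting $20 million would lose its exemption but face only the streamlined review.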
FTA officials considered this to be a major problem and they acknowledged the need for a more systematic way to address the problem across all transit agencies that are current or future New Starts project sponsors. FTA has assisted project sponsors on a case-by-case basis and plans to do so in the future. Additional guidance from FTA on what specific information is required from local travel forecasting models could help transit agencies generate accurate data for the measure. To ensure that the New Starts regulations reflect FTA’s actual evaluation and ratings process and procedures, the Secretary of Transportation should direct the Administrator, FTA, to amend the agency’s regulations governing the level of federal funding share for projects to reflect its current policy. To systematically address the problems with the implementation of the Transportation System User Benefits measure, the Secretary of Transportation should direct the Administrator, FTA, to issue additional guidance to transit agencies describing FTA’s expectations regarding the local travel forecasting models and the specific type of data FTA requires to calculate the measure. We obtained oral comments on a draft of this report from the Department of Transportation. Department officials generally agreed with the information presented in the report and they provided technical clarifications, which we incorporated as appropriate. They concurred with the recommendation about providing guidance on the user benefits measure and said that they will consider the recommendation about amending the regulations related to federal funding share. To describe the changes in the New Starts process, we analyzed information in FTA’s Annual Report on New Starts for Fiscal Year 2004.
To identify any issues related to those changes, we interviewed FTA officials and contractors hired by FTA to implement those changes; 11 of the 52 sponsors of fixed guideway transit projects being considered for New Starts funding in fiscal year 2004; Metropolitan Planning Organization (MPO) officials involved in 5 of the projects whose sponsors we interviewed; and transit industry officials, including senior officials at the American Public Transportation Association and the Chair of the New Starts Working Group—an organization of New Starts project sponsors, MPOs, and private transit industry firms, who advocate improvements to the New Starts evaluation and ratings process. To determine how many New Starts projects were evaluated, rated, and proposed for funding in fiscal year 2004, we analyzed information in FTA’s Annual Report and in various budget and financial documents prepared by FTA. To identify proposed funding commitments and initiatives related to New Starts in the administration’s fiscal year 2004 budget proposal—and the challenges they might present for future projects—we reviewed pertinent FTA documents, including its Annual Report and proposed budget, and we interviewed a wide variety of officials affected by the changes. These included the individuals listed above (FTA officials, project sponsors, MPO officials, and transit industry representatives). We conducted our review from March 2003 through June 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to congressional committees with responsibilities for transit issues; the Secretary of Transportation; the Administrator, Federal Transit Administration; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. 
If you or your staffs have any questions on matters discussed in this report, please contact me at [email protected]. An additional key GAO contact and contributors to this report are listed in appendix III. The Chicago Transit Authority (CTA) is planning a series of capital improvements to enhance the operation of the Ravenswood heavy rail line, which currently experiences capacity problems through a high-density 9.3-mile corridor. The Ravenswood Line Expansion Project would allow CTA to expand platforms and stations along the existing line to accommodate longer trains. The overall capital cost of the project is estimated at $529.9 million. The federal share requested is $245.5 million (46 percent). At present, this project has been identified as “not rated” due to concerns about some of the information underlying the calculation of the Transportation System User Benefits (TSUB) measure. However, on the basis of work conducted to date, the Federal Transit Administration (FTA) believes that the remaining issues will be resolved in the near future and that an overall project rating of “recommended” is likely to be granted. The Las Vegas Regional Transportation Commission (RTC) is proposing a 2.28-mile Resort Corridor Automated Guideway Transit (elevated monorail) project. The monorail will serve the Las Vegas central business district and the resort corridor along the Las Vegas “strip.” The estimated capital cost for the project is $324.8 million. RTC is seeking $159.7 million (50 percent) in New Starts funding. The Las Vegas Resort Corridor Project received a “high” rating for cost-effectiveness, as demonstrated by its high transit system user benefits. The New York Metropolitan Transit Authority (MTA) is designing a direct access for Long Island Rail Road (LIRR) passengers to a new passenger concourse in Grand Central Station in Midtown Manhattan.
The 4-mile, two-station commuter rail extension under the East River will contribute to the overall growth of the nation’s largest commuter rail system. The projected capital cost of the project is $5.3 billion. MTA is requesting $2.6 billion (49 percent) in New Starts funding. LIRR has 162,000 daily riders, and this project will allow them to access the east side of New York by connecting LIRR with Grand Central Station. FTA officials believe that the project will reduce travel time for many riders. The Central Puget Sound Regional Transit Authority (Sound Transit) is proposing a 24-mile Central Link light rail transit line from central Seattle toward, but not connecting to, the Seattle-Tacoma airport. The total capital cost for the project is estimated at $2.5 billion. Sound Transit is expected to seek $500 million (20 percent) in New Starts funding. The Central Link project entered Preliminary Engineering in July 1997 and Final Design in February 2000. FTA originally entered into a full funding grant agreement for the “Seattle Sound Move Corridor” project in January 2001. Congress and the Department of Transportation’s Office of the Inspector General raised significant questions about the project costs and directed Sound Transit to reexamine the entire project to reduce risks and better meet budget limitations. Sound Transit identified the Central Link component of the larger Seattle Sound Move Corridor project as its new minimum operable segment.

Las Vegas (Resort Corridor Fixed Guideway)
New York (Long Island Railroad Eastside Access)
Pittsburgh (North Shore Connector Light Rail Transit)
San Francisco (New Central Subway Project)
Washington, D.C. (Dulles Corridor Bus Rapid Transit)

In addition to the person named above, other key contributors to this report were Alan Belkin, Christine Bonham, R. Stockton Butler, Brandon Haller, Bert Japikse, Ryan Petitte, and David Laverny-Rafter.
Under the Transportation Equity Act for the 21st Century (TEA-21), Congress authorized federal funding for New Starts fixed guideway transit projects--including rail and bus rapid transit projects that met certain criteria. In response to an annual mandate under TEA-21, GAO assessed the New Starts evaluation and ratings process for the fiscal year 2004 cycle, including (1) changes to the process and any related issues and (2) any challenges related to New Starts initiatives contained in the administration's fiscal year 2004 budget proposal. FTA made two changes to the New Starts evaluation and ratings process for the fiscal year 2004 cycle. First, in response to language contained in a conference report prepared by the House Appropriations Committee, FTA adopted a 60 percent preference policy, which in effect, generally reduced the level of New Starts federal funding share for projects from 80 percent to 60 percent. Because FTA has not revised its program regulations to reflect this change, transit agencies, project sponsors, and the public did not have an opportunity to formally comment on the change. Explicitly stating its criteria and procedures in regulation would allow those involved in considering potential projects to make their investment decisions on the basis of a transparent process. Second, FTA revised some of the criteria used in the ratings process to include a new Transportation System User Benefits measure. Project sponsors GAO interviewed said that the measure was an improvement over the previous benefits measure because it considers benefits to both new and existing transit system riders. However, many project sponsors experienced difficulties in generating a value for the measure for a number of reasons, such as problems with their local forecasting models. FTA officials are working closely with project sponsors to correct these problems, but more guidance may be necessary to avert similar difficulties in the future.
The administration's fiscal year 2004 budget proposal requests that $1.5 billion be made available for New Starts for that year, a 25 percent increase over fiscal year 2003. The budget proposal contains three initiatives--reducing the federal share to 50 percent, allowing certain nonfixed guideway projects to be funded through New Starts, and establishing a streamlined ratings process for projects requesting less than $75 million in New Starts funding. These initiatives may allow FTA to fund more projects and give local communities flexibility in choosing among transit modes. However, they may also create challenges for some future transit projects, such as difficulties in generating an increased local funding share or a reduction in the number of smaller communities that will participate in New Starts.
As of the end of February 2005, an estimated 827,277 servicemembers had been deployed in support of OIF. Deployed servicemembers, such as those in OIF, are potentially subject to occupational and environmental hazards that can include exposure to harmful levels of environmental contaminants such as industrial toxic chemicals, chemical and biological warfare agents, and radiological and nuclear contaminants. Harmful levels include high-level exposures that result in immediate health effects. Health hazards may also include low-level exposures that could result in delayed or long-term health effects. Occupational and environmental health hazards may include such things as contamination from the past use of a site, from battle damage, from stored stockpiles, from military use of hazardous materials, or from other sources. As a result of numerous investigations that found inadequate data on deployment occupational and environmental exposure to identify the potential causes of unexplained illnesses among veterans who served in the 1991 Persian Gulf War, the federal government increased efforts to identify potential occupational and environmental hazards during deployments. In 1997, a Presidential Review Directive called for a report by the National Science and Technology Council to establish an interagency plan to improve the federal response to the health needs of veterans and their families related to the adverse effects of deployment. The Council published a report that set a goal for the federal government to develop the capability to collect and assess data associated with anticipated exposure during deployments. Additionally, the report called for the maintenance of the capability to identify and link exposure and health data by Social Security number and unit identification code. 
Also in 1997, Public Law 105-85 included a provision recommending that DOD ensure the deployment of specialized units to theaters of operations to detect and monitor chemical, biological, and similar hazards. The Presidential Review Directive and the public law led to a number of DOD instructions, directives, and memoranda that have guided the collection and reporting of deployment OEHS data. DHSD makes recommendations for DOD-wide policies on OEHS data collection and reporting during deployments to the Office of the Assistant Secretary of Defense for Health Affairs. DHSD is assisted by the Joint Environmental Surveillance Working Group, established in 1997, which serves as a coordinating body to develop and make recommendations for DOD-wide OEHS policy. The working group includes representatives from the Army, Navy, and Air Force OEHS health surveillance centers, the Joint Staff, other DOD entities, and VA. Each service has a health surveillance center—the CHPPM, the Navy Environmental Health Center, and the Air Force Institute for Operational Health—that provides training, technical guidance and assistance, analytical support, and support for preventive medicine units in the theater in order to carry out deployment OEHS activities in accordance with DOD policy. In addition, these centers have developed and adapted military exposure guidelines for deployment using existing national standards for human health exposure limits and technical monitoring procedures (e.g., standards developed by the U.S. Environmental Protection Agency and the National Institute for Occupational Safety and Health) and have worked with other agencies to develop new guidelines when none existed. (See fig. 1.) DOD policies and military service guidelines require that the preventive medicine units of each military service be responsible for collecting and reporting deployment OEHS data. 
Deployment OEHS data are generally categorized into three types of reports: baseline, routine, or incident-driven. Baseline reports generally include site surveys and assessments of occupational and environmental hazards prior to deployment of servicemembers and initial environmental health site assessments once servicemembers are deployed. Routine reports record the results of regular monitoring of air, water, and soil, and of monitoring for known or possible hazards identified in the baseline assessment. Incident-driven reports document exposure or outbreak investigations. There are no DOD-wide requirements on the specific number or type of OEHS reports that must be created for each deployment location because reports generated for each location reflect the specific occupational and environmental circumstances unique to that location. CHPPM officials said that reports generally reflect deployment OEHS activities that are limited to established sites such as base camps or forward operating bases; an exception is an investigation during an incident outside these locations. Constraints to conducting OEHS outside of bases include risks to servicemembers encountered in combat and limits on the portability of OEHS equipment. In addition, DHSD officials said that preventive medicine units might not be aware of every potential health hazard and therefore might be unable to conduct appropriate OEHS activities. According to DOD policy, various entities must submit their completed OEHS reports to CHPPM during a deployment. The deployed military services have preventive medicine units that submit OEHS reports to their command surgeons, who review all reports and ensure that they are sent to a centralized archive that is maintained by CHPPM. Alternatively, preventive medicine units can be authorized to submit OEHS reports directly to CHPPM for archiving. (See fig. 2.) 
According to DOD policy, baseline and routine reports should be submitted within 30 days of report completion. Initial incident-driven reports should be submitted within 7 days of an incident or outbreak. Interim and final reports for an incident should be submitted within 7 days of report completion. In addition, the preventive medicine units are required to provide quarterly lists of all completed deployment OEHS reports to the command surgeons. The command surgeons review these lists, merge them, and send CHPPM a quarterly consolidated list of all the deployment OEHS reports it should have received. To assess the completeness of its centralized OEHS archive, CHPPM develops a quarterly summary report that identifies the number of baseline, routine, and incident-driven reports that have been submitted for all bases in a command. This report also summarizes the status of OEHS report submissions by comparing the reports CHPPM receives with the quarterly consolidated lists from the command surgeons that list each of the OEHS reports that have been completed. For OIF, CHPPM is required to provide a quarterly summary report to the commander of U.S. Central Command on the deployed military services’ compliance with deployment OEHS reporting requirements. During deployments, military commanders can use deployment OEHS reports completed and maintained by preventive medicine units to identify occupational and environmental health hazards and to help guide their risk management decision making. Commanders use an operational risk management process to estimate health risks based on both the severity of the risks to servicemembers and the likelihood of encountering the specific hazard. Commanders balance the risk to servicemembers of encountering occupational and environmental health hazards while deployed, even following mitigation efforts, against the need to accomplish specific mission requirements. 
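The quarterly compliance check described above, comparing the reports CHPPM has actually received against the command surgeons' consolidated lists of completed reports, amounts to a set comparison. A minimal sketch, with hypothetical report identifiers:

```python
# Sketch of the archive completeness check described above. The report
# identifiers are hypothetical; the logic is just a set comparison of
# "completed" (per the consolidated lists) against "received" (in the
# centralized archive).

def summarize_compliance(completed, received):
    """completed: report IDs from the quarterly consolidated lists;
    received: report IDs present in the centralized archive."""
    completed, received = set(completed), set(received)
    missing = completed - received      # completed but never submitted
    rate = len(completed & received) / len(completed) if completed else None
    return {"missing": sorted(missing), "submission_rate": rate}

completed = ["BASE-001-baseline", "BASE-001-routine-Q3", "BASE-002-incident-7"]
received = ["BASE-001-baseline", "BASE-002-incident-7"]

summary = summarize_compliance(completed, received)
print(summary["missing"])                    # ['BASE-001-routine-Q3']
print(f"{summary['submission_rate']:.0%}")   # 67%
```

As the report notes, this check only works when the consolidated lists themselves arrive: without a complete "completed" inventory, the magnitude of noncompliance cannot be measured.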
The operational risk management process, which varies slightly across the services, includes risk assessment, including hazard identification, to describe and measure the potential hazards at a location; risk control and mitigation activities intended to reduce potential exposures; and risk communication efforts to make servicemembers aware of possible exposures, any risks to health that they may pose, the countermeasures to be employed to mitigate exposure or disease outcome, and any necessary medical measures or follow-up required during or after the deployment. Along with health encounter and servicemember location data, archived deployment OEHS reports are needed by researchers to conduct epidemiologic studies on the long-term health issues of deployed servicemembers. These data are needed, for example, by VA, which in 2002 expanded the scope of its health research to include research on the potential long-term health effects on servicemembers in hazardous military deployments. In a letter to the Secretary of Defense in 2003, VA said it was important for DOD to collect adequate health and exposure data from deployed servicemembers to ensure VA’s ability to provide veterans’ health care and disability compensation. VA noted in the letter that much of the controversy over the health problems of veterans who fought in the 1991 Persian Gulf War could have been avoided had more extensive surveillance data been collected. VA asked in the letter that it be allowed access to any unclassified data collected during deployments on the possible exposure of servicemembers to environmental hazards of all kinds. The deployed military services generally have collected and reported OEHS data for OIF, as required by DOD policy. However, the deployed military services have used different OEHS data collection standards and practices, because each service has its own authority to implement broad DOD policies. 
To increase data collection uniformity, the Joint Environmental Surveillance Working Group has made some progress in devising cross-service standards and practices for some OEHS activities. In addition, the deployed military services have not submitted all of the OEHS reports they have completed for OIF to CHPPM’s centralized archive, as required by DOD policy. However, CHPPM officials said that they could not measure the magnitude of noncompliance because they have not received all of the required quarterly consolidated lists of OEHS reports that have been completed. To improve OEHS reporting compliance, DOD officials said they were revising an existing policy to add additional and more specific OEHS requirements. OEHS data collection standards and practices have varied among the military services because each service has its own authority to implement broad DOD policies, and the services have taken somewhat different approaches. For example, although one water monitoring standard has been adopted by all military services, the services have different standards for both air and soil monitoring. As a result, for similar OEHS events, preventive medicine units may collect and report different types of data. Each military service’s OEHS practices for implementing data collection standards also have differed because of varying levels of training and expertise among the service’s preventive medicine units. For example, CHPPM officials said that Air Force and Navy preventive medicine units had more specialized personnel with a narrower focus on specific OEHS activities than Army preventive medicine units, which included more generalist personnel who conducted a broader range of OEHS activities. Air Force preventive medicine units generally have included a flight surgeon, a public health officer, and bioenvironmental engineers. 
Navy preventive medicine units generally have included a preventive medicine physician, an industrial hygienist, a microbiologist, and an entomologist. In contrast, Army preventive medicine unit personnel generally have consisted of environmental science officers and technicians. DOD officials also said other issues could contribute to differences in data collected during OIF. DHSD officials said that variation in OEHS data collection practices could occur as a result of resource limitations during a deployment. For example, some preventive medicine units may not be fully staffed at some bases. A Navy official also said that OEHS data collection can vary as different commanders set guidelines for implementing OEHS activities in the deployment theater. To increase the uniformity of OEHS standards and practices for deployments, the military services have made some progress—particularly in the last 2 years—through their collaboration as members of the Joint Environmental Surveillance Working Group. For example, the working group has developed a uniform standard, which has been adopted by all the military services, for conducting environmental health site assessments, which are a type of baseline OEHS report. These assessments have been used in OIF to evaluate potential environmental exposures that could have an impact on the health of deployed servicemembers and determine the types of routine OEHS monitoring that should be conducted. Also, within the working group, three subgroups—laboratory, field water, and equipment—have been formed to foster the exchange of information among the military services in developing uniform joint OEHS standards and practices for deployments. For example, DHSD officials said the equipment subgroup has been working collaboratively to determine the best OEHS instruments to use for a particular type of location in a deployment. 
The deployed military services have not submitted all the OEHS reports that the preventive medicine units completed during OIF to CHPPM for archiving, according to CHPPM officials. Since January 2004, CHPPM has compiled four summary reports that included data on the number of OEHS reports submitted to CHPPM’s archive for OIF. However, these summary reports have not provided information on the magnitude of noncompliance with report submission requirements because CHPPM has not received all consolidated lists of completed OEHS reports that should be submitted quarterly. These consolidated lists were intended to provide a key inventory of all OEHS reports that had been completed during OIF. Because there are no requirements on the specific number or type of OEHS reports that must be created for each base, the quarterly consolidated lists are CHPPM’s only means of assessing compliance with OEHS report submission requirements. Our analysis of data supporting the four summary reports found that, overall, 239 of the 277 bases had at least one OEHS baseline (139) or routine (211) report submitted to CHPPM’s centralized archive through December 2004. DOD officials suggested several obstacles that may have hindered OEHS reporting compliance during OIF. For example, CHPPM officials said there are other, higher priority operational demands that commanders must address during a deployment. In addition, CHPPM officials said that some of the deployed military services’ preventive medicine units might not understand the types of OEHS reports to be submitted or might view them as an additional paperwork burden. CHPPM and other DOD officials added that some preventive medicine units might have limited access to communication equipment to send reports to CHPPM for archiving. 
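Restated as percentages, with the base counts taken from the analysis above (the arithmetic is the only addition):

```python
# Coverage figures from the analysis of the four summary reports,
# through December 2004, restated as percentages of all 277 bases.

total_bases = 277
with_any_report = 239   # at least one baseline or routine report archived
with_baseline = 139
with_routine = 211

for label, n in [("any report", with_any_report),
                 ("baseline", with_baseline),
                 ("routine", with_routine)]:
    print(f"{label}: {n}/{total_bases} = {100 * n / total_bases:.0f}%")
```

So roughly 86 percent of bases had at least one report archived, but only about half had a baseline report, which is the report type that anchors subsequent routine monitoring.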
CHPPM officials also said that while they had the sole archiving responsibility, CHPPM did not have the authority to enforce OEHS reporting compliance for OIF—this authority rests with the Joint Staff and the commander in charge of the deployment. DOD has several efforts under way to improve OEHS reporting compliance. CHPPM officials said they have increased communication with deployed preventive medicine units and have facilitated coordination among each service’s preventive medicine units prior to deployment. CHPPM has also conducted additional OEHS training for some preventive medicine units prior to deployment, including both refresher courses and information about potential hazards specific to the locations where the units were being deployed. In addition, DHSD officials said they were revising an existing policy to add additional and more specific OEHS requirements. However, at the time of our review, a draft of the revision had not been released, and therefore specific details about the revision were not available. DOD has made progress in using OEHS reports to address immediate health risks during OIF, but limitations remain in employing these reports to address both immediate and long-term health issues. During OIF, OEHS reports have been used as part of operational risk management activities intended to assess, mitigate, and communicate to servicemembers any potential hazards at a location. There have been no systematic efforts by DOD or the military services to establish a system to monitor the implementation of OEHS risk management activities, although DHSD officials said they considered the relatively low rates of disease and nonbattle injury in OIF an indication of OEHS effectiveness. 
In addition, DOD’s centralized archive of OEHS reports for OIF is limited in its ability to provide information on the potential long-term health effects related to occupational and environmental exposures for several reasons, including limited access to most OEHS reports because of their security classification, incomplete data on servicemembers’ deployment locations, and the lack of a comprehensive federal research plan incorporating the use of archived OEHS reports. To identify and reduce the risk of immediate health hazards in OIF, all of the military services have used preventive medicine units’ OEHS data and reports in an operational risk management process. A DOD official said that while DOD had begun to implement risk management to address occupational and environmental hazards in other recent deployments, OIF was the first major deployment to apply this process throughout the deployed military services’ day-to-day activities, beginning at the start of the operation. The operational risk management process includes risk assessments of deployment locations, risk mitigation activities to limit potential exposures, and risk communication to servicemembers and commanders about potential hazards. Risk Assessments. Preventive medicine units from each of the services have generally used OEHS information and reports to develop risk assessments that characterized known or potential hazards when new bases were opened in OIF. CHPPM’s formal risk assessments have also been summarized or updated to include the findings of baseline and routine OEHS monitoring conducted while bases are occupied by servicemembers, CHPPM officials said. During deployments, commanders have used risk assessments to balance the identified risk of occupational and environmental health hazards, and other operational risks, with mission requirements. Generally, OEHS risk assessments for OIF have involved analysis of the results of air, water, or soil monitoring. 
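The severity-by-likelihood estimate that underlies these risk assessments can be sketched generically. The category names, scoring, and cutoffs below are illustrative only; each service defines its own operational risk management tables, which the report notes vary slightly:

```python
# Generic sketch of a severity-by-likelihood risk estimate, as used in
# operational risk management. The categories and the mapping to
# low/moderate/high are invented for illustration and do not reproduce
# any service's actual risk matrix.

SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]
LIKELIHOOD = ["unlikely", "seldom", "occasional", "likely", "frequent"]

def risk_level(severity, likelihood):
    """Map a (severity, likelihood) pair to a coarse risk category."""
    score = SEVERITY.index(severity) + LIKELIHOOD.index(likelihood)
    if score <= 2:
        return "low"
    if score <= 4:
        return "moderate"
    return "high"

print(risk_level("marginal", "seldom"))       # low
print(risk_level("critical", "occasional"))   # moderate
print(risk_level("catastrophic", "frequent")) # high
```

Commanders then weigh the resulting risk category, even after mitigation, against mission requirements, as the report describes.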
CHPPM officials said that most risk assessments that they have received characterized locations in OIF as having a low risk of posing health hazards to servicemembers. Risk Control and Mitigation. Using risk assessment findings, preventive medicine units have recommended risk control and mitigation activities to commanders that were intended to reduce potential exposures at specific locations. For OIF, risk control and mitigation recommendations at bases have included such actions as modifying work schedules, requiring individuals to wear protective equipment, and increasing sampling to assess any changes and improve confidence in the accuracy of the risk estimate. Risk Communication. Risk assessment findings have also been used in risk communication efforts, such as providing access to information on a Web site or conducting health briefings to make servicemembers aware of occupational and environmental health risks during a deployment and the recommended efforts to control or mitigate those risks, including the need for medical follow-up. Many of the risk assessments for OIF we reviewed recommended that health risks be communicated to servicemembers. While risk management activities have become more widespread in OIF compared with previous deployments, DOD officials have not conducted systematic monitoring of deployed military services’ efforts to conduct OEHS risk management activities. As of March 2005, neither DOD nor the military services had established a system to examine whether required risk assessments had been conducted, or to record and track resulting recommendations for risk mitigation or risk communication activities. 
In the absence of a systematic monitoring process, CHPPM officials said they conducted ad hoc reviews of implementation of risk management recommendations for sites where continued, widespread OEHS monitoring has occurred, such as at Port Shuaiba, Kuwait, a deepwater port where a large number of servicemembers have been stationed, or other locations with elevated risks. DHSD officials said they have initiated planning for a comprehensive quality assurance program for deployment health that would address OEHS risk management, but the program was still under development. DHSD and military service officials said that developing a monitoring system for risk management activities would face several challenges. In response to recommendations for risk mitigation and risk communication activities, commanders may have issued written orders and guidance that were not always stored in a centralized, permanent database that could be used to track risk management activities. Additionally, DHSD officials told us that risk management decisions have sometimes been recorded in commanders’ personal journals or diaries, rather than issued as orders that could be stored in a centralized, permanent database. In lieu of a monitoring system, DHSD officials said that DOD considers the rates of disease and nonbattle injury in OIF as a general measure or indicator of OEHS effectiveness. As of January 2005, OIF had a 4 percent total disease and nonbattle injury rate—in other words, an average of 4 percent of servicemembers deployed in support of OIF had been seen by medical units for an injury or illness in any given week. This rate is the lowest DOD has ever documented for a major deployment, according to DHSD officials. For example, the total disease and nonbattle injury rate for the 1991 Gulf War was about 6.5 percent, and the total rate for Operation Enduring Freedom in Central Asia has been about 5 percent. 
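The weekly disease and nonbattle injury (DNBI) rate cited above is simply medical encounters divided by the deployed population in a given week. A sketch of the calculation with invented weekly counts (the 4 percent figure is from the report; the per-week numbers are not):

```python
# Sketch of the weekly disease and nonbattle injury (DNBI) rate
# calculation: encounters seen by medical units in a week, as a
# percentage of the deployed population. The weekly counts below are
# hypothetical; only the formula follows the report's description.

def weekly_dnbi_rate(encounters, population):
    return 100.0 * encounters / population

weeks = [(4100, 100000), (3900, 100000), (4000, 100000)]
rates = [weekly_dnbi_rate(e, p) for e, p in weeks]
avg = sum(rates) / len(rates)
print(f"average weekly DNBI rate: {avg:.1f}%")  # 4.0%
```

As the report cautions, this is an aggregate health indicator, not a measure tied to any specific OEHS activity.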
However, while this indicator provides general information on servicemembers’ health status, it is not directly linked to specific OEHS activities and therefore is not a clear measure of their effectiveness. Access to archived OEHS reports by VA, medical professionals, and interested researchers has been limited by the security classification of most OEHS reports. Typically, OEHS reports are classified if the specific location where monitoring activities occur is identified. VA officials said they would like to have access to OEHS reports in order to ensure appropriate postwar health care and disability compensation for veterans, and to assist in future research studies. However, VA officials said that, because of these security concerns, they did not expect access to OEHS reports to improve until OIF has ended. Although access to OEHS reports has been restricted, VA officials said they have tried to anticipate likely occupational and environmental health concerns for OIF based on experience from the 1991 Persian Gulf War and on CHPPM’s research on the medical or environmental health conditions that exist or might develop in the region. Using this information, VA has developed study guides for physicians on such topics as health effects from radiation and traumatic brain injury and also has written letters for OIF veterans about these issues. DOD has begun reviewing classification policies for OEHS reports, as required by the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005. A DHSD official said that DOD’s newly created Joint Medical Readiness Oversight Committee is expected to review ways to reduce or limit the classification of data, including data that are potentially useful for monitoring and assessing the health of servicemembers who have been exposed to occupational or environmental hazards during deployments. 
Linking OEHS reports from the archive to individual servicemembers will be difficult because DOD’s centralized tracking database for recording servicemembers’ deployment locations currently does not contain complete or comparable data. In May 1997, we reported that the ability to track the movement of individual servicemembers within the theater is important for accurately identifying exposures of servicemembers to health hazards. However, the Defense Manpower Data Center’s centralized database has continued to experience problems in obtaining complete, comparable data from the services on the location of servicemembers during deployments, as required by DOD policies. Data center officials said the military services had not reported location data for all servicemembers for OIF. As of October 2004, the Army, Air Force, and Marine Corps each had submitted location data for approximately 80 percent of their deployed servicemembers, and the Navy had submitted location data for about 60 percent of its deployed servicemembers. Additionally, the specificity of location data has varied by service. For example, the Marine Corps has provided location of servicemembers only by country, whereas each of the other military services has provided more detailed location information for some of their servicemembers, such as base camp name or grid coordinate locations. Furthermore, the military services did not begin providing detailed location data until OIF had been ongoing for several months. DHSD officials said they have been revising an existing policy to provide additional requirements for location data that are collected by the military services, such as a daily location record with grid coordinates or latitude and longitude coordinates for all servicemembers. Though the revised policy has not been published, as of May 2005 the Army and the Marine Corps had implemented a new joint location database in support of OIF that addresses these revisions. 
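Given the daily location records the revised policy calls for, linking servicemembers to an archived OEHS report becomes a place-and-time query. A minimal sketch, with hypothetical member identifiers and base names:

```python
# Sketch of linking servicemembers to an archived OEHS report by place
# and time, assuming daily location records exist as the revised policy
# envisions. Identifiers, base names, and dates are hypothetical.

from datetime import date

def members_at_base(locations, base, start, end):
    """Return IDs of servicemembers with a daily location record at
    `base` anytime within [start, end] inclusive."""
    return sorted({loc["member_id"] for loc in locations
                   if loc["base"] == base and start <= loc["date"] <= end})

locations = [
    {"member_id": "A1", "base": "Camp X", "date": date(2004, 3, 1)},
    {"member_id": "A1", "base": "Camp Y", "date": date(2004, 3, 5)},
    {"member_id": "B2", "base": "Camp X", "date": date(2004, 3, 4)},
]

# Query for a hypothetical air-monitoring report covering Camp X,
# 2-6 March 2004.
print(members_at_base(locations, "Camp X", date(2004, 3, 2), date(2004, 3, 6)))
# ['B2']
```

With only country-level location data, as the Marine Corps provided, the `base` predicate cannot be evaluated, which illustrates why the specificity of location data matters for exposure research.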
During OIF, some efforts have been made to include information about specific incidents of potential and actual exposure to occupational or environmental health hazards in the medical records of servicemembers who may have been affected. According to DOD officials, preventive medicine units have been investigating incidents involving potential exposure during the deployment. For a given incident, a narrative summary of events and the results of any medical procedures generally were included in affected servicemembers’ medical records. Additionally, rosters were generally developed of servicemembers directly affected and of servicemembers who did not have any acute symptoms but were in the vicinity of the incident. For example, in investigating an incident involving a chemical agent used in an improvised explosive device, CHPPM officials said that two soldiers who were directly involved were treated at a medical clinic, and their treatment and the exposure were recorded in their medical records. Although 31 servicemembers who were providing security in the area were asymptomatic, doctors were documenting this potential exposure in their medical records. In addition, the military services have taken some steps to include summaries of potential exposures to occupational and environmental health hazards in the medical records of servicemembers deployed to specific locations. The Air Force has created summaries of these hazards at deployed air bases and has required that these be placed in the medical records of all Air Force servicemembers stationed at these bases. (See app. I for an example.) However, Air Force officials said no follow-up activities have been conducted specifically to determine whether all Air Force servicemembers have had the summaries placed in their medical records. 
Similarly, the Army and Navy jointly created a summary of potential exposure for the medical records of servicemembers stationed at Port Shuaiba, the deepwater port used for bringing in heavy equipment in support of OIF where a large number of servicemembers have been permanently or temporarily stationed. Since December 2004, port officials have made efforts to make the summary available to servicemembers stationed at Port Shuaiba so that these servicemembers can include the summary in their medical records. However, there has been no effort to retroactively include the summary in the medical records of servicemembers stationed at the port prior to that time. According to DOD and VA officials, no federal research plan that includes the use of archived OEHS reports has been developed to evaluate the long-term health of servicemembers deployed in support of OIF, including the effects of potential exposure to occupational or environmental hazards. In February 1998 we noted that the federal government lacked a proactive strategy to conduct research into Gulf War veterans’ health problems and suggested that delays in planning complicated researchers’ tasks by limiting opportunities to collect critical data. However, the Deployment Health Working Group, a federal interagency body responsible for coordinating research on all hazardous deployments, recently began discussions on the first steps needed to develop a research plan for OIF. At its January 2005 meeting, the working group tasked its research subcommittee to develop a complete list of research projects currently under way that may be related to OIF. VA officials noted that because OIF is ongoing, the working group would have to determine how to address a study population that changes as the number of servicemembers deployed in support of OIF changes. 
Although no coordinated federal research plan has been developed, other separate federal research studies are under way that may follow the health of OIF servicemembers. For example, in 2000 VA and DOD collaborated to develop the Millennium Cohort study, a 21-year longitudinal study evaluating the health of both deployed and nondeployed military personnel throughout their military careers and after leaving military service. According to the principal investigator, the Millennium Cohort study was designed to examine the health effects of specific deployments if enough servicemembers in that deployment enrolled in the study. However, the principal investigator said that as of February 2005 researchers had not identified how many servicemembers deployed in support of OIF had enrolled in the study. In another effort, a VA researcher has received funding to study mortality rates among OIF servicemembers. According to the researcher, if occupational and environmental data are available, the study will include the evaluation of mortality outcomes in relation to potential exposure for OIF servicemembers. As we stated in our report, DOD’s efforts to collect and report OEHS data could be strengthened. Currently, OEHS data that the deployed military services have collected during OIF may not always be comparable because of variations among the services’ data collection standards and practices. Additionally, the deployed military services’ uncertain compliance with OEHS report submission requirements casts doubt on the completeness of CHPPM’s OEHS archive. These data shortcomings, combined with incomplete data in DOD’s centralized tracking database of servicemembers’ deployment locations, limit CHPPM’s ability to respond to requests for OEHS information about possible exposure to occupational and environmental health hazards of those who are serving or have served in OIF. 
DOD officials have said they are revising an existing policy on OEHS data collection and reporting to add additional and more specific OEHS requirements. However, unless the military services take measures to direct those responsible for OEHS activities to proactively implement the new requirements, the services’ efforts to collect and report OEHS data may not improve. Consequently, we recommended that the Secretary of Defense ensure that cross-service guidance is created to implement DOD’s policy, once that policy has been revised, to improve the collection and reporting of OEHS data during deployments and the linking of OEHS reports to servicemembers. DOD responded that cross-service implementation guidance for the revised policy on deployment OEHS would be developed by the Joint Staff. While DOD’s risk management efforts during OIF represent a positive step in helping to mitigate potential environmental and occupational risks of deployment, the lack of systematic monitoring of the deployed military services’ implementation activities prevents full knowledge of their effectiveness. Therefore, we recommended that the military services jointly establish and implement procedures to evaluate the effectiveness of risk management efforts. DOD partially concurred with our recommendation and stated that it has procedures in place to evaluate OEHS risk management through a jointly established and implemented lessons learned process. However, in further discussions, DOD officials told us that they were not aware of any lessons learned reports related to OEHS risk management for OIF. Furthermore, although OEHS reports alone are not sufficient to identify the causes of potential long-term health effects in deployed servicemembers, they are an integral component of research to evaluate the long-term health of deployed servicemembers. 
However, efforts by a joint DOD and VA working group to develop a federal research plan for OIF that would include examining the effects of potential exposure to occupational and environmental health hazards have just begun, despite similarities in deployment location to the 1991 Persian Gulf War. As a result, we recommended that DOD and VA work together to develop a federal research plan to follow the health of servicemembers deployed in support of OIF that would include the use of archived OEHS reports. DOD partially concurred with our recommendation, and VA concurred. The difference between VA’s and DOD’s responses to this recommendation illustrates a disconnect between the agencies’ understandings of whether and how such a federal research plan should be established. Therefore, continued collaboration between the agencies to formulate a mutually agreeable process for proactively creating a federal research plan would be beneficial: it would help both agencies anticipate and understand the potential long-term health effects related to OIF deployment, rather than waiting reactively to see what types of health problems may surface. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any question you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Marcia Crosse at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the contacts named above, Bonnie Anderson, Assistant Director, Karen Doran, Beth Morrison, John Oh, Danielle Organek, and Roseanne Price also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Following the 1991 Persian Gulf War, research and investigations into the causes of servicemembers' unexplained illnesses were hampered by a lack of servicemember health and deployment data, including inadequate occupational and environmental exposure data. In 1997, the Department of Defense (DOD) developed a militarywide health surveillance framework that includes occupational and environmental health surveillance (OEHS)--the regular collection and reporting of occupational and environmental health hazard data by the military services. This testimony is based on GAO's report, entitled Defense Health Care: Improvements Needed in Occupational and Environmental Health Surveillance during Deployment to Address Immediate and Long-term Health Issues (GAO-05-632). The testimony presents findings about how the deployed military services have implemented DOD's policies for collecting and reporting OEHS data for Operation Iraqi Freedom (OIF) and the efforts under way to use OEHS reports to address both immediate and long-term health issues of servicemembers deployed in support of OIF. Although OEHS data generally have been collected and reported for OIF, as required by DOD policy, the deployed military services have used different data collection methods and have not submitted all of the OEHS reports that have been completed. Data collection methods for air and soil surveillance have varied across the services, for example, although they have been using the same monitoring standard for water surveillance. For some OEHS activities, a cross-service working group has been developing standards and practices to increase uniformity of data collection among the services. In addition, while the deployed military services have been conducting OEHS activities, they have not submitted all of the OEHS reports that have been completed during OIF. Moreover, DOD officials could not identify the reports they had not received to determine the extent of noncompliance. 
DOD has made progress in using OEHS reports to address immediate health risks during OIF, but limitations remain in employing these reports to address both immediate and long-term health issues. OEHS reports have been used consistently during OIF as part of operational risk management activities intended to identify and address immediate health risks and to make servicemembers aware of the risks of potential exposures. While these efforts may help in reducing health risks, DOD has not systematically evaluated their implementation during OIF. DOD's centralized archive of OEHS reports for OIF has several limitations for addressing potential long-term health effects related to occupational and environmental exposures. First, access to the centralized archive has been limited due to the security classification of most OEHS reports. Second, it will be difficult to link most OEHS reports to individual servicemembers' records because not all data on servicemembers' deployment locations have been submitted to DOD's centralized tracking database. To address problems with linking OEHS reports to individual servicemembers, the deployed military services have tried to include OEHS monitoring summaries in the medical records of some servicemembers for either specific incidents of potential exposure or for specific locations within OIF. Additionally, according to DOD and Veterans Affairs (VA) officials, no federal research plan has been developed to evaluate the long-term health of servicemembers deployed in support of OIF, including the effects of potential exposures to occupational or environmental hazards. GAO's report made several recommendations, including that the Secretary of Defense improve deployment OEHS data collection and reporting and evaluate OEHS risk management activities and that the Secretaries of Defense and Veterans Affairs jointly develop a federal research plan to address long-term health effects of OIF deployment. 
DOD plans to take steps to meet the intent of our first recommendation and partially concurred with the other recommendations. VA concurred with our recommendation for a joint federal research plan.
|
In October 1998, we issued a report that raised concerns about the justification and the affordability of the Deepwater Replacement Project. Our major findings included the following: The Coast Guard had understated the remaining useful life of its aircraft and, to a lesser extent, its ships. For example, the justification the Coast Guard prepared in late 1995 estimated that its aircraft would need to be phased out starting in 1998. However, last year, the Coast Guard issued a study showing that its aircraft, with appropriate maintenance and upgrades, would be capable of operating until at least 2010 and likely beyond. The study’s findings suggest that in upgrading or replacing its deepwater ships and aircraft, the Coast Guard should give a relatively low priority to modernizing or replacing its aircraft. Since our report was issued, the Coast Guard has taken additional steps to assess the condition and the remaining useful life of its ships, including hiring naval architects to evaluate the condition of its deepwater ships and completing studies on two 378-foot cutters. According to a Deepwater Project official, contractors have also conducted their own evaluations of the condition of deepwater ships and aircraft to validate their condition. The Coast Guard had not conducted a rigorous analysis comparing the current capabilities of its aircraft and ships with current and future requirements, as required by DOT’s and the Coast Guard’s own guidance. Although the Coast Guard asserted that its current deepwater ships and aircraft were incapable of effectively performing future missions or meeting the future demand for its services, we were unable to validate these assertions. The Coast Guard had originally planned to complete a comparative assessment of the current capabilities and the functional needs of the future deepwater system by November 1998, but work on that assessment has slipped. 
The Coast Guard completed a baseline study of the capabilities of its existing fleet of ships and aircraft last month; a comparative assessment is planned for completion in April 1999. The Coast Guard lacked support for its estimates of the resource hours needed for its deepwater ships and aircraft to perform required missions. We attempted to verify the Coast Guard’s estimates of surface and aviation hours needed for deepwater law enforcement missions, which constitute over 95 percent of the total estimated mission-related hours for its ships and about 90 percent of the total estimated mission-related hours for its aircraft. We could not verify the reasonableness of these estimates because the sources for the data were not documented or available. An independent group—the Presidential Roles and Missions Commission—will study the Coast Guard’s roles and missions and report on its findings by October 1999. The Coast Guard plans to use this study to recalculate the operating levels needed to meet the requirements of its missions for its revised mission analysis, which is scheduled for completion in January 2000. We agree that the Coast Guard should start now to explore alternative ways to modernize its deepwater ships and aircraft. However, proceeding with the project without a clear understanding of the current condition of its ships and aircraft and whether they are deficient in their capabilities and service demands increases the risk that the contractors, now developing proposals for the project, could develop alternatives or designs that would not be the most cost-effective to meet the Coast Guard’s needs for the Deepwater Project. We recommended that the Coast Guard expedite the development and issuance of updated information from internal studies to the contractors. 
The Coast Guard agreed with our recommendation and has made progress in developing data on the condition of its ships and aircraft; however, other data on its roles and missions and any shortfalls in its performance capabilities will not be available until later this year or early next year. Contractors, however, are scheduled to provide the Coast Guard with an analysis of alternatives for the Deepwater Project later this month and conceptual designs for the system in December 1999. The Coast Guard agreed with the importance of providing contractors with accurate and complete data as soon as possible; however, it also noted the importance of starting now because of the long lead times associated with a project of this magnitude. The agency plans to provide the contractors with data on its roles and missions and performance shortfalls as soon as the information becomes available. Coast Guard officials believe that they will have data in enough time so as not to adversely affect the contractors’ proposals. We plan to continue monitoring the project to ensure that contractors receive timely and accurate data to include in their proposals. Our report also raised concerns about the project’s affordability. The estimated cost of the Deepwater Project could consume nearly all of the agency’s projected spending for its capital projects. Unless the Congress grants additional funds, which under current budget laws could mean reducing funding for other agencies or programs, the Coast Guard’s other capital projects could be severely affected. In January 1999, Coast Guard officials told us that they plan to address the Deepwater Project’s affordability issue in two ways. First, they believe that competition among three teams of contractors to develop alternative deepwater systems will help minimize the project’s life-cycle costs because the proposed costs will be one key factor in selecting the winning proposal. 
Second, they said that the agency’s independent evaluation group would analyze various funding alternatives to determine their impact on the project. The group will examine the most cost-effective funding amounts for the project as well as the minimum amount that is needed each year. However, until the Coast Guard develops its revised mission analysis in early 2000 and the contractors provide their cost estimates for various alternatives, it will not be known whether the affordability issue has been adequately addressed. Furthermore, the Coast Guard will have additional time to demonstrate that it has put in place a prudent strategy for dealing with the cost of the project within probable funding levels—a practice that becomes highly critical during this time of fiscal constraint. The ability of the Coast Guard to meet its future capital needs depends largely on the funding requirements for the Deepwater Project. The agency faces potential funding shortages of as much as $300 million by 2002 to complete ongoing and future projects. To deal with this, the Coast Guard must improve its capital planning process to prioritize and manage its capital projects more effectively, renew cost-saving efforts, and/or secure additional funds. In our May 1997 report, we discussed the challenges that the Coast Guard faces in the future as it buys ships, aircraft, and other equipment in a constrained budget environment. We reported that balanced budget agreements would create substantial pressure on the Coast Guard’s budget for capital spending in the coming years. Even with the current projections for surpluses in the federal budget, agencies such as the Coast Guard are still subject to spending limits and must continue to operate in a constrained budget environment. In an effort to balance the budget, caps on discretionary spending have been set. 
OMB develops budget marks, or targets, for agencies such as the Coast Guard so that they can develop budget plans and requests that are aligned with the marks. For fiscal years 2001 through 2004, OMB has set a mark of $485 million a year for the Coast Guard’s budget for capital spending. As figure 1 shows, the extent to which capital funding requirements are less than or greater than OMB’s target for fiscal years 2001 through 2004 depends largely on the amount needed for the Deepwater Project. Funding needs for ongoing acquisition projects decline steadily through fiscal year 2004 as projects such as the buoy tenders are completed. The funding needs for fiscal year 2001 are well within the budget mark set by OMB, mainly because the projected cost for the Deepwater Project is still relatively low, at $42.3 million. In later years, however, the ability of the Coast Guard to stay within the OMB mark for its budget for capital spending is more uncertain. For example, if the Coast Guard’s funding needs for the Deepwater Project and other new projects amount to $300 million annually beginning in 2002, the Coast Guard will experience a funding gap of about $100 million in fiscal year 2002 but little or no gap in fiscal years 2003 and 2004. If, on the other hand, funding requirements for the Deepwater Project approach its initial estimate of $500 million annually beginning in 2002, then the Coast Guard will face a substantial funding gap—exceeding $200 million in 2002 and beyond. The Coast Guard is developing a new capital planning process that, when implemented, could improve its ability to set priorities and manage its capital projects and ultimately provide a workable approach for acquiring the ships and aircraft it needs within its approved budget. Begun in 1997, this effort is directed at aligning capital needs with probable levels of funding. The Coast Guard’s previous capital plans simply identified various funding needs regardless of probable funding. 
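The funding-gap arithmetic above reduces to a simple subtraction against the OMB mark. In this sketch, the $485 million mark and the two Deepwater scenarios come from the testimony; the ongoing-project amount for fiscal year 2002 is a hypothetical placeholder, chosen only to illustrate how the gap scales, since exact year-by-year amounts are not given here.

```python
# Illustrative sketch of the capital funding gap described above.
# OMB_MARK and the Deepwater scenarios are from the testimony; the
# ongoing-project amount is hypothetical. All values in $ millions.

OMB_MARK = 485.0  # OMB capital-spending target per year, FY2001-2004

def funding_gap(deepwater_and_new, ongoing):
    """Shortfall (positive) or slack (negative) against the OMB mark."""
    return deepwater_and_new + ongoing - OMB_MARK

ongoing_fy2002 = 285.0  # hypothetical ongoing-project needs for FY2002

# Scenario 1: Deepwater and other new projects need $300M annually.
print(funding_gap(300.0, ongoing_fy2002))  # 100.0 -> the ~$100M gap cited

# Scenario 2: Deepwater approaches its initial $500M annual estimate.
print(funding_gap(500.0, ongoing_fy2002))  # 300.0 -> gap exceeds $200M
```

The point of the sketch is only that the gap moves dollar-for-dollar with the Deepwater estimate: a $200 million swing in the project's annual cost is a $200 million swing in the shortfall.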
The Coast Guard acknowledged that this earlier approach no longer reflected the budget climate and needed revision. In January 1999, the agency produced a draft of a new capital plan that identifies strategies for dealing with the affordability of projects such as Deepwater. These strategies include the following: Implementing a number of techniques to control capital costs, such as extending the service life of the Coast Guard’s ships and aircraft and replacing equipment with fewer, more capable assets. As an example, extending the service life of aircraft rather than replacing them could result in significant cost savings. A Coast Guard study estimates that between $257 million and $297 million in upgrades and maintenance could extend the service lives of current deepwater aircraft by 11 to 28 years beyond the Coast Guard’s initial estimate of when it would have to phase out these aircraft. The Coast Guard estimates that a one-for-one replacement of the same aircraft would cost $3.8 billion, or about $3.5 billion more than extending the aircraft’s service life. Establishing an “Investment Board” composed of senior agency managers, such as Assistant Commandants for Operations and Marine Safety, the Director of Resources, and the Chief Financial Officer. The board will examine the agency’s portfolio of assets and assign priorities to projects, including shore facilities, and build a range of budget scenarios over a 5-year period as a means of meeting the budget target given to the Coast Guard by OMB. This strategy would involve making trade-offs between projects. For example, the Coast Guard could concentrate its resources on buying more ships over 2 to 3 years and buying fewer aircraft or other equipment. After the ships have been bought, the agency could then focus its resources on buying the aircraft or other equipment and reducing the amount of resources used to buy ships. 
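The service-life extension comparison above is straightforward arithmetic on the figures cited from the Coast Guard study; this sketch just makes the savings range explicit.

```python
# Cost comparison from the Coast Guard aircraft study cited above,
# in $ millions: extending service lives vs. one-for-one replacement.

upgrade_low, upgrade_high = 257, 297  # upgrades-and-maintenance range
replacement = 3_800                   # one-for-one replacement ($3.8B)

savings_low = replacement - upgrade_high   # worst case for extension
savings_high = replacement - upgrade_low   # best case for extension

# Both ends of the range round to the "about $3.5 billion" in the text.
print(savings_low, savings_high)  # 3503 3543
```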
The Coast Guard believes that this trade-off approach can help it deal with “spikes” in the agency’s capital needs during a period of fiscal constraint. The Coast Guard is also striving to better link the capital planning process to its budgeting process. Linking capital planning to the budget process translates cost control strategies into action. As an example, the Department of Defense (DOD) links its capital planning process to its budget through its Future Years’ Defense Plan, which is updated each year to reflect changing conditions. This plan is linked to OMB budget targets and used to make programming and budgeting decisions over a 5-year budget cycle. It identifies strategies for meeting budget targets, such as cost savings in operations that could be used to help fund capital requirements. In addition, according to an OMB official, the plan identifies the funds needed to complete projects and provides greater assurance that these funds will be available, which can ultimately lead to better-managed capital projects. The plan also allows the Congress to see where DOD is heading with its capital projects. Such a plan may be useful to the Coast Guard in developing plans and strategies for meeting its capital needs. The Coast Guard’s new capital planning process is still a work in progress, and the linkage between capital plans and the budgeting process may not be fully in place for several years, according to agency officials. While the plans, if implemented, will help the Coast Guard deal with affordability issues, it is still uncertain whether they will fully address the funding issues raised by the Deepwater Project. Better planning is not the only strategy the Coast Guard can follow to address potential capital funding gaps. Another option involves achieving cost savings from operations and using the savings to pay for new equipment in future years. Shifting funding amounts between the operations account and the capital account can be achieved in several ways. 
For example, as part of formulating the Coast Guard’s budget requests, OMB and the Coast Guard could engage in an informal process in which OMB would allow the Coast Guard to add to its capital account an amount equal to identified cost savings from operations (with a corresponding decrease in its operations account). DOD and OMB have agreed on such an approach, and DOD is pursuing a number of cost-saving initiatives in operations as a means of supplementing its budget for capital spending. A more formal mechanism for directing cost savings from operations to help fund capital needs would be to seek congressional authorization for a special budget account as a repository for such savings. As an example, DOD has received authorization to shift savings from its operations and maintenance account to help pay for capital acquisitions. Such an approach could provide incentives to Coast Guard managers to achieve greater cost savings if they had greater flexibility in deciding how to use the savings. In our May 1997 report, we identified cost-cutting options for the Coast Guard that had already been identified by a number of studies conducted since 1981. Last week, we reported on other administrative and support functions that have potential for cost savings. The agency has not implemented many of these options that we and others identified because they are controversial, require cultural changes within the Coast Guard, or are not popular with the public. Here are several examples of these options: Lengthen periods between assignment rotations for military personnel. Past studies by groups outside the Coast Guard have pointed out that this option could substantially reduce transfer costs, which now amount to more than $60 million a year. The Coast Guard thinks its current rotation policies are best and does not plan to study the issue further. 
Coast Guard officials said that changing current practices would have several undesirable effects, including potential adverse effects on multi-mission capabilities, a reduced opportunity to command a variety of units or vessels, and lower morale among personnel assigned to undesirable locations for extended periods of time. Use civilian personnel rather than military personnel in administrative support positions. This option could achieve significant cost savings. Overall, the Coast Guard has estimated that it costs about $15,000 more to compensate military personnel than comparable civilians. Consolidate functions or close facilities. Previous studies have identified this as another option to reduce expenditures. For example, several years ago, the Coast Guard identified a cost-cutting option involving the consolidation of its training facilities, a move that would have resulted in annual savings of $9 million, by closing the facility at Petaluma, California. Fearing a public outcry by the local community, especially because of the numerous recent closures of military bases in California, the Coast Guard postponed taking this step. To address situations like this, we suggested that the Congress may wish to consider a facility closure approach for the Coast Guard that is similar to the one DOD has used to evaluate base closures. Under this approach, an independent commission would be established and given authority to recommend the closure of some of the Coast Guard’s facilities. To date, such a commission has not been established. Another option for addressing any funding gap would be for the Coast Guard to secure new sources of funding for its capital projects. However, obtaining additional funding through the normal appropriation process is uncertain, given existing limits on discretionary funding. While this may change as the administration and the Congress deliberate on how to use the existing surplus in the federal budget, no agreements have been reached. 
For fiscal year 1999, the Coast Guard received an emergency appropriation totaling $230 million in addition to its regular appropriations to help buy new capital equipment. Most of these funds were for equipment to stem the flow of illegal drugs into the United States. The additional funds were used in part for ongoing capital projects, such as upgrades to C-130 aircraft engines and purchases of new sensors for Coast Guard ships and aircraft, potentially leaving more room in future years’ budgets for the Deepwater Project and other needs. Additional emergency funding may be available in future years as well. In January 1999, legislation was introduced in the Senate to authorize additional funding for the Coast Guard in fiscal years 2000 and 2001 for anti-drug operations; however, there is no guarantee that these funds will be appropriated. User fees are another potential source of revenue to supplement the Coast Guard’s future budgets for capital spending, but the Coast Guard has been unsuccessful in getting congressional approval to impose such fees on services it performs. Last year, the House and the Senate turned down a Coast Guard request to levy $165 million in user fees and stated their opposition to such fees. In its fiscal year 2000 budget request, the Coast Guard is again proposing a user fee on commercial cargo and cruise vessels for navigation services that the Coast Guard provides but does not charge for. This user fee, if approved, would add revenues of $41 million in the last quarter of fiscal year 2000, and $165 million a year when fully implemented. We are not taking a position on whether such fees, including the proposed fees on navigation services, should be established. This is a policy question that the Congress must ultimately decide after considering a number of issues and trade-offs. In conclusion, the Coast Guard faces the daunting task of meeting its capital needs in a constrained budget environment. 
To be successful, the agency must first satisfactorily justify the need for modernizing or replacing its deepwater ships and aircraft. Then, the Coast Guard must identify approaches and strategies for prioritizing and better managing its capital projects while continuing to pursue cost savings and other ways to help meet funding requirements. Mr. Chairman, this concludes my testimony. I will be happy to respond to any questions you or other Members of the Subcommittee may have.
|
Pursuant to a congressional request, GAO discussed the Coast Guard's plans for modernizing its ships, aircraft, and other capital assets needed to carry out its missions, as well as the agency's plans and strategies to fund these needs, focusing on the Coast Guard's progress in: (1) justifying the Deepwater Capability Replacement Project and addressing GAO's concerns about its affordability; and (2) developing strategies and plans for funding its capital needs within a constrained fiscal environment. GAO noted that: (1) the Coast Guard has not yet sufficiently justified the Deepwater Project, in that accurate and complete information is lacking on the performance shortcomings of its ships and aircraft and the resource hours needed to fulfill its missions; (2) proceeding without these key data increases the risk that contractors will develop alternatives that are not the most cost-effective to meet the needs of the project; (3) the Coast Guard and its contractors are developing this information, but some of it will not be available until later this year; (4) GAO also reported that if the cost of the Deepwater Project approaches the agency's initial estimate of $500 million annually, it would consume more than the agency now spends for all capital projects and leave little funding for other critical needs; (5) Coast Guard officials believe that competition among contractors will reduce the cost of the Deepwater Project and more closely align its potential cost with probable funding levels; (6) however, until the Coast Guard develops its new justification for the project in early 2000 and contractors provide their cost estimates for various alternatives, the Coast Guard will not know whether the affordability issue has been adequately addressed; (7) the costs of the Deepwater Project, together with funds needed to complete all other ongoing capital projects, may outstrip the Coast Guard's ability to pay for them; (8) the Office of Management and Budget (OMB) proposes 
freezing the Coast Guard's budget for capital spending at $485 million annually through fiscal year 2004; (9) if the Deepwater Project requires annual funding levels of $500 million, this cost, coupled with the costs of ongoing capital projects, would exceed the OMB target by about $300 million in 2002; (10) with good planning, renewed efforts to reduce costs, and better information on the useful life of its ships and aircraft, the Coast Guard may be able to prioritize its needs and minimize future capital needs; and (11) the Coast Guard is now developing a new plan and budget strategies for dealing with its capital funding needs in an environment of fiscal constraint, but putting these approaches in place may take several years and their effectiveness is uncertain.
|
When a contractor conducts a federal audit of claims data and identifies an overpayment, the contractor drafts an audit report and obtains comments from CMS, the provider, and the state before submitting the completed report to CMS. Within CMS, the MIG reviews the audit contractor’s submission and, when finalized, sends a final audit report (FAR) to the state, as well as the appropriate CMS regional office that oversees the state. The FAR identifies the total overpayment amount paid to the provider and specifies the amount of the federal share of that overpayment the state must return. To initiate recovery of the overpayment, the state sends a demand letter to the provider indicating the amount due as identified in the FAR. Providers may opt to appeal the findings in the FAR, thus initiating an appeals process that is determined by each state and is subject to the state’s Medicaid program requirements. When the state’s appeals process is complete, that decision determines the final amount owed by the provider. This final amount may be the full overpayment, a reduced amount of the overpayment, or nothing. The state generally has one year from the date of the FAR to recover the overpayment from the provider before reporting the return of the federal share to CMS. Federal law requires the state to return the federal share of the overpayment regardless of whether the state was able to recover it, unless the provider has been determined to be bankrupt or out of business. If an appeal reduces the overpayment, the state is only required to return the federal share for the net overpayment amount. States may also challenge the findings of a FAR by filing an appeal with the Department of Health and Human Services’ Appeals Board Appellate Division. CMS has had a long-standing requirement that states report overpayments and the return of the federal share on the CMS-64. 
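The mechanics above can be sketched as a small calculation. The one-year deadline and the interest liability for late returns are from the text; the 60 percent federal matching rate (FMAP) and the FAR date in the example are hypothetical, since neither is specified here and actual FMAP rates vary by state.

```python
from datetime import date, timedelta

# Sketch of the obligation a final audit report (FAR) creates for a
# state. The 60% FMAP and the FAR date below are hypothetical.

def federal_share_due(total_overpayment, fmap=0.60):
    """Federal share the state must return, whether or not it has
    recovered the overpayment (absent provider bankruptcy/closure)."""
    return total_overpayment * fmap

def return_deadline(far_date):
    """States generally have one year from the FAR date; after that,
    the state is liable for interest on the unreturned federal share."""
    return far_date + timedelta(days=365)

far_date = date(2011, 3, 15)  # hypothetical FAR date
print(federal_share_due(100_000))  # federal share at the assumed 60% FMAP
print(return_deadline(far_date))   # one year after the FAR date
```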
Beginning in fiscal year 2010, CMS initiated a more detailed reporting requirement to better track overpayments and the return of the federal share from different types of audits. As a result, CMS required states to report overpayments from federal audits, as well as other sources—such as state audit results, Medicaid Fraud Control Units, and others—on each of the six line items specified for each type of audit. The reported overpayments are subtracted from the states’ Medicaid expenditures, which forms the basis for computing the federal share of program costs. CMS’s regional offices are responsible for overseeing states’ reporting of overpayments identified in the FARs and receive a copy of the FAR when it is sent to a state in its region. The FAR alerts the regional office of its responsibility for ensuring that the state reports the overpayment identified and returns the federal share at the applicable federal match within one year from the date of the FAR. If the state does not return the federal share of an overpayment within one year, the state will be liable for interest on the federal share of overpayments not recovered and not returned. (See fig. 1.) CMS must ensure that state expenditures claimed for federal matching on the CMS-64 are programmatically reasonable and allowable under federal laws, regulations, and policy guidance. To achieve this, CMS relies primarily on its regional offices’ financial and funding staff to perform quarterly reviews and validate state entries on the CMS-64. The 10 regional offices, located throughout the country, validate and audit the reported expenditure data and accompanying detailed information each quarter in their respective states. Federal audits conducted from June 2007 through February 2012 initially identified $20.4 million in potential Medicaid overpayments across 19 states, an amount that was reduced by $7.1 million, primarily due to successful provider appeals and settlements. 
Of the remaining $13.3 million in net overpayments, states recovered $9.8 million in overpayments as of March 2013, and state officials told us that they are in the process of recovering the remaining $3.5 million. (See fig. 2.) (Appendix I summarizes the potential Medicaid overpayments identified by federal audits, net overpayments, and state recoveries.) Of the $7.1 million in overpayment reductions, state officials told us that successful provider appeals and settlements accounted for $6.9 million of that total; the remaining overpayment reductions—approximately $186,000—represented overpayments that states had already identified and recovered or were overpayments that could not be recovered due to the provider filing for bankruptcy. State officials told us that appeals may be successful for a number of reasons. For example, in one state, providers successfully demonstrated that state law did not preclude them from receiving payments for services provided to Medicaid patients, even though they were not approved Medicaid providers. In another state, providers successfully appealed an audit’s finding that services were not medically necessary. The $9.8 million in recovered overpayments represented full recoveries in 15 of the 19 states we reviewed. Officials in the remaining 4 states told us they were in the process of recovering $3.5 million in overpayments, accounting for the remainder of the $13.3 million in net overpayments. Medicaid overpayments may be recovered by offsetting a provider’s subsequent Medicaid reimbursements against the balance due to the state, and officials in 3 states told us that, as a result, certain providers had not yet paid the full amounts owed. 
In the fourth state, officials told us that the overpayments had not been fully recovered for several reasons, including that (1) some providers had not provided full payment, (2) there were pending appeals, and (3) in one case, state officials were unable to confirm the receipt of the FAR from CMS and, as a result, the state had not initiated the recovery process. States should have reported the return of the federal share for $13.3 million on line 5 of the CMS-64, the line designated for overpayments identified by federal audits. Based on data we collected from the states, we found that states made multiple errors reporting the return of the federal share for overpayments identified by federal audits, as detailed below. Instead of reporting $13.3 million, states reported the return of the federal share for $12.4 million, and did not report the return of the federal share for the remaining $855,000. Within the $12.4 million that was reported by states, $6.6 million was correctly reported on line 5 of the CMS-64, while the remaining $5.8 million was reported on the CMS-64, but not on line 5. (See fig. 3.) In addition to incomplete and inconsistent reporting of the return of the federal share, we identified errors in states’ reporting. In particular, states included $20.2 million in overpayments on line 5 of the CMS-64 that were not related to federal audits. This $20.2 million represented errors by 4 states, one of which was a $20 million error made by 1 state. Officials provided several reasons for states’ errors in reporting overpayments, including not understanding CMS’s reporting requirements, frequent state staff turnover, and not being able to identify and, therefore, correctly report overpayments that resulted from federal audits. State officials told us that these misreported overpayments would be correctly reported by June 2013. 
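The dollar amounts in this section fit together as a simple reconciliation, sketched below with the rounded figures reported above (because the published amounts are rounded, the pieces sum to slightly less than the $13.3 million net figure):

```python
# Rounded figures (in millions of dollars) from the federal-audit findings above.
potential = 20.4      # potential overpayments initially identified
reductions = 7.1      # appeals, settlements, and other reductions
recovered = 9.8       # recovered by states as of March 2013
in_process = 3.5      # still being recovered
on_line_5 = 6.6       # correctly reported on line 5 of the CMS-64
elsewhere = 5.8       # reported on the CMS-64, but not on line 5
unreported = 0.855    # federal share not reported at all

net = potential - reductions                        # 13.3 million net overpayments
assert abs(net - (recovered + in_process)) < 0.05   # recovery pieces match the net
assert abs((on_line_5 + elsewhere) - 12.4) < 0.05   # amount states did report
print(round(on_line_5 + elsewhere + unreported, 3))
```

The reported-plus-unreported total comes to about $13.26 million; the roughly $0.045 million gap from the $13.3 million net figure reflects rounding in the published amounts.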
All 7 of the CMS regional offices we spoke with indicated that reviewing state reporting of the return of the federal share of overpayments was a routine part of their quarterly review of the CMS-64. Regional office officials noted that they review state documentation for the amounts entered on the form and check them against the FAR to make sure that states accurately report the return of the federal share. This review is also specifically noted as a required step in the CMS Financial Review Guide, which regional office financial analysts must follow when verifying state entries on the CMS-64. Regional office officials also said that they work with states to improve reporting of overpayments on line 5 of the CMS-64, but there was some variability in how the regional offices required states to correct errors in their reporting. For example, some regional offices verified the overpayment reported on lines other than line 5 of the CMS-64 and encouraged states to report appropriately in the future, while other regional offices required states to make a correction for the quarter in which the error appeared. As part of their reviews, regional office officials noted that when they receive a FAR for one of the states in their region, they follow up with the state to make sure the state is aware of the FAR and the timeframes for returning the federal share. As a result of these efforts, CMS’s regional offices helped ensure the timely return of the federal share for most overpayments. For 59 of the 89 audits we reviewed, states reported the return of the federal share of the overpayment within 1 year as required by federal law. This on-time reporting of the return of the federal share represented $9.8 million out of $13.3 million in net overpayments related to federal audits. For the remaining 30 audits, in which states did not report the return of the federal share within 1 year, $2.7 million was reported late and $855,000 was not reported at all. 
However, in some cases, regional offices were not always aware that states’ reporting was incomplete. In one regional office, the CMS analyst charged with reviewing a particular state’s CMS-64 was unaware that the state did not report returning the federal share from several FARs. In another regional office, it was unclear if the office had received copies of the numerous FARs that a particular state in its region had received in 2011 and 2012. Thus, the regional office was not aware of the need to look for these overpayments and the return of the federal share on the CMS-64. In these two cases, MIG officials were aware of the FARs but the regional offices were not. Additionally, the MIG does not regularly follow up with the regional offices on the status of overpayments, given CMS’s approach of dividing responsibility for state recovery and reporting of overpayments. Rather, the MIG interacts with the offices as needed. MIG officials acknowledged that at the beginning of the audit program, they had more contact with the regional offices on state recovery efforts. But such direct follow-up was not continued because the MIG may review data on line 5 of the CMS-64, enabling it to check the status of state overpayments directly, according to two regional office officials. However, given the errors we identified on the CMS-64, checking the status of state reporting of overpayments based on the reported CMS-64 data would not always yield a clear picture of states’ reporting of the federal share of overpayments. CMS’s audit approach for identifying Medicaid overpayments has shown some success in that states have returned most of the federal share of net overpayments—often within the one-year timeframe. However, state errors in reporting overpayments in the proper location of the CMS-64 prevent CMS from having a full understanding of the extent to which the federal share has been returned from the audits that it conducts. 
Officials in the regional offices play a critical role in reviewing and assisting states in reporting, and they are well positioned to ensure that states correctly report overpayments and the return of the federal share. Additionally, a full accounting of federal audit recoveries is an important gauge for measuring the effectiveness of CMS’s efforts to reduce improper payments, but states’ reporting of overpayments and the return of the federal share is not always clear or complete. Gaps such as these hamper federal efforts to quantify the results of state and federal activities, and make it difficult to determine the extent to which states are returning the federal share of these overpayments. To ensure the timely return of the federal share of Medicaid overpayments, the CMS Administrator should increase efforts to ensure that states are clearly reporting overpayments identified by federal audits in the designated location of the CMS-64 form. We provided a draft of this product to HHS for comment. In its written comments, reproduced in appendix II, HHS concurred with our recommendation and noted that CMS will increase efforts to ensure that states correctly report overpayments on the CMS-64 by providing additional training to states and regional offices on accurate reporting and by improving internal processes to ensure timely resolution of incorrect reporting. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of CMS, and the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Medicaid Overpayments Identified by Federal Audits Conducted June 2007 through February 2012. The appendix table shows, for each of the 19 states with identified overpayments, the number of audits, potential overpayments, net overpayments, and state recoveries: AR (15 audits), CA (2), CO (1), DC (4), DE (6), FL (21), IA (1), KY (1), MD (3), MS (10), NE (1), NM (1), PA (1), SC (5), SD (1), TX (5), UT (1), VA (2), and WA (8). In addition to the contact named above, key contributors to this report were Rashmi Agarwal, Assistant Director; Walter Ochinko, Assistant Director; Sarah Harvey; Drew Long; JoAnn Martinez-Shriver; and Jennifer Whitworth. 
|
While states and certain federal entities have had long-standing roles identifying Medicaid improper payments, the Deficit Reduction Act of 2005 expanded CMS's role in identifying improper payments. As a result, CMS created a national audit program, which uses federal contractors to audit state Medicaid claims and identify overpayments--payments that should not have been made or were higher than allowed--to providers. States are responsible for recovering any identified overpayments and reporting the return of the federal share of those overpayments to CMS. GAO was asked to examine states' efforts to recover and report overpayments identified by federal audits, and examine CMS's review of state reporting. This report assesses the extent to which: (1) states recovered Medicaid overpayments identified by federal audits and reported the return of the federal share, and (2) CMS reviewed states' reporting of Medicaid overpayments related to these federal audits. GAO obtained overpayment and recovery data from all states with an identified overpayment and compared this with CMS data; reviewed relevant laws, regulations, and CMS guidance; and interviewed CMS and state officials. States recovered $9.8 million in Medicaid overpayments, but they did not clearly report the overpayments and the return of the federal share to the Centers for Medicare & Medicaid Services (CMS) within the Department of Health and Human Services (HHS). Federal audits initially identified about $20.4 million in potential Medicaid overpayments across the 19 states with identified overpayments from June 2007 through February 2012. Of the $13.3 million in net overpayments shown below, states recovered $9.8 million and were in the process of recovering the remaining $3.5 million. 
States should have reported the return of the federal share for $13.3 million on the line designated for overpayments identified by national audit program contractors on the CMS-64--the form that states fill out quarterly to obtain federal reimbursement for Medicaid services. However, states made multiple reporting errors. Specifically: instead of reporting $13.3 million, states reported the return of the federal share for $12.4 million and did not report the return of the federal share for the remaining $855,000; and within the $12.4 million that was reported by states, $6.6 million was correctly reported on the CMS-64, while the remaining $5.8 million was reported on the CMS-64, but not on the correct line. CMS generally reviewed states' reporting of overpayments but was not always aware of incomplete reporting. All 7 of the CMS regional offices GAO spoke with indicated that reviewing states' reporting of the return of the federal share of overpayments was a routine part of their quarterly review of the CMS-64 and helped ensure the timely return of the federal share of overpayments in 59 of the 89 audits GAO reviewed. In some cases, though, regional offices were not always aware that states' reporting was incomplete. CMS's Medicaid Integrity Group may review data from these audits on the designated line of the CMS-64, which can be an important gauge for measuring the effectiveness of CMS's efforts to reduce improper payments. However, reviewing these data would not yield a clear picture of the return of the federal share of overpayments given the errors GAO identified in state reporting. GAO recommends that the CMS Administrator increase efforts to ensure that states are clearly reporting overpayments identified by federal audits in the designated location of the CMS-64 form. HHS concurred with this recommendation.
|
When Congress amended the Communications Act of 1934 in 1996, it added a provision to ensure access to communications technologies for people with disabilities. The provision, Section 255, required manufacturers of traditional telecommunications equipment, such as telephones, wireless handsets, or fax machines, and providers of traditional telecommunications services, such as telephone calls, voice mail, or interactive voice response systems, to “ensure that [their products and services] be accessible and usable by people with disabilities” if making them so was “readily achievable.” For a product or service to be “accessible” means that it must have certain input, control, and mechanical functions and meet certain information requirements to operate and use the product. Whether making a product or service accessible is “readily achievable” depends, among other things, on the cost of the required action and the overall financial resources of the entity responsible for bearing that cost. Examples of accessibility features for equipment covered under Section 255 include adjustable displays, screen readers, volume control, and tactile alerts. CVAA was enacted to increase the access of people with disabilities to modern communications. Section 716, a new section of the Communications Act added by CVAA, requires manufacturers and service providers of “advanced communications services” to make their products accessible to people with disabilities. Advanced communications services include VoIP services, electronic messaging services, and interoperable video conferencing services. According to FCC, examples of accessibility features for equipment covered under Section 716 also include adjustable displays, screen readers, volume control, and tactile alerts. Section 718, also added by CVAA, introduced a similar accessibility requirement for manufacturers and service providers of Internet browsers built into mobile phones. 
Under Section 718, the functions needed to operate the browser—for example, typing a web address in the address bar; identifying and activating home, back, forward, refresh, reload, and stop buttons; viewing status information; and activating zooming or other features—must be accessible to individuals who are blind or visually impaired. Importantly, advanced communications services were required to be made accessible unless doing so was not achievable, a more rigorous standard than that applied to traditional telecommunications (47 U.S.C. § 619). CVAA also requires FCC to report biennially to Congress on its implementation of the act; the reports must include an evaluation of accessibility barriers that exist with respect to new communications technologies, and complaints received during the reporting period and their outcomes, among other things. See figure 1 for a timeline of selected CVAA requirements and FCC’s actions to implement them. FCC established accessibility procedures for filing complaints alleging violations of CVAA’s accessibility requirements and enforcement procedures for cases of a violation on October 7, 2011, within the 1-year statutory deadline. See appendix III for more detail on FCC’s actions implementing Section 717’s complaint and enforcement provisions. In developing its complaint and enforcement regulations and procedures, FCC took into account industry and consumer feedback. For example, 2 weeks after CVAA’s October 2010 enactment, FCC issued a public notice soliciting input on complaint and enforcement procedures, and 6 months after enactment, a Notice of Proposed Rulemaking seeking public comment on implementing both complaint and enforcement requirements. FCC received about 100 public comments from October 2010 to May 2011 after publishing those two solicitations. When FCC adopted its complaint procedures in October 2011, it discussed its rationale for accepting or rejecting some public suggestions. FCC continued to seek and receive public comments after adoption of the procedures to prepare its first biennial report to Congress on FCC’s implementation of CVAA, published in October 2012. 
FCC’s complaint and enforcement procedures contain three types of actions that consumers can take with FCC’s assistance. According to FCC, the procedures enable consumers to contact companies directly by using FCC’s database of company contact information—the Recordkeeping Compliance Certification and Contact Information (RCCCI) Registry—facilitate settlements between consumers and companies as informally as possible, and assist consumers with bringing their complaints to companies. The three types of consumer actions FCC established are (1) the pre-complaint Request for Dispute Assistance (RDA) process, (2) the informal complaint process, and (3) the formal complaint process. As of April 1, 2015, FCC had received 48 RDAs and no informal or formal complaints. Example of a Request for Dispute Assistance: A blind consumer was having difficulty finding an accessible mobile handset that had a full read out of all menu levels and audible keystrokes to work with his mobile service provider and that had sufficiently strong signal reception to work in his home. FCC’s Disability Rights Office (DRO) facilitated a resolution by having the service provider work with the consumer to test several models until the consumer was able to find an accessible handset that he could use. FCC requires companies that are subject to the accessibility requirements of Sections 255, 716, and 718 to submit contact information to the RCCCI Registry (47 C.F.R. § 14.31(b)(2)). If a consumer takes no action after 60 days, FCC closes the case. See figure 2 for a flowchart of the RDA process. According to FCC’s October 2011 Report and Order, the purpose of the RDA process is to provide an appropriate amount of time to facilitate settlements and provide assistance to consumers to rapidly and efficiently resolve accessibility issues with companies. In addition, FCC’s intention with the RDA process is to lessen the hesitation of some consumers to approach companies about their concerns or complaints by themselves. 
Further, the report and order stated that FCC’s involvement before a complaint is filed will benefit both consumers and industry by helping to clarify the accessibility needs of consumers for the manufacturers or service providers against which they may be contemplating a complaint, encouraging settlement discussions between the parties, and resolving accessibility issues without the expenditure of time and resources in the informal complaint process. According to FCC, 48 RDAs were filed between October 8, 2013, the date the new CVAA accessibility complaint rules took effect, and April 1, 2015. DRO facilitated a resolution for 43 RDAs by contacting the consumer and the company, and none were escalated to an informal complaint for investigation. DRO dismissed 2 of the pre-complaint RDAs because it was unable to obtain a response from the consumer to gather additional information about the accessibility problem or to facilitate resolution. One RDA was withdrawn by the consumer, and at the time of our review, 2 RDAs were pending as dialogue continued among DRO, the consumers, and the service providers. The informal complaint process begins when a consumer files an informal complaint form. DRO fills out the form based on the information from the consumer’s RDA filing. As required by CVAA, a consumer has multiple methods for filing the informal complaint form, including telephone, online, mail, e-mail, and fax. FCC’s procedures require the Enforcement Bureau (EB) to review the informal complaint form for completeness, including whether it shows a possible violation. EB will dismiss the complaint without prejudice if the form is incomplete, and the consumer may then refile an informal complaint with the correct information. If the form is complete, EB will serve the complaint on the company named in the complaint, which must respond within 20 days of receiving the complaint and send the consumer and FCC a non-confidential summary of its response. 
In this answer, the company must provide sufficient evidence demonstrating that the product or service is accessible or, if it is not accessible, that accessibility was not achievable, or not readily achievable, under the applicable rules. The consumer may reply to the company’s response in writing but must do so within 10 days. EB will investigate the consumer’s allegation and the company’s written response, and is required by CVAA to issue an order within 180 days finding whether the company violated Sections 255, 716, or 718—i.e., that the company did not ensure its product was accessible and could not prove that ensuring accessibility was unachievable—or any of FCC’s implementing rules described in the Code of Federal Regulations. If a violation is found, FCC may direct the company to bring its product into compliance within a reasonable period of time and take other appropriate enforcement action. The rules allow a company the opportunity to comment on a proposed remedial action before FCC issues a final order. See figure 3 for a flowchart of the informal complaint process. FCC’s rationale for making an RDA a prerequisite for filing an informal complaint is that after a consumer has undertaken the RDA process, all parties involved—DRO officials, the consumer, and the company—should have identified the correct manufacturer or service provider that the consumer will name in the informal complaint, and the consumer should have obtained all the information necessary to satisfy the minimum requirements for filing an informal complaint. FCC does not intend the informal complaint process to require legal representation, and it allows both parties to resolve the problem on their own at any time or the consumer to withdraw the informal complaint and file another RDA. To file a formal complaint, the consumer must first obtain an FCC registration number. FCC sends the complaint to the company, which must answer with a defense within 20 days of receipt. 
The consumer may file and serve a reply to the company’s defense but must do so within 3 days of receiving the company’s answer. EB staff review and investigate the consumer’s formal complaint filings and the company’s answer to the complaint, and make a decision. FCC’s procedures do not include a deadline for making a decision on the complaint. As shown in figure 4, the filing of an RDA or an informal complaint is not a prerequisite for filing a formal complaint. Section 717 directed FCC to establish procedures for enforcement actions against violations of Sections 255, 716, and 718. The procedures include the process established for handling informal and formal complaints. As of April 1, 2015, FCC had not taken any enforcement actions against a company because no informal or formal complaints had been filed. FCC has undertaken numerous efforts to conduct its CVAA-mandated informational and educational program to inform the public about the act’s protections and remedies. FCC established a clearinghouse of information on accessibility products and services within the 1-year statutory deadline, and is conducting an outreach program to inform the public about the clearinghouse and CVAA’s protections and remedies. For example, FCC has hosted seminars and webinars on accessibility issues; published consumer guides and news releases; sought public comment on rulemaking and waiver requests in advance of orders; and updated subscribers to its public e-mail service on accessibility-related information. FCC also maintains the Accessibility Clearinghouse on its website, which provides links to, for example, disability organizations, resources by type of disability, products and services, and consumer guides. 
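The filing windows described above can be illustrated with a small deadline calculator. This is only a sketch: it counts calendar days from the date a complaint is served, it assumes the company answers on its due date, and `informal_complaint_deadlines` is a hypothetical helper rather than an FCC tool (FCC's procedural rules govern how deadlines are actually computed, and the statutory 180-day clock runs from filing):

```python
from datetime import date, timedelta

def informal_complaint_deadlines(served: date) -> dict:
    """Sketch of the informal complaint timeline described above: 20 days for
    the company's answer, 10 more days for the consumer's reply (assuming the
    answer arrives on its due date), and 180 days for FCC's order."""
    company_answer_due = served + timedelta(days=20)
    consumer_reply_due = company_answer_due + timedelta(days=10)
    fcc_order_due = served + timedelta(days=180)
    return {
        "company_answer_due": company_answer_due,
        "consumer_reply_due": consumer_reply_due,
        "fcc_order_due": fcc_order_due,
    }

# Example: a complaint served on January 2, 2014.
deadlines = informal_complaint_deadlines(date(2014, 1, 2))
print(deadlines["company_answer_due"])  # 2014-01-22
print(deadlines["fcc_order_due"])       # 2014-07-01
```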
In addition, FCC established the Disability Advisory Committee, which is charged with making recommendations to FCC on the full range of disability access issues, suggesting ways to facilitate the participation of consumers with disabilities in proceedings before FCC, and providing an effective means for stakeholders to exchange ideas and develop recommendations for accessibility solutions. While FCC has undertaken efforts to conduct public outreach, it has not evaluated the effectiveness of its efforts. We previously identified nine key practices that are important to conducting a consumer education campaign. We compared FCC’s outreach efforts with the key practices we identified and found that FCC’s efforts did not always align with the key practices, as shown in table 1. In particular, we found FCC’s efforts do not align with two of the key practices, related to defining goals and objectives and establishing process and outcome metrics. The primary action FCC has taken to ensure industry’s compliance with CVAA’s recordkeeping requirements was to establish the Recordkeeping Compliance Certification and Contact Information (RCCCI) Registry, which is maintained on FCC’s website. Companies subject to any of CVAA’s accessibility requirements must submit an annual certification to FCC, through the RCCCI Registry, that they are maintaining records of their efforts to make their products accessible. Companies enter three types of information into RCCCI: (1) contact information for a person within the company authorized to resolve consumer complaints; (2) contact information for a person within the company in the event of an informal or formal complaint; and (3) a certification that the company is maintaining records of its efforts to make its products and services accessible. 
To inform companies of their recordkeeping and annual certification obligations, FCC has been posting public notices and sending e-mails to its AccessInfo subscribers since 2013, reminding them to submit the required information into the RCCCI Registry annually by April 1. The RCCCI Registry was made available on FCC’s website in January 2013, and as of April 2015, FCC had received approximately 3,777 industry certifications. During our review, FCC officials were unable to say whether the actual number of compliance certifications recorded in the RCCCI Registry represents full industry compliance with CVAA’s certification requirements. In particular, although FCC estimated over 28,000 companies could be affected by CVAA’s requirements, FCC officials told us that this estimate does not accurately capture the number of companies subject to CVAA’s recordkeeping requirements. Additionally, we found some companies that certified compliance with CVAA’s recordkeeping requirements did not believe they were subject to CVAA. For example, a representative from a certifying company in Nebraska we selected randomly to interview told us the company was not maintaining records in accordance with CVAA since the company had not received any complaints about the inaccessibility of its services and there were no people with disabilities in its service area. However, CVAA’s accessibility and recordkeeping requirements are tied to the manufacturing of communications equipment or the provision of communications services, and not whether a company’s clients, in its judgment, are people with disabilities. A few companies responding to our survey indicated that they did not believe the accessibility or recordkeeping requirements of the act applied to them since they were wholesalers rather than retailers of telecommunications or advanced communications services. Nevertheless, these companies had self-certified to FCC that they were complying with CVAA’s recordkeeping requirements. 
Furthermore, FCC cannot say whether industry is fully complying with the requirements to make products and services accessible. As stated previously, CVAA requires FCC to report biennially to Congress on the level of industry compliance with Sections 255, 716, and 718. However, FCC lacks an objective measure for making this determination. In its 2014 biennial report to Congress, FCC based its determination of industry compliance primarily on the public comments it received and industry association reports. For example, based on comments it received, FCC determined that Section 255 companies increased the availability of accessible telecommunications equipment and that Section 716 and 718 companies “made efforts to comply.” Although such commentary does not enable FCC to objectively determine industry compliance, developing an objective measure might not be cost-effective because, to date, FCC has received no informal or formal complaints asserting non-compliance with CVAA accessibility requirements. Stakeholders we surveyed and interviewed generally reported that CVAA’s recordkeeping obligations have not affected the development and deployment of new communications technologies. We asked companies if CVAA’s requirements to maintain records of their efforts to make their products and services accessible affected the development and deployment of new communications technologies. As shown in figure 5, based on responses from the random sample of certifying companies that we surveyed, estimates of the percentage of companies that view CVAA’s recordkeeping requirements as having had no effect on their development and deployment of new communications technology range from 59 to 70 percent (across the four recordkeeping requirements). All percentage estimates from the survey have margins of error at the 95 percent confidence level of plus or minus 14 percentage points or less, unless otherwise noted. 
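The survey's 95 percent confidence bounds can be illustrated with the standard margin-of-error formula for an estimated proportion. This is a sketch under assumed numbers: the survey's actual sample size and design are not given in this excerpt, so the sample size of 50 below is hypothetical, and GAO's published estimates would also reflect its sampling design.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p estimated from a simple random
    sample of size n (z = 1.96 for 95 percent confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: an estimate of 59% from a sample of 50 certifying companies.
moe = margin_of_error(0.59, 50)
print(round(100 * moe, 1))  # margin of error in percentage points
```

With these assumed numbers the margin of error is a bit under 14 percentage points, consistent in magnitude with the bound stated above.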
Furthermore, we estimate that between 27 and 30 percent of companies view the recordkeeping requirements as posing little or no burden to their company, while between 47 and 53 percent view the requirements as posing either “some” or a “moderate” burden. A few companies that described themselves as small commented that the burdens of complying with the recordkeeping requirement had little or no corresponding benefit to consumers. Stakeholders we interviewed did not raise specific harms or benefits associated with CVAA’s recordkeeping obligations on the development and deployment of new communications technologies. As part of our survey, we also asked companies if CVAA’s requirements to make their products and services accessible affected the development and deployment of new communications technologies. As shown in figure 6, we estimate that 72 percent of companies view those requirements as having had no effect on their development and deployment of new communications technologies. An estimated 28 percent of companies view the product accessibility requirements as posing little or no burden on their businesses; another 59 percent describe those requirements as posing either “some” or a “moderate” burden on their companies. Additionally, we estimate that 21 percent of companies increased their efforts to incorporate accessibility features into their products or services due to the passage of CVAA. 
Another 64 percent reported that there had been no change in their efforts to incorporate accessibility features in their products or services as a result of CVAA; however, many of these companies indicated that they had been subject to product accessibility requirements before its passage. A few companies responding to our survey commented that either because they were small or because they were service providers rather than manufacturers, they had very little or no influence over whether technology was more accessible and that the primary way for them to increase the accessibility of their services was through vendor selection. In our interviews, industry associations and advocates for people with disabilities generally indicated that product accessibility had improved since the passage of CVAA. Representatives from industry associations highlighted a number of association-led efforts to bring industry and consumers together to ensure that the needs of consumers with disabilities are being addressed. For example, the Consumer Electronics Association hosts an accessibility roundtable at its annual International Consumer Electronics Show. FCC officials noted that the collaboration of CTIA-The Wireless Association was instrumental in launching the accessibility clearinghouse, and CTIA started its own Accessibility Outreach Initiative in 2013. Representatives from the Telecommunications Industry Association cited their role in efforts to increase the quality of personal sound amplification devices. Advocates for people with disabilities indicated that there were still many ways in which the accessibility of communications technology could be further improved, but some believed that CVAA resulted in more widely available accessible technology. 
CVAA requires manufacturers and service providers of advanced communications services to make their products accessible to people with disabilities and requires FCC to inform the public about the availability of the act’s protections and remedies. Although FCC has undertaken numerous efforts to conduct its informational and educational program, we found FCC could strengthen its public outreach efforts by following key practices, especially related to (1) defining objectives and (2) establishing process and outcome metrics. FCC has spent time and resources conducting public outreach, such as hosting seminars on accessibility issues and attending accessibility conferences. However, FCC has not established formal objectives for the outreach program or process and outcome metrics to assess the extent to which the program is meeting CVAA’s goals. It is important for FCC to have a clear understanding of the goals and objectives of its public outreach prior to establishing the targets necessary to measure the effectiveness of its outreach efforts. By establishing process and outcome metrics, FCC could determine whether the current levels of budgetary and other resources allocated to the outreach program need adjustment. Furthermore, FCC could better ensure the quality, quantity, and timeliness of its outreach regarding CVAA’s protections and remedies. We recommend that the Chairman of FCC evaluate the effectiveness of FCC’s accessibility-related public outreach efforts and ensure those efforts incorporate key practices identified in this report, such as defining objectives and establishing process and outcome metrics. We provided a draft of this report to FCC for its review and comment. FCC provided written comments (reprinted in app. IV) and technical comments, which we incorporated as appropriate. 
In the written comments, FCC stated that it agreed with our recommendation and noted that evaluating its public outreach efforts could enable FCC to determine whether current resources allocated to these efforts are appropriate and help to ensure the quality, quantity, and timeliness of such efforts. FCC stated it is committed to achieving CVAA’s goals with respect to conducting public outreach and it will look to define objectives and establish process and outcome metrics to measure success in achieving those objectives. We are sending copies of this report to the Chairman of the Federal Communications Commission and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. Section 717 of the Communications Act of 1934, as amended by the 21st Century Communications and Video Accessibility Act of 2010 (CVAA), included a provision that we evaluate the Federal Communication Commission’s (FCC) compliance with the enforcement and recordkeeping obligations contained in the act. 
This report presents information on (1) the extent to which FCC established complaint and enforcement procedures within the time frames required by CVAA and conducted public outreach, (2) the actions FCC has taken to ensure industry compliance with CVAA’s recordkeeping provisions and to determine the level of industry compliance with accessibility requirements, and (3) stakeholders’ views on the effect of CVAA’s recordkeeping obligations on the development and deployment of new communications technologies. We did not assess the effectiveness of CVAA’s enforcement provisions and FCC’s enforcement procedures to ensure compliance with CVAA because FCC had not taken any enforcement actions at the time of our review. To determine whether FCC established complaint and enforcement procedures within the required time frames of CVAA, we reviewed the Section 717 provisions related to those procedures and compared them with FCC’s implementing actions. Specifically, we reviewed the procedures FCC publicly released in its October 2011 Report and Order and Further Notice of Proposed Rulemaking, those procedures published as final rules in December 2011 in the Federal Register, and the final rules codified in the U.S. Code of Federal Regulations (C.F.R.). To determine what information FCC provides to the public about filing accessibility complaints, we reviewed its Consumer Help Center web page, which FCC had redesigned and opened during the course of our review. The Consumer Help Center instructs the public on how to seek FCC’s assistance and file complaints not only on accessibility issues, but also on other issues related to TV, phone, internet, radio, and emergency communications. 
To determine FCC’s efforts to inform the public about the availability of the clearinghouse and the protections and remedies available under sections 255, 716, and 718, we reviewed the contents of FCC’s Accessibility Clearinghouse of Information, which can be found at http://ach.fcc.gov/; October 2012 and October 2014 biennial reports to Congress; online Consumer Help Center; and [email protected] e-mails that update public subscribers on accessibility and disability issues. In addition, we reviewed FCC’s October 2011 Report and Order and Further Notice of Proposed Rulemaking and other FCC documents to determine whether FCC established the Accessibility Clearinghouse within the time frames required by CVAA. We assessed FCC’s public outreach efforts against the key practices for consumer education planning identified in our November 2007 report. To identify the key practices of a consumer education campaign, we convened an expert panel composed of 14 senior management-level experts in strategic communications. We selected these experts, who represented private, public, and academic institutions, based on their experience overseeing a strategic communications or social marketing campaign or other relevant expertise. To determine the actions FCC has taken to ensure industry compliance with CVAA’s recordkeeping and annual certification obligations, we reviewed FCC’s October 2011 Report and Order and Further Notice of Proposed Rulemaking and October 2012 and October 2014 biennial reports to Congress, and we interviewed officials from FCC’s Consumer and Governmental Affairs Bureau and Enforcement Bureau. 
We also assessed FCC’s efforts for determining the level of industry compliance with Sections 255, 716, and 718 by analyzing FCC’s assessments in its biennial reports to Congress, analyzing data (covering January 2013 to April 2015) from the Recordkeeping Compliance Certification and Contact Information Registry to determine the number of distinct companies certifying their compliance with CVAA, and interviewing Consumer and Governmental Affairs Bureau officials. To determine the telecommunications industry’s views on the effect of CVAA’s certification and recordkeeping obligations on the development and deployment of new communications technologies, we conducted a web-based survey. We drew a stratified random sample of 355 companies from the population of 3,549 companies that had submitted recordkeeping certifications to FCC as of December 5, 2014. To obtain these 355 companies, we divided the companies into those that had identified themselves as only being subject to the product accessibility requirements of Section 255 of the Communications Act, and those subject to the product accessibility requirements of Sections 716 and 718 of the Communications Act. Companies that submitted a certification to FCC without identifying themselves by a section were excluded from the sample. Additionally, because oftentimes the same person served as the contact person for multiple companies, we took a systematic sample within each stratum to ensure that no one respondent received more than four surveys from us. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Each sample element was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected. 
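The sampling design described above can be illustrated with a short sketch: companies are stratified by the CVAA section they certified under, a random sample is drawn within each stratum, and each sampled company receives a weight so the sample statistically represents the full population. The stratum sizes and the 200/155 allocation below are invented for illustration; only the totals (355 companies sampled from 3,549 certifying companies) come from the text.

```python
import random

# Illustrative sampling frame: certifying companies grouped by section.
# Stratum sizes are hypothetical; they sum to the 3,549 certifying companies.
frame = {
    "section_255": [f"co255_{i}" for i in range(2000)],
    "sections_716_718": [f"co716_{i}" for i in range(1549)],
}

def stratified_sample(frame, sizes, seed=1):
    """Draw a simple random sample within each stratum and attach a base
    weight of N_h / n_h, so each sampled company 'stands for' the companies
    in its stratum that were not selected."""
    rng = random.Random(seed)
    sample = []
    for stratum, companies in frame.items():
        n_h = sizes[stratum]
        chosen = rng.sample(companies, n_h)  # without replacement
        weight = len(companies) / n_h
        sample.extend((stratum, c, weight) for c in chosen)
    return sample

# Hypothetical allocation: 200 + 155 = 355 sampled companies.
sample = stratified_sample(frame, {"section_255": 200, "sections_716_718": 155})
print(len(sample))                             # 355 sampled companies
print(round(sum(w for _, _, w in sample)))     # weights sum to the frame size, 3549
```

Weighting each sampled company by N_h / n_h is what allows survey answers from 355 respondents to be projected to the full population of certifying companies.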
To ensure that our survey questions were clear and logical and that respondents could answer the questions without undue burden, we pretested our questionnaire with five companies. We interviewed pretest companies that varied in company size, accessibility requirement section, and equipment manufacturing or service provision. We administered the survey from February 2015 through March 2015. We received 173 responses, for a 49 percent response rate. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates from the survey have margins of error at the 95 percent confidence level of plus or minus 14 percentage points or less, unless otherwise noted. In addition to sampling errors, the practical difficulties of conducting any survey may also introduce other types of errors, commonly referred to as non-sampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or were analyzed can introduce unwanted variability into the survey results. With this survey, we took a number of steps to minimize these non-sampling errors. For example, our staff with subject matter expertise designed the questionnaire in collaboration with our survey specialists. As noted earlier, the questionnaire was pretested to ensure that the questions were relevant and clearly stated. 
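The margin-of-error arithmetic behind these interval estimates follows standard survey practice: the half-width of a 95 percent confidence interval for an estimated proportion depends on the proportion, the number of respondents, and (for a finite population) a finite-population correction. A rough sketch, using the 173 respondents and 3,549-company population from the text and an invented 60 percent example estimate:

```python
import math

def margin_of_error(p, n, N=None, z=1.96):
    """Half-width of a 95% confidence interval for a proportion p estimated
    from a simple random sample of n respondents, optionally applying the
    finite-population correction for a population of size N."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # finite-population correction
    return z * se

# Hypothetical estimate: 60% of companies report "no effect",
# based on 173 respondents from a population of 3,549 certifying companies.
moe = margin_of_error(0.60, 173, N=3549)
print(round(100 * moe, 1))  # margin of error in percentage points
```

This simple-random-sample formula yields a margin of about 7 percentage points for the example inputs; GAO's actual stratified, weighted design would change the variance somewhat, which is consistent with the report's blanket statement of "plus or minus 14 percentage points or less."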
When the data were analyzed, a second GAO analyst independently verified the analysis programs to ensure the accuracy of the code and the appropriateness of the methods used for the computer-generated analysis. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to have the data keyed into a database and thus avoiding a source of data entry error. In addition, as shown in table 2, we interviewed representatives from 15 organizations to obtain their views of FCC’s actions to implement the provisions of CVAA and of CVAA’s effect on increasing accessibility for people with disabilities. We selected organizations that had submitted two or more comments in reply to FCC’s solicitation of public comments on proposed rules, regulations, and biennial report drafts. The questions we asked in our survey of certifying officers of companies subject to the reporting requirements of Section 717 of the Communications Act of 1934, as amended by Title I of the 21st Century Communications and Video Accessibility Act of 2010, P.L. 111-260 (CVAA), are shown below. Our survey was composed of closed- and open-ended questions. In this appendix, we include all survey questions and aggregate results of responses to selected closed-ended questions; we do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our methodology, see appendix I.

Data intentionally not reported for questions in Section 1: Demographics

This questionnaire covers the following company: (Company name was inserted)

1. Which of the following describes your company? (Check all that apply.) 
Section 255 - Interconnected VoIP Equipment Manufacturer
Section 255 - Interconnected VoIP Service Provider
Section 255 - Telecommunications Equipment Manufacturer
Section 255 - Telecommunications Service Provider
Section 716 - Electronic Messaging Equipment Manufacturer
Section 716 - Electronic Messaging Service Provider
Section 716 - Interoperable Video Conferencing Equipment Manufacturer
Section 716 - Interoperable Video Conferencing Service Provider
Section 716 - Non-interconnected VoIP Equipment Manufacturer
Section 716 - Non-interconnected VoIP Service Provider
Section 718 - Mobile Phone Equipment Manufacturer that includes an Internet browser
Section 718 - Mobile Phone Service Provider that arranges for inclusion of an Internet browser
Other - Please specify: ______________

2. How many people did your company employ as of December 31, 2014? (Please include part-time and temporary staff.)

3. What was your company’s annual revenue for calendar year 2014? Calendar year 2014 annual revenue: $ ________

4a. Approximately what percentage of your company’s annual revenue is accounted for by products or services that are required to be made accessible under Sections 255, 716, and 718 of the Communications Act as amended by CVAA? If you selected “None,” please explain:

4b. Approximately what percentage of your company’s products or services is required to be made accessible under Sections 255, 716, and 718 of the Communications Act as amended by CVAA?

Please note: Sections 255, 716, and 718 contain requirements for manufacturers and service providers to make their products and services accessible to people with disabilities, to the extent achievable.

5. Have the requirements of Sections 255, 716, or 718 had a beneficial effect, no effect, or an adverse effect on your company’s design, development, and deployment of new communications technologies?

5a. 
If you would like to expand on your response to Question 5, please include examples of products or services whose design, development, and deployment were either beneficially or adversely affected by the requirement to be made accessible.

6. In your opinion, what level of burden, if any, have the Section 255, 716, and 718 requirements posed to your company?

6a. If you would like to expand on your response to Question 6, please include examples where the requirement to make your product or service accessible represented what you see as a burden.

Section 3: CVAA Requirements: Recordkeeping

Please note: Section 717 states you “shall maintain, in the ordinary course of business and for a reasonable period, records of the efforts taken by such manufacturer or provider to implement sections 255, 716, and 718, including the following: (i) Information about the manufacturer’s or provider’s efforts to consult with individuals with disabilities. (ii) Descriptions of the accessibility features of its products and services. (iii) Information about the compatibility of such products and services with peripheral devices or specialized customer premise equipment commonly used by individuals with disabilities to achieve access.” Section 717 also requires that “An officer of a manufacturer or provider shall submit to the (Federal Communications) Commission an annual certification that records are being kept in accordance with” CVAA.

7. Have each of the following Section 717 recordkeeping requirements had a beneficial effect, no effect, or an adverse effect on your company’s design, development, and deployment of new communications technologies? (Select one answer in each row.)

7a. If you would like to expand on your response to Question 7, please include examples of products or services whose design, development, and deployment were either beneficially or adversely affected by the recordkeeping requirements.

8. 
In your opinion, what level of burden, if any, have each of the following Section 717 recordkeeping requirements posed to your company? (Select one answer in each row.)

8a. If you would like to expand on your response to Question 8, please include examples where the recordkeeping requirements represented what you see as a burden.

The following statement is excerpted from an FCC Notice of Proposed Rulemaking: “We will not mandate any one form for keeping records (i.e., we adopt a flexible approach to recordkeeping). While we establish uniform recordkeeping and enforcement procedures for entities subject to Sections 255, 716, and 718, we believe that covered entities should not be required to maintain records in a specific format. Allowing covered entities the flexibility to implement individual recordkeeping procedures takes into account the variances in covered entities (e.g., size, experience with the Commission), recordkeeping methods, and products and services covered by the provisions.” (FCC 11-151, paragraph 223)

9. Would more specific guidance from FCC on how to satisfy Section 717’s recordkeeping requirements be helpful to your company?

10. Does your company maintain any of the following documents in support of the Section 717 recordkeeping requirements? (Select all that apply.)
Marketing related documents, including: customer and end user surveys and responses
Engineering documents, including: device and product design documents
Meeting records that discuss relevant subject material, including: presentations and documents studied during the meeting
Relevant test documents, including:
Documents that specifically address Section 255, 716, 717, or 718 requirements, including:
Other documents not listed above
No response

Section 4: Changes Since the Passage of CVAA

Please note: CVAA was signed into law on October 8, 2010.

11. 
Due to the passage of CVAA, has your company increased or decreased its efforts to incorporate accessibility features in its products or services?

Appendix III: Federal Communications Commission’s (FCC) Actions Implementing the Accessibility Complaint and Enforcement Procedures Required by the 21st Century Communications and Video Accessibility Act of 2010 (CVAA)

Section 717 provisions related to FCC’s establishment of complaint and enforcement procedures:
- FCC shall establish regulations within one year after CVAA’s enactment on October 8, 2010, that facilitate the filing of formal and informal complaints that allege a violation of Section 255, 716, or 718. (§ 717(a))
- FCC shall establish procedures for enforcement actions with respect to such violations. (§ 717(a))
- FCC shall not charge any fee to an individual who files a complaint. (§ 717(a)(1))

FCC’s actions implementing the provisions:
- FCC released the procedures on October 7, 2011, and published them under title 47, Part 14 of the Code of Federal Regulations (47 C.F.R. Part 14) on December 30, 2011. The procedures became effective on January 30, 2012.
- FCC established enforcement procedures within the regulations setting forth the process for filing formal and informal complaints. As of March 2015, no informal or formal complaints have been filed with FCC.
- FCC does not charge a fee for filing an informal complaint. In the case of a formal complaint, FCC officials stated that they do not interpret any filing fee requirements to apply to asserting violations of sections 255, 716, or 718. FCC’s October 2011 Report and Order implementing Section 717 indicated that there was a filing fee associated with the formal complaint process.
- FCC’s procedures allow persons to file RDAs and informal complaints online and by telephone, mail, e-mail, and fax. 
Section 717 provisions related to FCC’s establishment of complaint and enforcement procedures:
- FCC shall establish separate and identifiable electronic, telephonic, and physical receptacles for the receipt of complaints filed. (§ 717(a)(2))
- FCC shall investigate allegations in an informal complaint and issue an order concluding the investigation, including a determination of whether any violation occurred. (§ 717(a)(3)(B))
- In the case of a violation determination, FCC may direct the company to bring its service or the next generation of its equipment or device into compliance within a reasonable time. (§ 717(a)(3)(B)(i))
- Before FCC makes a violation determination, the company shall have a reasonable opportunity to respond to such complaint. (§ 717(a)(4))
- Before issuing a final order, FCC shall provide the company a reasonable opportunity to comment on any proposed remedial action. (§ 717(a)(4))
- After the filing of a formal or informal complaint against a company, FCC may request, and shall keep confidential, a copy of the records maintained by the company that are directly relevant to the complaint. (§ 717(a)(5)(C))

FCC’s actions implementing the provisions:
- FCC established investigation actions in its procedures: 47 C.F.R. § 14.37 for informal complaints and 47 C.F.R. § 14.38 for formal complaints.
- FCC established this requirement in its procedures: 47 C.F.R. § 14.37(b)(1). As of March 2015, no informal or formal complaints have been filed with FCC.
- FCC established this requirement in its procedures: 47 C.F.R. § 14.36(b) for informal complaints and 47 C.F.R. § 14.38 for formal complaints.
- FCC established this requirement in its procedures: 47 C.F.R. § 14.37(c) for informal complaints.
- FCC established the requirement for requesting records in its procedures: 47 C.F.R. § 14.31(c) for informal and formal complaints. The procedures require companies to submit a request for confidentiality of these records. 
Section 717 provision related to FCC’s establishment of complaint and enforcement procedures:
- Nothing in FCC’s rules shall be construed to preclude a person who files a complaint and a company from resolving a formal or informal complaint prior to FCC’s final determination in a complaint proceeding. In the event of such a resolution, the parties shall jointly request dismissal of the complaint, and FCC shall grant such request. (§ 717(a)(8))

FCC’s actions implementing the provision:
- FCC’s intent in adopting its procedures was to encourage the non-adversarial resolution of disputes. The informal complaint procedures (47 C.F.R. § 14.37) (1) allow a consumer and a company to talk to each other at any time to resolve the complaint by mutual agreement, and a consumer to withdraw an informal complaint and file another request for dispute assistance, and (2) require the consumer to notify the Enforcement Bureau if either of those two actions has occurred. The formal complaint procedures (47 C.F.R. § 14.50) allow both parties to meet and settle any matter in controversy by agreement and require that both submit a joint statement to FCC of the agreed-upon proposals.

In addition to the individual named above, Sally Moino (Assistant Director), Amy Abramowitz, Enyinnaya David Aja, Susan Baker, Russell Burnett, Justin Fisher, Sam Hinojosa, David Hooper, Stuart Kaufman, Josh Ormond, Dae Park, and Amy Rosewarne made key contributions to this report.
CVAA was enacted to help ensure that people with disabilities have full access to the benefits of technological advances in communications. The act required FCC to establish regulations and conduct public outreach and included a provision that GAO review FCC's efforts. GAO examined (1) the extent to which FCC established complaint and enforcement procedures within CVAA-required time frames and conducted public outreach, (2) the actions FCC has taken to ensure industry compliance with CVAA's recordkeeping provisions and to determine the level of industry compliance with accessibility requirements, and (3) stakeholders' views on the effect of CVAA's recordkeeping obligations on the development and deployment of new communications technologies. GAO reviewed FCC's regulations, orders, and biennial reports to Congress; surveyed a random sample of companies certifying compliance with CVAA requirements; assessed FCC's efforts to conduct public outreach against key practices GAO previously identified through an expert panel; and interviewed FCC officials and representatives from industry associations, consumer advocate groups, and disability research organizations selected based on CVAA-related comments they submitted to FCC. The Federal Communications Commission (FCC) established accessibility complaint and enforcement procedures within the time frames mandated by the 21st Century Communications and Video Accessibility Act of 2010 (CVAA) to ensure that people with disabilities would have access to advanced communications. FCC's complaint and enforcement procedures enable consumers to file (1) a pre-complaint Request for Dispute Assistance (RDA), (2) an informal complaint, or (3) a formal complaint if consumers believe a communications product or service is not accessible to people with disabilities. From October 8, 2013, to April 1, 2015, FCC received 48 RDAs and no informal or formal complaints. 
FCC has undertaken numerous efforts to inform the public about CVAA's protections and remedies by, for example, hosting seminars and webinars and publishing consumer guides on accessibility issues. However, GAO found FCC's efforts do not always align with key practices for conducting public outreach. In particular, FCC has not evaluated the effectiveness of its public outreach efforts. Without such an evaluation, FCC does not know the program's effectiveness in informing the public of the protections and remedies available under CVAA and thus cannot reasonably assure the quality, quantity, and timeliness of the outreach program. Evaluating the outreach efforts would also enable FCC to determine whether current resources allocated to the outreach program are appropriate or need adjustment. FCC has taken limited actions to ensure industry compliance with CVAA's recordkeeping provisions and does not know the extent to which industry is fully complying with the requirements to make products and services accessible. FCC established the Recordkeeping Compliance Certification and Contact Information Registry to help ensure industry compliance with recordkeeping requirements. Companies subject to any CVAA accessibility requirement must submit an annual certification to FCC that they are maintaining records of their efforts to make their products accessible through the Registry. FCC could not say whether industry is complying with CVAA accessibility requirements because FCC lacks an objective measure for making this determination. However, developing a measure might not be cost effective given that FCC has received no informal or formal complaints asserting non-compliance with these requirements. In FCC's 2014 biennial report to Congress, FCC based its determination of industry compliance on public comments and industry association reports. 
Stakeholders GAO surveyed and interviewed generally reported that CVAA's recordkeeping obligations have not affected the development and deployment of new communications technologies. Specifically, GAO estimated that between 59 and 70 percent of companies view CVAA's recordkeeping requirements as having had no effect on their development and deployment of new communications technologies. Overall, industry associations and disability advocates GAO interviewed generally agreed that accessibility improved since the passage of CVAA. Industry associations highlighted a number of association-led efforts to bring industry and consumers together to ensure that the needs of disabled consumers are being addressed. Advocates for people with disabilities indicated that there were still many ways in which the accessibility of communications technology could be further improved, but some believed that CVAA resulted in more widely available accessible technology. FCC should evaluate its public outreach efforts and ensure those efforts incorporate key practices. FCC concurred with the recommendation and intends to take action to address it.
NMFS’ mission is to act as a steward of the nation’s ocean resources and their habitats. This includes responsibility for managing recreational fisheries in federal waters. These waters generally include the United States Exclusive Economic Zone, which typically begins approximately 3 geographical miles from land and extends 200 nautical miles from land. Coastal states generally maintain responsibility for managing fisheries in waters that extend approximately 3 geographical miles from their coastlines. The extent of recreational fishing varies by region, with the greatest amount of marine recreational fishing taking place in the Gulf of Mexico, followed by the South Atlantic and Mid-Atlantic, according to NMFS statistics. Figure 1 shows NMFS statistics about the extent of marine recreational fishing activity overall and the locations of the highest levels of marine recreational fishing activity. Federal fisheries law, as amended in 1996, directs NMFS and the regional fishery management councils to rebuild overfished fisheries, protect essential fish habitat, and reduce bycatch, among other things. The 1996 act included requirements for NMFS and the councils to develop fisheries management plans for fish stocks and to establish required time frames for rebuilding fish stocks that are overfished. A reauthorization of the act was passed in 2006 and established further legal requirements to guide fisheries data collection and management, including mandates on the use of science-based annual catch limits. Under NMFS guidelines, plans should include accountability measures to prevent catch from exceeding the annual catch limit. These measures can include fishing season closures, closures of specific areas, changes in bag limits, or other appropriate management controls. The marine recreational fishing sector is divided between private anglers and the for-hire sector. Private recreational anglers use private boats and sites on shore, such as public docks or private boat clubs, to access marine recreational fisheries. 
Private anglers primarily access marine recreational fisheries by using private boats or by fishing from sites on shore, such as public docks or private boat clubs. The for-hire sector includes both charter boats and “head boats.” Charter boats, which commonly carry six or fewer passengers, are chartered or contracted by anglers for a fishing trip for a flat fee regardless of the number of anglers on the boat. “Head boats” are usually large capacity multipassenger vessels that carry more than six passengers and charge each angler a per person fee for a fishing trip. NMFS has overall responsibility for collecting data to manage federal fisheries. It has several offices involved in fisheries data collection and management, including the Office of Science and Technology, six regional Fisheries Science Centers, and five regional offices. NMFS has numerous partners for collecting data to manage recreational fisheries, including coastal states and three interstate marine fisheries commissions. In addition, NMFS and these partners collaborate with regional fisheries information networks, such as the Gulf Fisheries Information Network and the Atlantic Coastal Cooperative Statistics Program, to collect and manage fisheries data. NMFS also collaborates with eight Regional Fishery Management Councils that are responsible for fisheries conservation and management in specific geographic regions of the country. In addition, NMFS collaborates with numerous other stakeholders, such as private anglers, charter boat operators, seafood dealers, nongovernmental organizations, and recreational fisheries associations, to gather input about fisheries data collection programs and management. Figure 2 shows key stakeholders involved in recreational fisheries data collection. 
NMFS and its stakeholders collect several types of data for use in recreational fisheries management. For example, information is collected on recreational fishing effort and catch rates. Effort measures the number of angler trips, while catch rates measure the average number and size of fish, by species, that are brought to shore, caught and used as bait, or discarded (i.e., caught but then released alive or dead). These data are used to estimate the total recreational fishing catch to determine the impact of recreational fishing activity on fish stock mortality and the changes that are occurring to the fish stock over time. Figure 3 shows how these data are used to estimate total catch. According to NMFS documentation, data on catch and discards are generally collected through shoreside interviews of anglers at public access fishing sites, primarily through NMFS’ MRIP Access Point Angler Intercept Survey, which covers the Atlantic and Gulf coasts from Louisiana to Maine, or through state survey programs. These data may also be collected through the use of onboard observers, typically on charter boats or head boats. Data on fishing effort are collected through MRIP or state programs, using methods such as phone or mail surveys, shoreside interviews, onboard observers, logbooks, boat and boat trailer counts, and electronic monitoring or electronic reporting tools. Given the involvement of the interstate fisheries commissions and states in data collection efforts, methods for collecting data on recreational fishing vary among states and regions. In addition, according to NMFS documentation, biological samples of fish specimens are collected for scientific analysis to provide information on the health and biology of fish stocks. For example, data are collected on the lengths, weights, and ages of fish samples. These samples are often collected during NMFS’ shoreside interviews of recreational anglers or by tagging fish to track after they are caught and released. 
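The combination of effort and catch-rate data into a total-catch estimate described above can be sketched as a simple calculation. The figures below are hypothetical illustrations, not actual NMFS survey results:

```python
# Illustrative sketch of how effort and catch-rate data combine into a
# total-catch estimate, as described in the text. All figures are
# hypothetical, not actual NMFS data.

def estimate_total_catch(angler_trips, catch_per_trip):
    """Total catch = estimated effort (angler trips) x mean catch rate (fish per trip)."""
    return angler_trips * catch_per_trip

# Hypothetical survey results for one species in one region:
effort = 120_000      # estimated angler trips (from an effort survey)
catch_rate = 2.5      # mean fish caught per trip (from shoreside interviews)

total_catch = estimate_total_catch(effort, catch_rate)
print(total_catch)    # 300000.0
```

In practice, NMFS produces these estimates separately by region, species, and survey wave and then compiles them into regional and national totals, so this single multiplication is only the core of a much larger statistical design.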
Academic programs and cooperative research with the fishing industry are other sources of biological sampling data. In addition to collecting data on marine recreational fisheries, NMFS and its stakeholders, such as states, collect other types of data including data on commercial fisheries. Unlike recreational fisheries data, however, commercial fisheries data are collected through a census of the weight and value of all fish species sold to seafood dealers using a network of cooperative agreements with states. According to NMFS documentation, in some regions, state fishery agencies are the primary collectors of commercial fisheries data that they receive from seafood dealers who submit periodic reports on the amount and value of the various fish species they purchase. In addition, independently from recreational or commercial fishing data collection efforts, NMFS and its stakeholders also collect information on the abundance of fish stocks and environmental conditions in fish habitats, such as seafloors, open ocean water, and natural and artificial reefs. These data are used to determine the size, age composition, and distribution of fish stocks, and allow NMFS to track the total abundance of fish stocks over time. NMFS officials told us NMFS relies on its own research vessels or contracted commercial fishing vessels to collect abundance data. NMFS uses these various types of data to conduct fish stock assessments that estimate, among other things, the population of fish stocks, fish stock productivity, and biological reference points for sustainable fisheries. NMFS and the Regional Fishery Management Councils in turn use the fish stock assessments to examine the effects of fishing activities on the fish stocks and make determinations such as whether stocks are overfished and whether overfishing is occurring. 
According to NMFS documentation, the data are also used to support management decisions, such as setting limits on how many fish can be caught annually or determining the need to close a recreational fishery for a particular fish stock during an open fishing season, called an in-season closure, when annual catch limits are anticipated to be exceeded. In 2006, the National Research Council issued a report that reviewed NMFS’ marine recreational fisheries data collection programs and made numerous general and specific recommendations to address weaknesses. Among other things, the council recommended the redesign of all marine recreational fishing surveys funded by NMFS. In addition, the council recommended that NMFS improve its survey coverage by either developing a national registration of all saltwater anglers or by using new or existing state saltwater license programs that would provide appropriate contact information for all anglers fishing in all marine waters, both state and federal. The 2006 reauthorization of the Magnuson-Stevens Act included requirements for NMFS to take into consideration and, to the extent feasible, implement the recommendations in the National Research Council report. Subsequently, in October 2008, NMFS began implementing MRIP, managed in NMFS’ Office of Science and Technology, to collect recreational fisheries effort and catch data and develop estimates for use in fisheries management. MRIP was intended to coordinate collaborative efforts among NMFS and its various stakeholders to develop and implement an improved recreational fisheries statistics program. MRIP consists of a system of regional surveys that provide effort and catch statistics for use in the assessment and management of federal recreational fisheries. 
According to NMFS officials, because counting every recreational angler or observing every fishing trip is not possible, NMFS relies upon statistical sampling to estimate the number of fishing trips recreational anglers take and what they catch. The data gathered from the regional surveys are compiled to provide regional and national estimates. Under MRIP, certain states, including California, Oregon, and Washington, have implemented recreational fisheries data collection programs funded, in part, by NMFS; these data are also used to inform fisheries management. Also, some states have developed and implemented other recreational fisheries data collection programs funded, in part, through mechanisms such as fee-based fishery programs in those states. Figure 4 provides a timeline of key legislative and other events related to marine recreational fisheries data collection and management. Since the 2006 National Research Council report, NMFS and some state officials have identified several challenges related to collecting data to manage marine recreational fisheries, such as obtaining quality recreational fishing data to inform scientific analyses and produce credible effort and catch estimates. NMFS and some state officials also identified challenges with collecting recreational fisheries data in a timely manner to support certain recreational fisheries management decisions. In addition, NMFS and some state officials, as well as some other stakeholders such as private recreational anglers, identified challenges regarding how NMFS communicates with stakeholders about its marine recreational fisheries data collection efforts. Examples of NMFS’ challenges in obtaining quality recreational fishing data through MRIP to inform scientific analyses and produce credible effort and catch estimates include: Identifying the universe of recreational anglers. NMFS faces a challenge in obtaining complete information on the universe of recreational anglers. 
According to NMFS officials, MRIP created a national saltwater angler registry to obtain more complete information about recreational anglers. However, this registry does not include anglers in states bordering the Atlantic Ocean and Gulf of Mexico because NMFS granted those states exemptions from the national registry requirement. According to NMFS officials, NMFS relies on state angler registries to identify the universe of recreational anglers in those exempted states. However, some state angler registries offer exemptions from fishing permit requirements, such as for individuals under or over certain ages, and NMFS officials noted that not all anglers comply with state licensing and registration requirements. Therefore, these anglers do not appear on state angler registries. As a result, NMFS does not have a complete list of recreational anglers. Obtaining sufficient coverage in effort surveys. According to some state officials, NMFS faces challenges in ensuring that it covers the full range of anglers among the participants it selects to participate in fishing effort surveys so that they are representative of the overall angler population. For example, NMFS has relied on its Coastal Household Telephone Survey, which randomly selects participants from all potential household telephone numbers in coastal counties, to obtain information about shoreside and private boat fishing effort in the Gulf of Mexico and the Atlantic coast. As a result, the survey does not capture recreational anglers from noncoastal states who travel to fish in the Gulf of Mexico or Atlantic coast, or coastal resident anglers in households that do not have a landline phone. NMFS officials acknowledged this limitation with the Coastal Household Telephone Survey. Targeting a representative sample in shoreside surveys. 
According to NMFS officials, NMFS faces challenges in collecting data on a portion of the recreational fishing sector since it generally does not collect data on private property or at private-access fishing sites. According to NMFS officials and other governmental stakeholders, this is an issue in states that have many private-access sites, such as California and Florida, because there may be a significant portion of the recreational fishing sector that is not being surveyed. As a result of this limitation, according to NMFS officials and some state officials, NMFS relies on untested assumptions about, for example, catch and discard rates for anglers that use private-access fishing sites to develop recreational catch estimates. However, NMFS officials noted that survey data on fishing effort are collected from anglers regardless of whether they fish from public or private-access fishing sites. In addition, according to one state official, NMFS’ standard protocols for determining when and where to assign shoreside observers to conduct interviews may not take into account local fishing patterns and, therefore, observers may not be located in the right places at the right times to collect the most representative data. For example, according to this official, NMFS’ protocols for assigning shoreside observers do not account for the length of time anglers would typically take to reach federal waters and return from their trip. As a result, observers may not be at the shoreside when anglers return. Obtaining a sufficient number of survey responses and biological samples. According to NMFS and some state officials, NMFS faces the challenge of collecting a sufficient number of survey responses and samples in its effort and catch surveys. 
For example, some NMFS and state officials told us Coastal Household Telephone Survey response rates have been declining, and a 2014 report prepared for NMFS noted that response rates to the survey had “declined considerably” in the previous decade, which could increase the potential for bias in the data collected on recreational angler fishing effort. Also, one state official told us he does not believe NMFS assigns enough shoreside observers to collect the recreational angler catch and discard data needed to develop precise recreational catch estimates. In addition, another state official told us that the lack of shoreside observers has contributed to an insufficient number of biological samples collected to adequately address scientific needs. Consistent with these views, in 2013 NMFS’ Southeast Fisheries Science Center identified a need for more fish tissue samples in its region to aid in assessing fish stock reproduction. Obtaining valid survey responses. According to some state and NMFS officials, obtaining valid survey responses can be challenging because they depend on anglers’ recollections of prior fishing events. NMFS officials told us that the accuracy of self-reported data (i.e., data that rely on participants providing responses based on personal observations) depends on the angler’s ability to recall events or to distinguish between different fish species. However, anglers may not be able to accurately recall details about fish they caught and then discarded, especially as time elapses or because of limited knowledge about fish species, and without independent validation or verification, those data may be inaccurate. According to NMFS officials, these challenges affect the Coastal Household Telephone Survey because the survey asks anglers how many saltwater fishing trips were taken in the previous 2 months, but it does not use observers or other mechanisms to independently validate and verify this self-reported data. 
Obtaining key recreational fisheries data. According to NMFS and some state officials, NMFS faces a challenge in collecting complete data on discards—that is, fish that are caught but then released—because of the difficulty of validating and verifying self-reported data as previously discussed. In light of this difficulty, Louisiana does not collect recreational angler discard data as part of its own recreational fisheries data collection program because of concerns about the quality of angler self-reported data, according to a state official. Even given the uncertainty in identifying the exact number of discards, the number of discards can be substantial—for example, according to NMFS statistics, the majority of fish caught by marine recreational fishermen in 2013 were discarded. NMFS officials told us that discarded fish that are caught at great depths often experience high mortality rates due to barotrauma. As a result of limited information about the number of discarded fish and their mortality rates, according to NMFS officials, NMFS relies on assumptions about the mortality rates of discarded fish to produce or adjust recreational catch estimates. NMFS also faces challenges in collecting timely marine recreational fishing data to support certain fisheries management decisions, according to NMFS and some state officials we interviewed. According to NMFS officials, the Magnuson-Stevens Reauthorization Act of 2006 implemented new requirements that have greatly expanded the pressures on fisheries managers to rely on timely data to make decisions. However, according to NMFS and some state officials, NMFS’ data collection systems have not evolved quickly enough to support management decision making. For example, it takes 2 months to conduct the Coastal Household Telephone Survey, which collects data on recreational fishing effort in the Gulf of Mexico and the Atlantic coast, and about 45 days to analyze the data and produce recreational fishing estimates. 
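The adjustment described above, in which NMFS applies an assumed mortality rate to discarded fish when producing or adjusting catch estimates, amounts to a simple weighted sum. The counts and rate below are illustrative assumptions, not NMFS figures:

```python
# Sketch of adjusting a recreational catch estimate for discard mortality,
# per the approach described in the text. All numbers are illustrative
# assumptions, not actual NMFS estimates.

def total_removals(landings, discards, discard_mortality_rate):
    """Landed fish plus the share of released fish assumed to die."""
    return landings + discards * discard_mortality_rate

landings = 50_000     # fish kept and brought to shore (estimated)
discards = 80_000     # fish caught and released (estimated)
mortality = 0.15      # assumed discard mortality rate (e.g., from barotrauma)

print(total_removals(landings, discards, mortality))  # 62000.0
```

Because the discard count is self-reported and the mortality rate is assumed rather than observed, the resulting removals figure inherits the uncertainty of both inputs, which is the crux of the data quality challenge described here.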
According to NMFS and some state officials, as a result of these timing issues, NMFS managers do not have enough information to make informed decisions about whether to initiate in-season closures for certain fish stocks with annual catch limits in order to prevent anglers from exceeding those limits. State officials frequently highlighted this as a concern in managing the Gulf of Mexico red snapper, which is susceptible to in-season closures because of concerns about overfishing. According to NMFS documentation, this fishery has been subject to shortened federal fishing seasons over the last few years—including seasons of 9 days in 2014 and 10 days in 2015, compared with 75 days in 2009 and 42 days in 2013. NMFS, some state officials, and some other stakeholders, such as private recreational anglers, have also identified challenges in how NMFS communicates with stakeholders about its fisheries data collection efforts. For example, a fisheries official from Texas said that, although Texas provides NMFS with marine recreational fisheries data, NMFS does not clearly communicate how or if it uses those data. Some private recreational anglers also told us that NMFS has not always sufficiently communicated with the public about its activities, creating concerns about a lack of transparency regarding NMFS’ fisheries management decisions. For example, some private anglers told us they are confused because NMFS has not explained why it continues to shorten the Gulf of Mexico red snapper fishing season even though the red snapper population has increased. NMFS officials acknowledged that NMFS has not always clearly communicated with regional stakeholders to explain its decision-making processes, stating that this has contributed to the public’s misperceptions. 
As a result of the challenges that have been identified with collecting fisheries data, NMFS officials told us they face a lack of public confidence and trust in their ability to provide the data needed for managing recreational fisheries. For example, according to a Texas fisheries official, Texas withdrew from NMFS’ recreational data collection program and implemented its own data collection program in the late 1970s because it did not believe that NMFS’ data collection methods suited Texas’ needs for managing recreational fisheries. Similarly, in 2014, Louisiana withdrew from MRIP and implemented its own recreational fisheries data collection program, called LA Creel, because of concerns about MRIP data being able to support Louisiana’s needs for managing recreational fisheries, according to a Louisiana fisheries official. In addition, according to state officials, Mississippi and Alabama have also independently initiated efforts to collect data on the abundance of certain fish, including red snapper, in artificial reefs off the coasts of these states because of concerns that NMFS’ current data collection methods underestimate the abundance of these fish stocks. Citing dissatisfaction with NMFS’ management of the Gulf of Mexico red snapper fishery, the states bordering the Gulf of Mexico released a proposal in March 2015 to transfer the responsibility for managing Gulf of Mexico red snapper from NMFS to these states. NMFS has taken several steps aimed at improving data collection to manage marine recreational fisheries and addressing challenges related to communicating with stakeholders. However, some data collection challenges persist, and NMFS does not have a comprehensive strategy to guide its efforts to improve recreational fisheries data collection. 
NMFS has taken steps to address some of the challenges it faces in collecting data for managing marine recreational fisheries, including steps aimed at collecting quality data to support scientific analyses and producing credible effort and catch estimates, improving the timeliness of data collection, and improving communication with stakeholders. However, even with the various steps NMFS has taken, agency officials said that some challenges persist. In April 2015, NMFS requested that the National Research Council review MRIP to determine the extent to which NMFS has addressed the recommendations in the 2006 National Research Council report. An NMFS official told us the National Research Council has initiated the review process, and NMFS expects the review to be completed in 2017. NMFS has taken several steps to address the challenges it faces in collecting quality data. To address the challenge of identifying the universe of recreational anglers, NMFS documents indicate that by October 2011 NMFS had entered into memoranda of agreement with states and United States territories that were exempt from the national registry requirements, whereby these states and territories agreed to submit their data on marine recreational fishing participants to NMFS for inclusion into the national registry. In 2011 and 2012, NMFS provided approximately 20 grants to states through the interstate marine fisheries commissions to support initial data quality improvement projects. Subsequently, in 2012 and 2013, NMFS received state angler registry data from each of the exempted Atlantic and Gulf Coast states and entered the data into the national registry database. During this same period, NMFS made recommendations to the states on improving their recreational angler databases. NMFS also continued to provide funds to the states through the commissions to support the initial data quality improvement projects, according to NMFS documents. 
To address both regional and national needs for effort and catch data, NMFS has supported the redesign of state and federally managed surveys in all regions. For example, in 2009, NMFS initiated a series of pilot studies to address declining participation rates in telephone recreational fishing effort surveys and potential gaps in the data that could skew survey results due to limitations in reaching coastal residences. NMFS conducted these pilot studies to determine whether mail survey methods for collecting recreational fishing effort data would improve estimates. In a July 2014 report, NMFS stated that the findings from the study indicated that mail survey response rates were nearly three times higher than the telephone survey response rates. Given these results, in May 2015, NMFS issued a plan for transitioning from the current Coastal Household Telephone Survey to a newly designed mail-based survey, referred to as the Fishing Effort Survey. According to NMFS documentation, NMFS expects the Fishing Effort Survey to be fully implemented by January 2018, as shown in figure 5. In 2013, NMFS also issued new protocols for the Access Point Angler Intercept Survey. Under these new protocols, NMFS assigns shoreside observers to specific locations at precise times to address potential data gaps related to where and when the data are collected. According to NMFS officials, the new peer-reviewed survey design is intended to provide complete coverage of fishing trips ending at public access sites with representative sampling of trips ending at different times of day. Also in 2013, NMFS initiated a science program review to help provide a systematic peer review of its fisheries data collection programs at its six regional Fisheries Science Centers and Office of Science and Technology. 
As part of this effort, peer review panels evaluated NMFS’ data collection and management programs in 2013, subsequently issuing a report identifying a number of crosscutting national challenges and making several recommendations to address them. For example, the report recommended that NMFS develop a plan for providing the data necessary for conducting fish stock assessments. NMFS has also initiated efforts to evaluate the potential of electronic monitoring and reporting to address data quality challenges. For example, according to NMFS officials, as of October 2015, NMFS was working with stakeholders in Florida to test the use of a smartphone- and Internet-based electronic reporting tool called iAngler to collect and report data on recreational effort and catch. NMFS is also working with Texas on an electronic reporting tool called iSnapper to test the collection of self-reported catch data, according to NMFS officials. In addition, NMFS issued a policy directive in May 2013 to provide guidance on the adoption of electronic technologies to complement or improve existing fishery data collection programs. In 2013, NMFS began working with its regional Fisheries Science Centers to develop regional plans to identify, evaluate, and prioritize the implementation of electronic monitoring and reporting technologies. According to NMFS documents, each of NMFS’ regional offices, in consultation with the Fisheries Science Centers, issued implementation plans in January and February 2015 that include a focus on using electronic technologies to improve the quality of recreational fishing data and data timeliness. Figure 6 shows examples of electronic monitoring and reporting technologies. However, even with the various steps NMFS has taken, agency officials said that some challenges persist. 
For example, according to NMFS officials we interviewed, NMFS uses independent checks to either validate self-reported data or estimate a reporting error that can be used to produce unbiased estimates, but the agency faces challenges in independently validating and verifying self-reported angler data. In addition, NMFS officials told us the 2006 National Research Council report contains recommendations that the agency has not yet addressed, including developing methods for improving the accuracy of estimates for the number of discarded fish and addressing the potential bias resulting from the exclusion of private access sites from shoreside surveys. NMFS officials agreed that additional effort should be undertaken through MRIP to evaluate alternative methods for obtaining and verifying discard data. According to NMFS officials, they initiated a process in October 2015 for developing strategies to address these challenges. NMFS has begun taking steps to improve the timeliness of its recreational fisheries data to support certain fisheries management decisions but, according to NMFS officials and stakeholders, this data timeliness challenge has not been fully addressed. For example, according to NMFS documentation, in fiscal year 2015, NMFS began studying the feasibility of moving from a 2-month survey period to a 1-month survey period—that is, conducting the survey each month to collect data on the previous month’s fishing activity—in its new mail-based Fishing Effort Survey as a way to help reduce recall errors and improve the precision and timeliness of recreational fishing effort estimates. However, some stakeholders told us that NMFS’ new mail-based Fishing Effort Survey will still not provide enough timely data to inform in-season closure decisions for federal Gulf of Mexico red snapper seasons. 
NMFS officials acknowledged limitations with this approach, noting that in-season closure decisions are based on the previous year’s recreational fishing catch estimates. According to NMFS officials, beginning in 2013, NMFS coordinated a series of MRIP workshops with fisheries officials from Alabama, Florida, Louisiana, Mississippi, and Texas to discuss options for improving the timeliness of data to support Gulf of Mexico red snapper in-season closure decisions. NMFS officials told us that they will continue to collaborate with their Gulf state partners to develop supplemental surveys focused on red snapper that can be integrated with the more general MRIP survey approach. According to NMFS officials, NMFS and the Gulf of Mexico Fisheries Information Network recently developed a timeline that describes the process and timing for making key decisions about future red snapper specialized survey methods, as shown in figure 7. NMFS officials told us that, as of October 2015, the states had concurred with the timeline. According to NMFS and a state official, addressing some of the data collection challenges related to quality and timeliness entails making trade-offs. For example, according to NMFS officials, NMFS also held a workshop in March 2011 with several recreational fishing stakeholders, such as states and councils, to address the need for more timely and precise updates in a short-season fishery. NMFS officials told us the workshop identified several ways in which improvements could be made, but they concluded that more resources beyond what MRIP could afford would be needed to implement those improvements. NMFS’ new Fishing Effort Survey collects data on recreational fishing effort that targets many fish stocks, including some that do not require the timely data needed to make fishery management decisions within a shortened federal fishing season. 
However, according to NMFS officials and a state official, implementing a separate survey that specifically targets Gulf of Mexico red snapper would likely entail shifting resources from other surveys, such as the Fishing Effort Survey. According to NMFS officials, trade-offs also are often necessary to balance the competing needs of state and federal fisheries management and, as a result, NMFS prioritizes among competing demands for data. NMFS has attempted to address the need to understand the trade-offs involved in data collection; according to NMFS documentation, tools intended to help evaluate possible resource allocation trade-offs were expected to be available for use in 2014. However, according to NMFS officials, the tools were not in place as of October 2015, and NMFS has not determined when the tools will be available. The officials said that the tools were being developed in collaboration with academia, but the project stalled because the project leader left the academic institution, and the institution has not yet found a replacement. NMFS has also taken steps to improve communication with recreational fisheries stakeholders about recreational data collection. NMFS has worked with its MRIP Executive Steering Committee to address priority communication initiatives through various MRIP teams. For example, the MRIP communications and education team plans to implement a communications strategy—entailing various communication activities such as webinars—to support the transition from the Coastal Household Telephone Survey to MRIP’s new mail-based Fishing Effort Survey. According to NMFS officials, the agency is developing an MRIP strategic communications plan to guide its transition to the Fishing Effort Survey, which was expected to be finalized by the end of October 2015. 
To further enhance MRIP communications, in 2014, the MRIP communications and education team began restructuring its communications network by developing MRIP communication teams at the regional level. Some of NMFS’ steps to improve communication have resulted in increased collaboration with recreational fisheries stakeholders, according to NMFS and state officials. For example, according to a state fisheries official, NMFS coordinated with the state to provide state officials greater input in determining observer assignment schedules and locations as part of the new protocols for the Access Point Angler Intercept Survey. NMFS officials told us that they are also working collaboratively with Louisiana to perform a side-by-side comparison of MRIP data with data collected under Louisiana’s LA Creel data collection program, to determine whether LA Creel can be used as an alternative to MRIP surveys. According to NMFS officials, in early 2016, NMFS and Louisiana plan to evaluate the results of the side-by-side comparison to determine next steps. Regarding stakeholder concerns about NMFS’ lack of data on fish stock abundance in reef habitats, NMFS officials told us that NMFS plans to use data collected by academic partners on red snapper abundance on artificial reefs in its Gulf of Mexico red snapper fish stock assessment. NMFS also has worked with the Atlantic States Marine Fisheries Commission and the Atlantic Coastal Cooperative Statistics Program to transition from a NMFS-led data collection system to a state-led data collection approach. In 2016, according to an NMFS official, the Atlantic Coast states will assume responsibility for conducting the Access Point Angler Intercept Survey shoreside interviews to collect marine recreational fishing data from anglers, and NMFS’ role will be to review, certify, and provide funds to support these data collection efforts. 
NMFS is also placing renewed emphasis on collaborating with its regional partners to determine future data collection needs and priorities for improving recreational fisheries effort and catch surveys, according to NMFS documents. For example, NMFS’ 2013-2014 MRIP implementation plan recommended establishing a hybrid approach to MRIP data collection. Under this approach, NMFS is to maintain a central role in developing and certifying survey methods and establishing national standards and best practices for data collection, while regions—through the regional fishery information networks or their equivalent—are to be responsible for selecting survey methods and managing data collection. According to NMFS officials and NMFS documentation, NMFS staff participated in a workshop in July 2013 to discuss the initial planning stages for developing this new regional approach to recreational fisheries data collection. According to NMFS officials, NMFS is developing MRIP Regional Implementation Plans to address regional data collection needs and priorities. The NMFS officials said that the West Coast region is scheduled to have a Regional Implementation Plan in early 2016. The officials said the Atlantic and Gulf Coast regions support the new approach to data collection and plan to complete their respective MRIP Regional Implementation Plans in 2016. As part of the new hybrid MRIP data collection approach, NMFS is in the process of identifying regional recreational fisheries data collection funding priorities. Challenges related to how NMFS communicates with stakeholders, however, persist. For example, some Gulf Coast state fisheries officials expressed concerns that NMFS has not provided sufficient information to improve communication regarding its recreational fisheries data collection activities. 
One state fisheries official said that NMFS has made some progress working with stakeholders to identify MRIP initiatives to improve recreational fisheries data collection, but it has not adequately communicated how it intends to coordinate and collaborate with its stakeholders to implement MRIP initiatives. Some stakeholders continue to express concerns that NMFS is not adequately communicating its process for developing Gulf of Mexico red snapper catch and effort estimates. For example, some stakeholders cited the presence of larger and more numerous red snapper in the Gulf of Mexico and questioned the need for continued catch limits and fishing restrictions. NMFS officials told us that, although the Gulf red snapper population is rebounding and the average weight of red snapper caught by anglers has increased, NMFS’ most recent stock assessment confirms that Gulf red snapper continue to be overfished. Therefore, as required by the Magnuson-Stevens Act, red snapper continue to be managed under a stock rebuilding plan. According to these officials, annual catch limits for red snapper are being reached more quickly due to several factors, including higher catch rates and more fishing effort being directed at the more abundant rebuilding stock. This has required even shorter fishing seasons despite increasing stock abundance and corresponding increases to annual catch limits. NMFS officials stated that, in response to a history of exceeding annual red snapper catch limits, as well as litigation, NMFS is now setting the length of the red snapper fishing season based on a recommendation by the Gulf of Mexico Fishery Management Council to use a buffer of 20 percent of the annual catch limit. This buffer is intended to account for uncertainty resulting from the difficulty of obtaining timely and precise catch estimates, as well as uncertainty stemming from state regulations that provide for longer seasons in state waters. 
NMFS officials acknowledged that achieving stakeholder understanding of this complex process is an ongoing concern, but they told us they plan to continue communicating with stakeholders to help convey the rationale behind NMFS’ fisheries management decisions. NMFS has taken steps aimed at addressing several data collection challenges, but it does not have a comprehensive strategy to guide its efforts to improve recreational fisheries data collection. The Government Performance and Results Act Modernization Act of 2010 requires, among other things, that federal agencies develop long-term strategic plans that include agency-wide goals and strategies for achieving those goals. Our body of work has shown that these requirements also can serve as leading practices at lower levels within federal agencies, such as at NMFS, to assist with planning for individual programs or initiatives that are particularly challenging. Taken together, the strategic planning elements established under the act and associated Office of Management and Budget guidance, and practices we have identified, provide a framework of leading practices in federal strategic planning and characteristics of good performance measures. These practices include defining a program’s or initiative’s goals, defining strategies and identifying the resources needed to achieve the goals, and developing time frames and using performance measures to track progress in achieving them and inform management decision making. Furthermore, key practices related to communication call for communicating information early and often and developing a clear and consistent communications strategy to help develop an understanding about the purpose of planned changes, build trust among stakeholders and the public, cultivate strong relationships, and enhance ownership for transition or transformation. 
According to a NMFS official, the initial 2008 MRIP implementation plan and the subsequent updates are the key documents used to guide NMFS’ recreational fisheries data collection efforts. However, based on our review, NMFS’ MRIP implementation plans do not constitute a comprehensive strategy for improving recreational fisheries data collection consistent with the framework previously discussed. For example, the implementation plans do not consistently and clearly define NMFS’ goals, identify the resources needed to achieve the goals, or develop time frames or performance measures to track progress in achieving them. Based on our analysis, NMFS does not have a comprehensive strategy because it has been focused primarily on implementing the recommendations of the 2006 National Research Council report. A NMFS official confirmed that MRIP initially focused on implementing the recommendations in the 2006 National Research Council report and meeting the requirements to improve recreational fisheries data collection as described in the Magnuson-Stevens Reauthorization Act that was passed in 2006. According to NMFS officials, the agency’s first priority was to address the recreational fisheries survey design issues identified in the 2006 National Research Council report. Specifically, NMFS determined that it would first design, test, review, certify, and implement new survey designs, such as the new mail-based Fishing Effort Survey. As previously discussed, NMFS intends to transition to a regional data collection approach whereby the agency will collaborate with regional stakeholders, such as states, to identify regional data collection needs. NMFS officials told us that, in hindsight, NMFS could have benefited from a more robust strategic planning approach to MRIP implementation and stated that NMFS recognizes the need to enhance its strategic planning as it begins to transition to a regional data collection approach. 
NMFS officials told us that NMFS intends to develop strategic planning documents to guide future individual initiatives, using NMFS’ experiences with the transition to the new mail-based Fishing Effort Survey as a template, but they did not provide information about how, or whether, they planned to integrate these documents into a comprehensive strategy or how they would communicate such a strategy to NMFS’ stakeholders. Without a comprehensive strategy, NMFS may have difficulty ensuring that the variety of steps it is taking to improve data collection are prioritized so that the most important steps are undertaken first and may find it difficult to determine the extent to which these steps will help address challenges. Further, without communicating the strategy and NMFS’ progress in implementing it, NMFS may have difficulty building trust among its stakeholders, and these stakeholders may have difficulty tracking the agency’s efforts. Recognizing the importance of collecting quality and timely data at an acceptable cost to guide recreational fisheries management and conduct fish stock assessments, NMFS has taken many steps to improve its data collection, such as funding several pilot programs to test alternative data collection methods. NMFS has also initiated a fundamental shift in its data collection approach, envisioning a standard-setting and oversight role for NMFS rather than actual data collection, which is to be carried out by partners. However, NMFS does not have a comprehensive strategy to guide the implementation of its various efforts. Without a comprehensive strategy and associated performance measures to assess progress, NMFS may have difficulty ensuring that the variety of steps it is taking to help address the challenges it faces are prioritized so that the most important steps are undertaken first. Likewise, NMFS may have difficulty determining the extent to which these steps will help address challenges or if a different approach may be needed. 
Moreover, without clearly communicating the strategy to its stakeholders, NMFS may find it difficult to build trust, potentially limiting its ability to effectively implement MRIP improvement initiatives that rely on data collection partners. To improve NMFS’ ability to capitalize on its efforts to improve fisheries data collection for managing marine recreational fisheries, we recommend that the Secretary of Commerce direct NOAA’s Assistant Administrator for Fisheries to develop a comprehensive strategy to guide NMFS’ implementation of its marine recreational fisheries data collection program efforts, including a means to measure progress in implementing this strategy and to communicate information to stakeholders. As part of this strategy, NMFS should clearly identify and communicate programmatic goals, determine the program activities and resources needed to accomplish the goals, and establish time frames and performance measures to track progress in implementing the strategy and accomplishing goals. We provided a draft of this report to the Department of Commerce for comment. In its written comments (reproduced in app. II), NOAA, providing comments on behalf of Commerce, agreed with our recommendation that NMFS develop a comprehensive strategy to guide the implementation of its marine recreational fisheries data collection program efforts. NOAA stated that it agrees that transitioning from a primarily research and development focused program to one that is more focused on implementing improvements to recreational fisheries data collection presents an opportunity to engage in strategic planning. Specifically, NOAA stated it will work with its regional stakeholders over the next year to develop MRIP implementation plans that include milestones, timelines, performance metrics, and resource needs. 
In addition, NOAA stated that a new National Research Council review of its recreational fisheries data collection program will help to inform its strategic planning effort. NOAA also provided three general comments. First, NOAA stated that our report disproportionally included interviewees from the Gulf Coast, which may weigh the report’s conclusions differently than if other regions were more fully represented. As noted in our scope and methodology appendix (app. I), we selected federal and state agencies and regional organizations to interview based on such factors as geographic representation and locations of large volumes of recreational fishing. According to NMFS statistics, the largest volumes of recreational fishing are in the Gulf of Mexico. As a result, we believe that our selection of agencies and organizations, while not nationally representative, nevertheless provides an appropriate set of perspectives on recreational fisheries management. Second, NOAA stated that it interpreted our statement that we did not conduct a technical evaluation to mean that we are suggesting that a technical evaluation is needed to determine whether NMFS has appropriately prioritized its recreational fisheries data collection challenges. We did not conduct a technical evaluation because it was not within the scope of our review, and it was not our intent to suggest that a technical evaluation is needed. Third, NOAA stated that, while the report identifies several unaddressed recreational fisheries data collection challenges, it does not mention that the challenges require funding levels above the current MRIP budget. Addressing whether NMFS funding levels are sufficient to address the data collection challenges it faces was not within the scope of our review. We do, however, note in our report the importance of making trade-offs in addressing challenges and allocating resources. NOAA also provided technical comments, which we incorporated as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Commerce, the NOAA Assistant Administrator for Fisheries, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to examine (1) the challenges that have been identified with the National Marine Fisheries Service’s (NMFS) data collection efforts for managing marine recreational fisheries and (2) the steps NMFS has taken to improve data collection and challenges that remain. To conduct our work, we reviewed and analyzed relevant laws, agency policies, guidance, and other documentation related to fisheries data collection, including documentation related to specific federal and state marine recreational fisheries data collection projects. We also reviewed previous GAO work related to fisheries management. To determine the challenges that have been identified with NMFS’ data collection efforts, we first reviewed reports and evaluations of NMFS’ data collection programs issued since 2006 from the National Research Council, the Department of Commerce Inspector General, NMFS, states, and independent consultants and assessed the extent to which they discussed data collection challenges. Of these reports, we relied primarily on the findings of the National Research Council and NMFS to identify data collection challenges. 
To obtain insights into the challenges identified in these documents, as well as to obtain information on any additional challenges, we interviewed officials from NMFS headquarters and three of NMFS’ six regional Fisheries Science Centers (Northeast, Northwest, and Southeast); representatives of three of the eight Regional Fishery Management Councils (Gulf of Mexico, Pacific, and South Atlantic) and all three interstate Marine Fisheries Commissions (Atlantic, Gulf, and Pacific States); and officials from state fisheries agencies in Alabama, Florida, Louisiana, Mississippi, North Carolina, Rhode Island, Texas, and Washington. We selected federal and state agencies and regional organizations to interview based on such factors as geographic representation, locations of large volumes of recreational fishing, and representation from key data collection and management stakeholders. The views of representatives from the agencies and organizations we contacted are not generalizable to other agencies and organizations, but they provided various perspectives on recreational fisheries management. In addition, to obtain additional information about data collected by the recreational fishing sector and challenges associated with data collection, as well as to obtain views on recreational fisheries data collection generally, we interviewed 22 nongovernmental marine recreational fisheries stakeholders. Of these stakeholders, 17 had expressed interest in, or concerns about, NMFS’ recreational fisheries data collection to congressional staff. These stakeholders added to the geographic variation and the recreational fishing sectors represented in our review, but their views do not represent the views of NMFS stakeholders generally. To supplement views on recreational fisheries data collection, we interviewed 5 additional stakeholders, including 4 stakeholders identified by NMFS and 1 stakeholder we identified through our previous work on fisheries management. 
The 22 stakeholders we interviewed included charter boat owners, private recreational anglers, members of academia, and advocacy groups, among others, and represented various geographic locations and different recreational fishing sectors. The NMFS statistical surveys used to collect data for managing recreational fisheries cover a wide range of methods, apply to a wide diversity of locations, and often require in-depth technical knowledge about fisheries data collection. For these reasons, we did not conduct a technical evaluation of these challenges or assess their technical validity. To determine the steps NMFS has taken to improve data collection and challenges that remain, we conducted interviews as described above and reviewed NMFS’ reports and other documents. Specifically, we reviewed NMFS’ strategic plans, recreational fisheries planning documents, and recreational fisheries data collection program documents. We compared this information with the framework of leading practices in federal strategic planning contained in the Government Performance and Results Act of 1993, the Government Performance and Results Act Modernization Act of 2010, and Office of Management and Budget guidance. We also compared this information to key practices related to communication we identified in previous reports. Consistent with our approach to the previous objective, we did not conduct a technical evaluation of NMFS’ steps to improve data collection or assess the appropriateness of those steps in light of the challenges NMFS faces. We conducted this performance audit from July 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Gaty (Assistant Director), Steve Secrist (Assistant Director), Leo Acosta (Analyst-in-Charge), Mark Braza, Joseph Capuano, Elizabeth Curda, John Delicath, Richard Johnson, Jerry Leverich, Jeanette Soares, and Sara Sullivan made contributions to this report.
Almost 11 million anglers made nearly 71 million marine recreational fishing trips in the continental United States in 2013. Pressure on many fish stocks from fishing has increased demand for quality and timely data that can be used to assess the status of various fish stocks as part of managing marine recreational fisheries. The many modes of marine recreational fishing—in which anglers fish from private boats or boats with guides, the shoreline, private property, and public docks—make collecting the data needed to effectively manage recreational fisheries both complex and challenging. GAO was asked to review NMFS' marine recreational fisheries data collection program. This report examines (1) challenges that have been identified with the agency's data collection efforts for managing marine recreational fisheries and (2) steps the agency has taken to improve data collection and challenges that remain. GAO reviewed laws, policies, and guidance related to federal and state recreational fisheries data collection methods; reviewed NMFS and other documents on recreational fisheries data collection; and interviewed a nongeneralizable sample of federal and state recreational fisheries officials and other stakeholders, selected to provide geographic representation, among other things, to obtain their views on NMFS' data collection efforts. The National Marine Fisheries Service (NMFS) within the Department of Commerce faces several challenges related to fisheries data collection, according to reports GAO reviewed and NMFS officials and stakeholders GAO interviewed. These challenges include collecting quality recreational fishing data that are timely for managing marine recreational fisheries and communicating with stakeholders. Regarding the collection of quality data, for example, NMFS faces a challenge identifying the universe of anglers from which to collect information about their marine recreational fishing activity. 
NMFS relies in part on state registries to identify anglers, but some states exempt certain anglers from registering, and therefore NMFS does not have a complete list of recreational anglers. NMFS officials and other stakeholders have also identified challenges in communicating with stakeholders about the collection of recreational fisheries data. For example, several stakeholders told GAO that NMFS has not always communicated with the public about its activities, creating concerns about a lack of transparency regarding NMFS' fisheries management decisions. Reflecting this challenge, in 2014, Louisiana withdrew from the federal fisheries data collection program and implemented its own program because of concerns about federal recreational fisheries data, according to a Louisiana fisheries official. NMFS has taken several steps aimed at improving data collection to manage marine recreational fisheries and addressing challenges related to communicating with stakeholders. For example, to help improve the quality of the state data it relies on to identify the universe of anglers, NMFS made recommendations to states on improving their recreational angler databases and provided funds to the states to support data quality improvement projects, according to NMFS documents. NMFS has also taken steps to improve communication, including working with Louisiana to perform a side-by-side comparison of federal data with Louisiana's data to determine whether Louisiana's data can be used as an alternative to federal data. However, some challenges persist, including challenges in validating data that NMFS collects and communicating about upcoming NMFS initiatives. More broadly, the agency does not have a comprehensive strategy to guide its efforts to improve recreational fisheries data collection. 
Such a strategy is consistent with the framework of leading practices in federal strategic planning, as described in the Government Performance and Results Act Modernization Act of 2010, Office of Management and Budget guidance, and practices GAO has identified. Based on GAO's discussions with NMFS officials and review of NMFS documents, the agency has not developed a comprehensive strategy because it has been focused on other priorities such as improving its data collection methods. NMFS officials told GAO that NMFS recognizes the need to enhance its strategic planning but did not provide information about how, or whether, they plan to develop a comprehensive strategy. Without a comprehensive strategy that articulates NMFS' goals to improve data collection and methods for measuring progress toward the goals, NMFS may have difficulty ensuring that the various steps it is taking to improve data collection are prioritized so that the most important steps are undertaken first, and it may find it difficult to determine the extent to which these steps will help it address the challenges it faces. GAO recommends that NMFS develop a comprehensive strategy to guide its data collection efforts. The agency agreed with GAO's recommendation.
In our reviews of embassy staffing issues during the 1990s, we found that the Department of State and some other agencies operating overseas lacked clear criteria for staffing overseas embassies. Other reviews reached similar conclusions. In early 1999, the Accountability Review Boards that investigated the bombings of two U.S. embassies in East Africa concluded that the United States should consider adjusting the size of its embassies and consulates to reduce security vulnerabilities. Later that year, the Overseas Presence Advisory Panel (OPAP) recommended that rightsizing be a key strategy to improve security and reduce operating costs. In August 2001, President Bush announced that achieving a rightsized overseas presence was one of his 14 management priorities. The September 2001 terrorist attacks on the United States added impetus for this initiative. In May 2002, we testified before the Subcommittee on National Security, Veterans Affairs, and International Relations, House Committee on Government Reform, on a proposed framework for determining the appropriate number of staff to be assigned to a U.S. embassy. To further assess the applicability of GAO’s rightsizing framework, we selected the embassies in Dakar, Senegal; Banjul, The Gambia; and Nouakchott, Mauritania. We selected these embassies based on OMB’s questions about whether our framework can be uniformly applied at all posts, and because experts suggest that rightsizing in Africa is a significant challenge. The embassy in Dakar is a medium-sized post that provides regional support to several embassies, including those in Cape Verde, Guinea, The Gambia, Mali, Mauritania, and Sierra Leone. Embassy Dakar has about 90 direct-hire Americans and 350 local hires working in seven U.S. agencies. Embassy Banjul is a special embassy program post with 7 American direct hires and about 65 local hires. Embassy Nouakchott is also a special embassy program post with 14 American direct hires and about 42 local hires. 
Our work at the three posts in West Africa further demonstrated that our framework and corresponding questions can provide a systematic approach for assessing overseas workforce size and identifying options for rightsizing in developing countries. We identified examples of the specific security, mission, and cost issues at each post, which, when considered collectively, highlighted staffing concerns and rightsizing options. (See app. I for more details on our findings at each of the embassies.) The ability to protect personnel should be a critical factor in determining embassy staffing levels. Recurring security threats to embassies and consulates further highlight the importance of rightsizing as a tool to minimize the number of embassy employees at risk. Our security questions address a broad range of issues, including the security of embassy buildings, the use of existing secure space, and the vulnerabilities of staff to terrorist attack. Officials at the embassies in Dakar, Banjul, and Nouakchott agreed that security vulnerability should be a key concern in determining the size and composition of staffing levels at the posts and should be addressed in conjunction with the other rightsizing elements of mission and cost. Each post has undergone security upgrades since the 1998 embassy bombings to address deficiencies and ensure better security. However, until facilities are replaced as part of the long-term construction plan, most will not meet security standards. For example, many buildings at overseas posts do not meet the security setback requirement. At the Dakar post, responses to the framework’s security questions identified significant limitations in facility security and office space that likely limit the number of additional staff that could be adequately protected in the embassy compound. 
This is a significant issue for the embassy in Dakar, given its expanding regional role, projected staffing increases to accommodate visa workload and growing personnel at non-State agencies, and the fact that planned construction of a new secure embassy compound will not be completed until at least 2007. In contrast, Embassy Banjul has unused office space that could accommodate additional staff within the embassy compound. Although U.S. interests are limited in The Gambia, a staff increase could be accommodated if decision makers determine that additional staff are needed as a result of answering the framework’s questions. In Nouakchott, existing space is limited but adequate. However, officials raised concerns about the security risks associated with the expected increase in personnel on the compound. The placement and composition of staff overseas must reflect the highest priority goals of U.S. foreign policy. Questions in this section of our framework include assessing the overall justification of agency staffing levels in relation to embassy priorities and the extent to which it is necessary for each agency to maintain or change its presence in a country, given the scope of its responsibilities and its mission. Related questions include asking if each agency’s mission reinforces embassy priorities and if an agency’s mission could be pursued in other ways. Responses to the questions showed that there are key management systems for controlling and planning staffing levels currently in use at overseas posts, but they are not designed or used to systematically address these staffing, priority, and mission issues. One such management system is National Security Decision Directive-38 (NSDD-38). NSDD-38 is a long-standing directive that requires non-State agencies to seek approval from chiefs of mission for any proposed changes in staff. NSDD-38 does not, however, direct the Chief of Mission to initiate an assessment of an agency’s overall presence. 
The Overseas Presence Advisory Panel reported that the directive is not designed to enable ambassadors to make decisions on each new agency position in a coordinated, interagency plan for U.S. operations at a post. Post officials agreed that the NSDD-38 system has only limited usefulness for controlling staffing levels and achieving rightsizing objectives. Another management system is the Department of State’s Mission Performance Plan (MPP). The MPP is the primary planning document for each overseas post. State’s MPP process has been strengthened significantly to require each embassy to set its top priorities and link staffing and workload requirements to those priorities. However, the MPP does not address rightsizing as a management issue or provide full guidance to posts for assessing overall staffing levels, by agency, in relation to a post’s mission. At the three posts we visited, staffing requests were addressed in the MPPs in the context of each post’s mission performance goals; however, these documents did not address the security and cost trade-offs associated with making such staffing changes. In addition, Embassy Dakar has an increasing regional role, which is not sufficiently addressed in the MPP. Finally, the Department of State’s Overseas Staffing Model provides guidance for State in assigning its full-time American direct hire staff to posts, but it does not include comprehensive guidance on linking staffing levels to security, workload requirements, cost, and other elements of rightsizing. It also does not provide guidance on staffing levels for foreign service nationals or for other agencies at a post. Relying on these varied methods to address staffing and other key resource requirements is not effective for planning or controlling growth. The Deputy Chief of Mission at Embassy Dakar agreed, noting that this approach has resulted in growth beyond the post’s capacity. 
Specifically, the Department of State has added at least seven American direct-hire positions to the post, and non-State agencies operating in Dakar have added another six positions over the last year. In addition, post officials project more increases in personnel by fiscal year 2004 to accommodate other agencies interested in working out of Dakar. Post officials agreed that a more systematic and comprehensive approach might improve the post’s ability to plan for and control growth. Responses to the framework’s questions by Banjul and Dakar consular officers also indicated that they could further explore processing all nonimmigrant visas from the Dakar post, particularly since Dakar has done so in the past on a temporary basis. Neither post’s MPP discussed the possibility of covering these functions on a regional basis from Dakar, yet doing so would relieve Banjul’s consular officer from processing nonimmigrant visas, thereby allowing more time for political and economic reporting. Thus, the post might not need to request a junior officer to handle such reporting. However, Banjul post officials said this arrangement would not be feasible for a variety of reasons. Nevertheless, their assessment illustrates the importance of weighing the benefits and trade-offs of exercising rightsizing options. Officials at both posts also agreed that applying the rightsizing questions, as part of the post’s annual MPP process, would result in an improved and more systematic approach for addressing rightsizing issues. The cost section of our framework includes questions that involve developing and consolidating cost information from all agencies at a particular embassy to permit cost-based decision-making. Without comprehensive cost data, decision makers cannot determine the correlation between costs and the work being performed, nor can they assess the short- and long-term costs associated with feasible business alternatives.
At all of the posts, we found there was no mechanism to provide the ambassador or other decision makers with comprehensive data on State’s and other agencies’ cost of operations. For example, complete budget data that reflect the cost of employee salaries and benefits and certain information management expenses for each agency at post were not available. Further, we found that embassy profile reports maintained by State’s Bureau of Administration contained incomplete and inaccurate information for each embassy’s funding levels and sources. Officials at each post agreed that it is difficult to discern overall costs because data are incomplete and fragmented across funding sources, thereby making it difficult for decision makers to justify staffing levels in relation to overall post costs. In view of Embassy Dakar’s plans to expand its regional responsibilities, embassy officials said it would be beneficial to document and justify the cost effectiveness of providing support to posts in the region. The type of support can be substantial and can have significant implications for planning future staffing and other resource requirements. For example, Embassy Nouakchott relies heavily on Embassy Dakar for budget and fiscal support, security engineering, public affairs, medical/medevac services, and procurement/purchasing, in addition to temporary warehousing for certain goods. OMB and the Department of State recognize that lack of cost-based decision-making is a long-standing problem. As part of the President’s Management Agenda, they are working to better identify the full operating costs at individual posts and improve cost accounting mechanisms for overseas presence. Our work demonstrates that responses to our questions could be used to identify and exercise rightsizing actions and options, such as adjusting staffing requirements, competitively sourcing certain commercial goods and services, and streamlining warehousing operations. 
Examples of identifying and exercising rightsizing options include the following: Embassy space and security limitations in Dakar suggest that planned increases in staff levels may not be feasible. If Embassy Dakar used our framework to complete a full and comprehensive analysis of its regional capabilities, in conjunction with analyses of mission priorities and requirements of other embassies in West Africa, then staffing levels could be adjusted at some of the posts in the region. One rightsizing option includes having Embassy Banjul’s visa services handled from Dakar. The general services officers at the Dakar and Banjul posts agreed that our framework could be used to identify competitive sourcing opportunities in their locations. One rightsizing option includes assessing the feasibility of competitively sourcing the work of currently employed painters, upholsterers, electricians, and others to yield cost savings and reduce staff requirements. This could have a particularly significant impact at the Dakar post, which employs more than 70 staff who are working in these types of positions. The Dakar and Banjul embassies operate substantial warehousing and maintenance complexes. Post officials said that operations and staffing requirements at these government-owned facilities could be potentially streamlined in a number of areas. The Department of State and other agencies maintain separate nonexpendable properties, such as furniture and appliances in Dakar, while the Department of State and Peace Corps maintain their own warehouses in the same compound in Banjul. Department of State logistics managers and post general services personnel agree that pooling such items could potentially reduce overall inventories, costs, and staffing requirements. Relocating staff, competitively sourcing goods and services, and other rightsizing options should be based on a full feasibility and cost analysis, and thus we are not recommending them in this report. 
However, such rightsizing options deserve consideration, particularly in view of Embassy Dakar’s concerns about how to manage anticipated increasing regionalization, the general security threats to embassies around the world, and the President’s Management Agenda’s emphasis on reducing costs of overseas operations. The need for a systematic approach to rightsizing the U.S. overseas presence has been a recurring theme in developing our framework. We have noted that the criteria for assigning staff to individual overseas posts vary significantly by agency and that agencies do not fully and collectively consider embassy security, mission priorities, and workload requirements. At the three embassies we visited in West Africa, we found that rightsizing issues have not been systematically assessed as part of the embassy management and planning process. However, the Department of State has taken several steps that help lay the groundwork for such a process by refining its overseas post MPP guidance. That guidance, applicable to posts in all countries, was recently strengthened and now directs each embassy to set five top priorities and link staffing and workload requirements to fulfilling those priorities. Chiefs of Mission also certify that the performance goals in their MPPs accurately reflect the highest priorities of their embassies. This is consistent with questions in our framework addressing program priorities. The guidance does not, however, identify rightsizing as a management goal or explicitly discuss how rightsizing issues of security, mission, cost, and options should be addressed. For example, it does not ask embassies to formally consider the extent to which it is necessary for each agency to maintain its current presence in country, or to consider relocation to the United States or regional centers, given the scope of each embassy’s responsibilities and mission.
Officials at the posts in West Africa generally agreed that applying the framework and corresponding questions could result in an improved and more systematic approach to rightsizing. They agreed that the framework can be adjusted to consider emerging rightsizing issues and staffing conditions. For example, at Embassy Dakar, the regional security officer suggested including a question addressing the capacity of the host country police, military, and intelligence services as part of the physical and technical security section. Other officials suggested including a question regarding the extent to which health conditions in the host country might limit the number of employees that should be assigned to a post. Officials in the Department of State’s Bureau of African Affairs generally agreed that applying our questions provides a logical basis for systematically addressing rightsizing issues. They agreed it is important that the Department of State and other agencies consider staffing issues based on a common set of criteria, for both existing embassies and future facilities. Officials in the Department of State’s Bureau of East Asian and Pacific Affairs and the Bureau of Near Eastern Affairs also agreed that the security, mission, cost, and option elements of the framework provide a logical basis for planning and making rightsizing decisions. They also believed that rightsizing analyses would be most effective if the framework were adopted as a part of the Department of State’s MPP process. Our rightsizing framework and its corresponding questions can be applied to embassies in developing countries and help decision makers collectively focus on security, mission, and cost trade-offs associated with staffing levels and rightsizing options. The rightsizing questions systematically provide embassy and agency decision makers a common set of criteria and a logical approach for coordinating and determining staffing levels at U.S. diplomatic posts. 
We recognize that the framework and its questions are a starting point and that modification of the questions may be considered in future planning, as appropriate. The Department of State’s MPP process has been strengthened and addresses some of the rightsizing questions in our framework. In particular, it better addresses embassy priorities, a key factor in our rightsizing framework. However, the mission planning process neither specifically addresses embassy rightsizing as a policy or critical management issue nor calls for assessments of related security and cost issues affecting all agencies operating at overseas posts. In keeping with the administration’s rightsizing initiative, we are recommending that the Director of OMB, in coordination with the Secretary of State, ensure that application of our framework be expanded as a basis for assessing staffing levels at embassies and consulates worldwide; and the Secretary of State adopt the framework as part of the embassy Mission Performance Planning process to ensure participation of all agencies at posts and the use of comparable criteria to address security, mission, cost issues, and rightsizing options. OMB and the Department of State provided written comments on a draft of this report (see apps. III and IV). OMB said that it agrees with our findings and recommendations and stated that our framework may serve as a valuable base for the development of a broader methodology that can be applied worldwide. OMB agreed that security, mission, and cost are key elements to consider in making rightsizing decisions. In addition, OMB noted that workload requirements, options for information technology, regionalization possibilities, and competitive sourcing opportunities should be considered in order to adapt the methodology to fit each post. The Department of State generally agreed with our recommendations and said that it welcomed GAO’s work on developing a rightsizing framework.
The Department of State said that the rightsizing questions provide a good foundation for it to proceed in working with OMB and other agencies to improve the process for determining overseas staffing levels. The Department of State noted that some elements of the framework are already being undertaken and that it plans to incorporate additional elements of our rightsizing questions into its future planning processes, including the MPP. Department of State comments are reprinted in appendix IV. The Department of State also provided technical comments, which we have incorporated into the report where appropriate. To determine the extent to which our framework’s questions are applicable in developing regions, we visited three West African embassies—Dakar, Senegal; Banjul, The Gambia; and Nouakchott, Mauritania. At all posts, we spoke with regional security officers, in addition to ambassadors and other post officials, regarding the security status of their embassies and related security concerns. At all locations, we reviewed the applicability of the mission priorities and requirements section of the framework by asking the ambassadors, deputy chiefs of mission, administrative officers, consular officers, and general services officers to answer key questions in that section. To assess the usefulness of the cost section, we spoke with the same officers, in addition to Embassy Dakar’s financial management officer who provides regional support to both Banjul and Nouakchott. We also discussed with key officials whether opportunities exist to exercise certain rightsizing options such as competitively sourcing post goods and services or streamlining embassy functions that are commercial in nature. In addition, we interviewed Bureau of African Affairs executive officers, officials in the Bureau of Diplomatic Security in Washington, D.C., and the heads of key agencies operating in each country. Specifically, in Dakar we interviewed the Director and Deputy Director of the U.S. 
Agency for International Development (USAID) and the U.S. Treasury representative. In Banjul and Nouakchott, we interviewed the Directors of Peace Corps. We also met with officials in the executive offices of the Department of State’s Bureau of East Asian and Pacific Affairs and the Bureau of Near Eastern Affairs to determine the applicability of the framework in those regions. We conducted our work from October 2002 through January 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested members of Congress. We are also sending copies of this report to the Director of OMB and the Secretary of State. We also will make copies available to others upon request. In addition, the report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-4128 or John Brummet on (202) 512-5260. In addition to the persons named above, Janey Cohen, Lynn Moore, Ann M. Ulrich, and Joseph Zamoyta made key contributions to this report. This appendix provides detailed information on the responses to the rightsizing questions in our framework at the embassies in Dakar, Senegal; Banjul, The Gambia; and Nouakchott, Mauritania. Specific rightsizing issues, actions, and options for consideration are highlighted. Prior to the 1998 embassy bombings in East Africa, U.S. diplomatic facilities in Dakar had serious physical security vulnerabilities, including insufficient setbacks at most office buildings, including the chancery. Since 1998, many steps have been taken to ensure better security throughout the post. Important steps included (1) the relocation of the U.S. 
Agency for International Development (USAID) to a more secure location, (2) host-country cooperation for embassy-only traffic on the four streets surrounding the embassy’s main building, (3) the renovation and expansion of a more secure “waiting facility” for the consular affairs section, and (4) an increase in surveillance and detection units for the entire compound and employee residences. Although security at the Dakar post is now characterized as “good” for the current number of personnel, embassy officials cautioned that actions by Senegalese authorities to close off streets adjacent to the embassy are temporary measures that could be reversed at any time. In addition, the office space in the chancery can only accommodate a slight increase in personnel. Officials said that adding personnel to the post would aggravate certain security concerns. Embassy Dakar has increasing regional responsibilities, and there are significant pressures to assign more personnel to Dakar—a situation that has been exacerbated by the recently ordered departure status at the U.S. embassy in Abidjan, Cote d’Ivoire. The Dakar post now has about 90 American direct-hire personnel and 350 local hires. Staff projections over the next two fiscal years indicate an increase in staffing at the embassy for additional agencies, such as the Centers for Disease Control and Prevention and the Departments of Agriculture and Homeland Security, and the possible transfer of Foreign Commercial Service employees from the embassy in Abidjan. In addition, the Dakar consular section will be increasing its consular officers for visa purposes from two to four and may need additional staff in the future. As a result of increasing regional responsibilities and more personnel, Embassy Dakar may require additional Department of State support personnel as well.
In spite of Dakar’s increasing regional role and responsibilities, the post has difficulty attracting and retaining experienced foreign service officers. Embassy officials indicated that senior foreign service officers perceive the post as having a relatively high cost of living, a low pay differential, and no available consumables. Hence, many key positions are filled with inexperienced junior staff, placing constraints on some offices in carrying out their mission. Comprehensive information was not available to identify the total annual operating costs for Embassy Dakar or for each agency at the post. Cost data were incomplete and fragmented. For example, embassy budget personnel estimated operating costs of at least $7.7 million, not including American employee salaries or allowances. Available Bureau of African Affairs budget data for the post estimated fiscal year 2003 operating costs of at least $6 million, including State’s public diplomacy costs, post administered costs, and International Cooperative Administrative Support Services expenses, but these costs did not reflect the salaries and benefits of Department of State and other U.S. agency American employees and the State bureau allotments, such as for diplomatic security. If all costs were included in a comprehensive budget, the total annual operating costs at the post would be significantly higher than both estimates. Post and Bureau officials agreed that fragmented and incomplete cost data make it difficult for them to systematically and collectively approach rightsizing initiatives and consider the relative cost-effectiveness of rightsizing options. Responses to the framework’s questions regarding rightsizing actions and other options at Embassy Dakar highlighted the impact of security conditions on anticipated staffing increases and the need to define and document the embassy’s growing regional responsibilities as part of the MPP process. 
They also highlighted potential opportunities for competitively sourcing certain embassy services to the private sector, as well as opportunities for streamlining warehouse operations. Embassy officials are reluctant to purchase commercial goods and services from the local economy due to quality and reliability concerns, and thus they employ a large number of direct-hire personnel to maintain and provide all post goods and services. If goods and services were competitively sourced to the local economy, the number of direct hires and costs could possibly be reduced. Opportunities also exist for streamlining Embassy Dakar’s warehousing operations, which could yield cost savings. The left box of figure 1 summarizes the main rightsizing issues that were raised at Embassy Dakar in response to the framework’s questions. The box on the right side identifies possible corresponding rightsizing actions and other options post decision makers could consider when collectively assessing their rightsizing issues. Officials at the post in Banjul characterized the compound as having good physical security and enough office space to accommodate additional staff. The post chancery compound is a “lock-and-leave” facility, as it does not have the 24-hour presence of U.S. government personnel. There are two leased vacant residential houses located directly behind the chancery building but separated from the chancery by a dividing wall. Embassy officials in Banjul have proposed buying the houses but explained that it is difficult to justify the cost because the purchase would put the embassy over its allotted number of homes (i.e., giving it nine homes for seven personnel). Some officials have suggested that the houses could be used for temporary duty personnel working at the post. During our work, visiting officials from the Immigration and Naturalization Service were using one of the houses to conduct political asylum visa interviews. Usually, however, the houses are vacant. 
According to the ambassador and the regional security officer, if the vacant houses were to be leased by nonembassy tenants, the chancery’s physical security would be seriously compromised. In addition, the regional security officer expressed concerns regarding the training and quality of the security contractor, particularly because the post does not have a Marine detachment to back up the security guards. Much of Embassy Banjul’s resources are devoted to supporting internal post operations instead of focusing on external goals, such as political reporting and public diplomacy. For example, more than 60 local hires carry out facilities maintenance and other post support functions while only 3 of the 7 American direct-hire personnel address the post’s 3 main program goals in The Gambia—namely, reinforcing democracy, increasing economic prosperity, and improving the population’s health. Since the consular officer is also responsible for political and economic reporting, the post recently requested one junior officer rotational position to help balance the duties in all three areas. Over the past 2 years the number of nonimmigrant visa applications in Banjul more than doubled—from 1,712 applications in March 2000 to 4,635 applications in September 2002—while the percentage of refused applications decreased from a high of 65 percent in September 2000 to a low of 38 percent in September 2002. Post officials said that the lack of a full-time consular officer may impede the post’s ability to focus on preventing fraudulent visa applications. The post has also requested one dual-purpose local employee to back up its growing public diplomacy and security assistance portfolios. Banjul’s primary post planning document, the MPP, did not include comprehensive data on the total cost of operations. The Bureau of African Affairs’ budget for the post estimated total costs of at least $1.7 million for fiscal year 2003. 
However, these estimates did not include American salaries and other expenses, such as State Bureau allotments. The left box of figure 2 summarizes the main rightsizing issues that were raised at Embassy Banjul in response to the framework’s questions. The box on the right identifies corresponding rightsizing actions and other options post decision makers could consider when collectively assessing their rightsizing issues. Embassy Nouakchott officials characterize the post compound as having good physical security, which has been upgraded since 1998. However, the chancery does not meet security setback requirements, and compound facilities have security deficiencies. Answering the framework’s questions regarding physical security did not indicate a need to change the number of staff based on existing security conditions at the embassy office buildings. However, embassy officials said that the questions helped highlight the need to consider the security risks and trade-offs associated with expected increases in the number of personnel at post. When asked specific questions regarding mission priorities and requirements, Embassy Nouakchott officials told us that the post has an adequate number of personnel to meet current mission requirements and priorities but that there are generally few bidders for positions at the post. The Ambassador and Deputy Chief of Mission emphasized that an increase or decrease of one employee greatly affects how the post accomplishes its mission—more so than at a larger post, such as Dakar. For example, the Regional Security Officer position is vacant and is being covered on a temporary duty basis by Dakar’s Assistant Regional Security Officer. Also, the post currently has no positions for political and public diplomacy officers. One officer may be assigned to multiple positions owing to limited demand for certain services. 
For example, the Consular Officer at Embassy Nouakchott is also responsible for the duties of a commercial/economic officer. However, the post hopes to add one full-time officer for political and human rights reporting, according to the post’s MPP. Operating costs for the Nouakchott post are not fully documented in the MPP or used to justify staffing levels. Embassy Nouakchott officials roughly estimated total operating costs of about $4 million for fiscal year 2003. The Bureau of African Affairs’ budget for the post estimated partial operating costs of only $2.1 million annually, but the estimate did not include American salaries, diplomatic security, and other costs. The left box of figure 3 summarizes the main rightsizing issues that were raised at Embassy Nouakchott in response to the framework’s questions. The box on the right side identifies corresponding rightsizing actions and other options post decision makers could consider when collectively assessing their rightsizing issues.

Physical and technical security
Is existing space being optimally utilized?
Have all practical options for improving the security of facilities been considered?
Do issues involving facility security put the staff at an unacceptable level of risk or limit mission accomplishment?
What is the capacity level of the host country police, military, and intelligence services?
Do security vulnerabilities suggest the need to reduce or relocate staff?
Do health conditions in the host country pose personal security concerns that limit the number of employees that should be assigned to the post?

Mission priorities and requirements
What are the staffing levels and mission of each agency?
How do agencies determine embassy staffing levels?
Is there an adequate justification for the number of employees at each agency compared with the agency’s mission?
Is there adequate justification for the number of direct-hire personnel devoted to support and administrative operations?
What are the priorities of the embassy?
Does each agency’s mission reinforce embassy priorities?
To what extent are mission priorities not being sufficiently addressed due to staffing limitations or other impediments?
To what extent are workload requirements validated and prioritized, and is the embassy able to balance them with core functions?
Do the activities of any agencies overlap?
Given embassy priorities and the staffing profile, are increases in the number of existing staff or additional agency representation warranted?
To what extent is it necessary for each agency to maintain its current presence in country, given the scope of its responsibilities and its mission?
Could an agency’s mission be pursued in other ways?
Does an agency have regional responsibilities, or is its mission entirely focused on the host country?

Cost of operations
What is the embassy’s total annual operating cost?
What are the operating costs for each agency at the embassy?
To what extent are agencies considering the full cost of operations in making staffing decisions?
To what extent are costs commensurate with overall embassy strategic importance, with agency programs, and with specific goals?

Consideration of rightsizing options
What are the security, mission, and cost implications of relocating certain functions to the United States, regional centers, or to other locations, such as commercial space or host country counterpart agencies?
To what extent could agency program and/or routine administrative functions (procurement, logistics, and financial management functions) be handled from a regional center or other locations?
Do new technologies and transportation links offer greater opportunities for operational support from other locations?
Do the host country and regional environments suggest there are options for doing business differently, that is, are there adequate transportation and communications links and a vibrant private sector?
To what extent is it practical to purchase embassy services from the private sector?
Does the ratio of support staff to program staff at the embassy suggest opportunities for streamlining?
Can functions be reengineered to provide greater efficiencies and reduce requirements for personnel?
Are there best practices of other bilateral embassies or private corporations that could be adapted by the U.S. embassy?
To what extent are there U.S. or host country legal, policy, or procedural obstacles that may impact the feasibility of rightsizing options? (We added this question based on the suggestion of officials at the Office of Management and Budget.)

The following are GAO’s comments on the Department of State’s letter dated February 25, 2003.
1. We did not set priorities for the elements in the framework that appear in this report. Moreover, we believe that decision makers need to consider security, mission, and cost collectively in order to weigh the trade-offs associated with staffing levels and rightsizing options.
2. We did not imply that there is a problem of exploding growth in overseas staffing levels that needs to be reined in. Our statement that there is a need for a systematic process to determine overseas staffing levels (i.e., rightsizing) was made on the basis that the elements of security, mission, cost, and other rightsizing options are not collectively addressed in a formal process to determine staffing levels at overseas posts. On page 1 of the report, we state that rightsizing may result in the addition, reduction, or change in the mix of staff.
3. We modified our report on page 7 to discuss the Overseas Staffing Model.
4. We modified our report on pages 6-7 to more accurately describe the National Security Decision Directive-38.
5.
International Cooperative Administrative Support Services (ICASS) is only one component of a post’s total overseas costs and includes the costs of common administrative support, such as motor pool operations, vehicle maintenance, travel services, mail and messenger services, building operations, information management, and other administrative services. However, this component does not cover all employee salaries and benefits, all housing, office furnishings and equipment, diplomatic security, representation, miscellaneous expenses, and other costs for all agencies operating at a post. Total costs associated with each post need to be considered when overseas staffing decisions are made.
Since the mid-1990s, GAO has highlighted the need for the Department of State and other agencies to establish a systematic process for determining their overseas staffing levels. To support this long-standing need and in support of the President's Management Agenda, GAO developed a framework for assessing overseas workforce size and identified options for rightsizing. Because the framework was largely based on work at the U.S. embassy in Paris, GAO was asked to determine whether the rightsizing framework is applicable at U.S. embassies in developing countries. To accomplish this objective, we visited three U.S. embassies in West Africa—a medium-sized post in Dakar, Senegal; and two small embassies in Banjul, The Gambia; and Nouakchott, Mauritania—and applied the framework and its corresponding questions there. GAO's rightsizing framework can be applied at U.S. embassies in developing countries. Officials from the Bureau of African Affairs, and U.S. embassy officials in Dakar, Senegal; Banjul, The Gambia; and Nouakchott, Mauritania, said that the framework's questions highlighted specific issues at each post that should be considered in determining staffing levels. Officials in other State bureaus also believed that the security, mission, cost, and option components of the framework provided a logical basis for planning and making rightsizing decisions. At each of the posts GAO visited, application of the framework and corresponding questions generally highlighted (1) physical and technical security deficiencies that needed to be weighed against proposed staff increases; (2) mission priorities and requirements that are not fully documented or justified in the posts' Mission Performance Plans; (3) cost of operations data that were unavailable, incomplete, or fragmented across funding sources; and (4) rightsizing actions and other options that post managers should consider for adjusting the number of personnel.
|
The kidneys are the body’s filtration system that removes waste and extra fluid from the blood. Further, the kidneys maintain electrolyte stability, help to control blood pressure, and produce hormones to keep the body and blood healthy. Kidney disease occurs when the kidneys become damaged and can no longer filter blood as they should, often due to diabetes or high blood pressure—the most common causes of kidney disease. For most people, kidney disease unfolds slowly over many years and often has no signs or symptoms until the disease is very advanced; indeed, fewer than 15 percent of people in the late stages of kidney disease are aware of their disease. However, early detection is possible through blood and urine tests, which can delay or prevent the progression of kidney disease. Treatment may include, for example, taking medicines to manage high blood pressure to protect the kidneys. However, even with treatment, kidney disease usually cannot be cured. Instead, it may get worse over time, leading to kidney failure, or ESRD, which may lead to death without dialysis treatment or a kidney transplant. While anyone can develop kidney disease, regardless of age or race, African Americans, Hispanics, and Native Americans are at high risk for ESRD, due, in part, to high rates of diabetes and high blood pressure in these communities. Medicare provides health coverage for most individuals with ESRD, regardless of their age. Medicare spending on treatment for individuals with ESRD has almost doubled in recent years— from $16.2 billion in 2003 to $30.9 billion in 2013—as the number of Medicare beneficiaries with this condition and annual Medicare spending per person have increased. NIH—which had a total budget of $30 billion in fiscal year 2015—comprises the Office of the Director and 27 institutes and centers (IC) that focus on specific diseases, particular organs, or stages in life, such as childhood or old age. 
Twenty-four of the 27 ICs receive specific appropriations to support, plan, and manage their research programs. Within NIDDK, the Division of Kidney, Urologic, and Hematologic Diseases researches diseases of the kidney and also focuses on the fields of urology and hematology. Specifically, the division’s areas of kidney research include chronic kidney disease, ESRD, cystic kidney disease, acute kidney injury, and kidney donation. NIDDK’s budget in fiscal year 2015 was $1.75 billion, of which $430 million (25 percent) was allocated to the division. NIDDK and the other ICs accomplish their missions primarily through extramural research conducted by scientists and research personnel working at universities, medical schools, and other research institutions. Most extramural research funding is provided for investigator-initiated research projects for which researchers submit applications in response to broad funding opportunity announcements that span the breadth of NIH’s mission. In addition to the broad investigator research announcements, ICs issue more narrowly scoped solicitations for research targeting specific areas. All extramural research project applications follow NIH’s process of peer review, which was established by law and includes two sequential levels of review. The first level involves panels of non-governmental experts who assess the scientific merit of the proposed science; the second level involves panels of non-governmental experts and leaders of non-science fields, including patient advocates, that, in addition to scientific merit, consider the IC’s mission and strategic plan goals, public health needs, scientific opportunities, and the balance of the IC’s research across its various divisions and centers. In January 2007, Congress directed NIH to establish an electronic system to categorize the research grants and activities of the Office of the Director and the ICs. 
In response, NIH implemented RCDC in February 2008, which reports on the amount of NIH funding in a given fiscal year associated with one or more categories of diseases, conditions, or research areas. RCDC reports publicly on 265 of these categories. To assign an NIH project to the appropriate categories, RCDC uses a computer-based text-mining tool that recognizes words and phrases in project descriptions. Projects may fall into one or more RCDC categories. For example, a study on how diabetes leads to kidney disease would be listed in the “diabetes” and “kidney disease” categories. The system includes reporting tools that can be used to generate publicly available, web-based reports on total funding amounts for the research projects related to each RCDC category. NIH funding for biomedical research related to kidney disease totaled approximately $564 million for 1,493 projects in fiscal year 2015—an increase of 2.7 percent from fiscal year 2014. NIDDK provided the majority (60 percent) of this funding; other ICs provided the remaining 40 percent of funding. In NIDDK, the average research project award was about $345,000; awards ranged from approximately $27,120 for the smallest to $28.5 million for the largest. (See fig. 1.) The kidney disease projects funded by each IC reflect their different missions. As the lead NIH institute for kidney disease, NIDDK funds a broad kidney disease research portfolio, while the other ICs fund kidney disease research in more specific areas that relate to their missions. For example, one component of the National Heart, Lung, and Blood Institute’s mission is heart health, and as noted earlier, heart disease can be a cause of kidney disease. Therefore, that institute funds research that examines how kidney disease impacts cardiovascular health. 
Similarly, the National Institute of Allergy and Infectious Diseases fulfills its mission—to study immunology and infectious diseases—through kidney disease research that primarily addresses how a patient’s immune system responds to a kidney transplant, as well as the negative impacts of chronic autoimmune diseases on long-term kidney health. Although NIH is the primary federal agency involved in biomedical research on kidney disease, there are other federal agencies that conduct and fund research in this area. Appendix II describes these agencies’ kidney disease research efforts. To provide context for the level of NIH research funding for kidney disease, we also analyzed NIH funding levels for other leading diseases and conditions in the United States—those that had high mortality, were among the most prevalent chronic conditions, or both. The RCDC categories corresponding to these leading diseases and conditions, which are shown in table 1, are neither mutually exclusive nor exhaustive. Categories do not exist for all diseases and conditions, and NIH officials said that a project may be included in, on average, six RCDC categories. For example, a $1 million project on “depression in older men with diabetes” could be placed into each of four categories: (1) depression ($1 million), (2) aging ($1 million), (3) mental health ($1 million), and (4) diabetes ($1 million). Therefore, while RCDC produces a complete list of the funded projects included within a category, it is not designed to produce non-overlapping assignment of projects or fractions of projects to categories. In fiscal year 2015, NIH research funding varied across the categories corresponding to the diseases and conditions in our analysis, from $8 million for fibromyalgia to nearly $5.4 billion for cancer. (See table 1.) 
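The overlapping accounting described above can be illustrated with a brief sketch. The project descriptions, keyword lists, and dollar figures below are hypothetical, and the simple keyword match stands in for RCDC's actual text-mining tool, which uses a much richer thesaurus; the point is only that a project's full funding counts toward every matching category, so category totals cannot be summed.

```python
from collections import defaultdict

# Hypothetical projects: (description, funding in $ millions).
projects = [
    ("depression in older men with diabetes", 1.0),
    ("how diabetes leads to kidney disease", 2.0),
]

# Simplified keyword-to-category map (illustrative only).
category_keywords = {
    "depression": ["depression"],
    "aging": ["older", "aging"],
    "mental health": ["depression", "mental"],
    "diabetes": ["diabetes"],
    "kidney disease": ["kidney"],
}

totals = defaultdict(float)
for description, funding in projects:
    for category, keywords in category_keywords.items():
        # The project's FULL funding is credited to every matching
        # category, so totals across categories overlap.
        if any(kw in description for kw in keywords):
            totals[category] += funding

print(dict(totals))
```

Here the $1 million project lands in four categories and the $2 million project in two, so the category totals sum to $8 million even though only $3 million was actually spent, which is why the report cautions against adding RCDC categories together.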
The variation in research funding across the RCDC categories in table 1 reflects a range of factors, including differences in each IC’s mission, congressional appropriations, and research priorities. An IC’s appropriation sets the amount of funding available for the given fiscal year. Furthermore, appropriations may include mandated spending for a specific disease, as is the case, for example, for type I diabetes. Research priorities can affect the amount of funding devoted to the study of a particular disease. We previously reported that ICs considered a variety of factors when setting research priorities, and NIH officials confirmed that this is still the case. The factors ICs consider when setting research priorities include:
- scientific needs and opportunities—identifying those research areas that have advanced such that additional research funding could yield a breakthrough;
- gaps in funded research and investment, such as diseases that may attract limited private sector research funding;
- burden of disease—the impact of a health problem on a population—as measured by indicators such as prevalence, mortality, and impact on quality of life; and
- public health needs, such as an emerging public health threat that needs to be addressed, like the Zika virus.
NIDDK works with the broader kidney care community to develop its kidney disease research priorities by using a web-based forum, hosting a variety of meetings for kidney disease stakeholders, and assessing its research portfolio. NIDDK considers the community’s input in the context of the institute’s ongoing work and its knowledge of the current state of kidney disease research to develop funding announcements that target high-priority research areas that are not being adequately addressed. 
According to NIDDK officials, NIDDK’s process for obtaining input from the kidney care and scientific communities, and developing research priorities is iterative by design to help ensure that the institute’s priorities evolve to reflect the latest research developments and needs of the communities. (See fig. 2.) NIDDK established and maintains the Kidney Research National Dialogue (KRND)—an open, interactive, web-based forum—to obtain input from the kidney care research community. The KRND began in 2010 by allowing participants to submit, comment on, and prioritize potential kidney disease research objectives for the kidney care and research community and to be supported by NIDDK through workshops and initiatives. (See app. III for more information on the KRND.) With the help of established researchers in the kidney disease field, NIDDK staff reviewed and distilled the KRND submissions into kidney disease research priorities that address a variety of topic areas such as improving therapies for chronic kidney disease, promoting human studies to better understand kidney function, and advancing dialysis technology and research. In addition to the KRND, NIDDK officials stated that the institute obtains input for its research priorities by meeting with its advisory council, federal agencies involved in kidney disease research, scientific researchers, and private kidney care organizations. NIDDK advisory council: According to NIDDK officials, an NIDDK advisory council member with expertise in kidney disease research annually presents NIDDK officials with what he or she views, based on clinical or research experience, as the most pressing kidney disease research priorities. In addition, council members help shape the research priorities by peer reviewing research proposals. 
Federal agencies: Through the KICC, representatives of the federal agencies involved in kidney disease research meet twice per year to present and discuss information on their agencies’ respective kidney disease-related programs and activities. The KICC agencies also discuss a specific kidney disease topic at each meeting, such as improving access to kidney transplantation, chronic kidney disease awareness, and determining gaps in kidney disease research. Scientific researchers: NIDDK’s Division of Kidney, Urologic, and Hematologic Diseases hosts four to six scientific kidney disease meetings every year, according to NIDDK officials. These meetings are largely attended by scientific researchers and also by private kidney care organizations, pharmaceutical industry members, and officials from other federal agencies. Past meeting topics have focused on a variety of areas as they relate to kidney disease, including clinical trials, health information technology, and precision medicine. Kidney care organizations: Throughout the year, NIDDK meets with kidney care organizations that represent different factions of the kidney care community (e.g., patient and provider organizations, and professional societies) to discuss their kidney disease research priorities. For example, of the six private kidney care organizations we interviewed, five reported that they meet with NIDDK individually or in the company of other private organizations on at least an annual basis. In addition to the stakeholder meetings, NIDDK officials told us that they annually assess the institute’s research portfolio—including investigator research projects—to identify research gaps. NIDDK then uses the input obtained through the portfolio review, the KRND, and stakeholder meetings to develop targeted funding announcements for NIDDK research initiatives. 
For example, for one NIDDK research initiative—the Kidney Precision Medicine Project—NIDDK officials expect to issue $33 million in grant funding between fiscal years 2017 and 2021 to increase research related to acquiring and studying human kidney tissues. The lack of research on human kidney samples was highlighted in the KRND, suggested by an advisory council member at an advisory council meeting, and discussed at two scientific research meetings. Representatives from six private kidney care groups we interviewed generally agreed with NIDDK’s kidney disease research priorities as published in the KRND; however, all of the organizations’ representatives identified kidney disease topic areas that they said warranted increased emphasis by NIDDK. For instance, representatives from four of the six groups we interviewed expressed concern over a lack of kidney disease awareness in the general public. To address this, representatives from one group recommended additional research on identifying at-risk populations that would benefit from kidney disease screenings. In addition, some of the representatives noted that additional outreach and research is needed to reduce the disparities associated with ESRD. Specifically, rates of ESRD are 3.4 times higher in African Americans and 1.5 times higher in Hispanics than in whites. NIDDK officials agreed that improving kidney disease awareness and reducing kidney disease disparities were important issues, and pointed out a variety of ongoing NIDDK programs related to these topics. For example, NIDDK established the National Kidney Disease Education Program to raise awareness and reduce disparities through a variety of efforts directed at communities at high risk for kidney disease, patients, and professionals working in the primary care setting. NIDDK officials also said that NIDDK’s Kidney Sundays program is intended to address both kidney disease awareness and disparities. 
Specifically, Kidney Sundays provides kidney disease information to African Americans by raising awareness within churches about the risks of kidney disease and the importance of being tested for kidney disease. According to NIDDK officials, 134 churches across 25 states participated in Kidney Sundays in 2016. We provided a draft of this product to the Department of Health and Human Services. The department provided us with technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees and other interested parties. In addition, this report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the leading diseases and conditions for our analysis of National Institutes of Health (NIH) research funding, we identified diseases and conditions that had high mortality, were chronic conditions with high prevalence, or both. To inform our selection, we examined results from national surveys conducted by the National Center for Health Statistics, within the Centers for Disease Control and Prevention (CDC), interviewed CDC officials, and examined related GAO work. To identify diseases and conditions with high mortality, we used the CDC’s National Vital Statistics Report on mortality list for 2014, the most current data available. 
These data are based on nationwide, standardized reporting using the International Classification of Diseases, Tenth Revision (ICD-10) system, which provides the rules used to code and classify primary and contributing causes of death, as well as the selection of the underlying cause of death on death certificates. When mortality data were reported for disease subcategories, we included the disease subcategory in our analysis when it exceeded 42,773 deaths, which was the number for the 10th leading cause of death (suicide). For instance, three subcategories of cancer each caused at least 42,773 deaths in 2014, as shown in table 2. To identify a list of prevalent chronic conditions, we first identified a peer-reviewed paper authored by researchers at CDC (among other institutions) that identifies a list of 20 chronic conditions that are likely to be among the most prevalent. This paper—which we refer to as “the OASH list,” because the work was led by a working group within the Department of Health and Human Services’ Office of the Assistant Secretary of Health (OASH)—was particularly well-suited for our work due to several factors. Among these are that it includes a precise definition of chronic illnesses: “conditions that last a year or more and require ongoing medical attention and/or limit activities of daily living (such as physical medical conditions, behavioral health problems, and developmental disabilities).” In addition, the authors applied a clear methodology in identifying their list of chronic conditions from three sources: (1) the Centers for Medicare & Medicaid Services’ Chronic Condition Data Warehouse; (2) the list of “Priority Conditions” identified by the Agency for Healthcare Research and Quality’s Effective Health Care Program; and (3) the Robert Wood Johnson Foundation chart book Chronic Care: Making the Case for Ongoing Care. 
Though our list of chronic conditions was primarily based on the OASH list, we compared it with CDC’s leading causes of death list and the list of most prevalent chronic conditions that CDC provided GAO in 2013. If a condition was listed on the leading cause of death list or the 2013 list from CDC, but was not present on the OASH list, we added it to our total list of conditions to ensure that we did not omit any key chronic conditions. This process resulted in a list of 35 leading chronic diseases and conditions for which we requested prevalence estimates from CDC. We requested the most recently available prevalence estimates for the 35 conditions from one or both of two CDC surveys: the National Health Interview Survey (NHIS) or the National Health and Nutrition Examination Survey (NHANES). NHIS contains data collected through personal household interviews on a broad range of health topics. NHANES is designed to assess the health and nutritional status of adults and children in the United States, and combines interviews with clinical information—a physical examination and laboratory tests. We used NHANES as the basis for the prevalence estimate for a given condition, because NHANES includes clinical information, which would capture diagnosed and undiagnosed conditions. When NHANES data for a condition were limited to the interview (self-reported) or not available for the 2013-2014 time period, we relied on the data from NHIS. We obtained prevalence estimates for adults in the United States for 25 of the 35 conditions. We included in our analysis the 10 chronic conditions with the highest estimated prevalence rate. (See table 3.) For the diseases and conditions included in our analysis, we identified the corresponding categories from NIH’s Research, Condition, and Disease Categorization (RCDC) system, which categorizes NIH research projects (and associated funding). First, we reviewed the ICD-10 codes associated with each of these diseases and conditions. 
We then identified the RCDC categories that corresponded to the list of diseases and conditions. We confirmed the appropriateness of these matches with NIH officials, and clarified any differences in definition between the diseases and conditions, and the associated RCDC categories. Table 4 contains a crosswalk between these categories and the disease and conditions in our analysis. To assess the reliability of the data used in our analysis, we interviewed knowledgeable NIH and CDC officials, and reviewed documentation about the data sources and methods for collecting the data. We determined that the data were sufficiently reliable for the purposes of our reporting objectives. The following is a summary of biomedical research on kidney disease conducted by federal agencies outside the National Institutes of Health (NIH): specifically, agencies that are part of the Kidney Interagency Coordinating Committee (KICC), as well as the Patient-Centered Outcomes Research Institute (PCORI). For the purposes of this report, we defined biomedical research as consisting of (1) basic research, which involves laboratory studies that provide the foundation for clinical research; (2) clinical research, which includes patient-oriented research, epidemiologic and behavioral studies, and outcomes and health services research; and (3) translational research, which can involve enhancing the adoption of clinical best practices in the community. Where available, we also provide information on funding associated with these kidney disease research activities. Department of Defense (DOD). DOD supports biomedical research on kidney disease primarily through the Peer Reviewed Medical Research Program, which funds research of high scientific merit and direct relevance to military health, across a wide array of topic areas directed by Congress. 
Two of the topic areas in fiscal year 2015 were directly related to kidney disease: focal segmental glomerulosclerosis—a disease in which scar tissue develops on the parts of the kidneys that filter waste out of the blood; and polycystic kidney disease—an inherited disorder in which clusters of cysts develop primarily within the kidneys. According to DOD, the agency funded eight research projects related to kidney disease within these two topic areas in the amount of $7.1 million in fiscal year 2015. In addition, the agency funded two fiscal year 2015 kidney disease-related research projects under the topic areas of cardiovascular health and lupus in the amount of $2.8 million. Department of Health and Human Services. Agency for Healthcare Research and Quality (AHRQ). As part of its mission to improve the safety and quality of health care, AHRQ funds extramural research grants to study chronic kidney disease in areas such as patient safety and disease management, and assessment in patients with multiple chronic conditions. According to AHRQ officials, in fiscal year 2015, AHRQ provided new or ongoing funding to four kidney disease research projects in the amount of $1.3 million. Centers for Disease Control and Prevention (CDC). CDC conducts numerous epidemiologic studies to determine risk factors for the incidence and progression of chronic kidney disease, and to research the burden of the disease in both the general and specific populations. In addition, CDC’s Chronic Kidney Disease Initiative includes a website that provides information on the disease’s burden and risk factors. Lastly, CDC is also collaborating with NIDDK to investigate using new kidney disease markers to diagnose early kidney function decline. According to CDC officials, the agency obligated approximately $2 million in fiscal year 2015 for kidney disease activities, including biomedical research. Food and Drug Administration (FDA). 
FDA is currently in year three of a 5-year renewable grant to the Kidney Health Initiative (KHI). Founded in 2012, the KHI is a public-private partnership between FDA and the American Society of Nephrology. Through a collaboration with over 75 member organizations—such as patient organizations, pharmaceutical and biotechnology companies, dialysis providers, and government agencies—the KHI aims to (1) advance scientific understanding of the kidney health and patient safety implications of new and existing medical products, and (2) foster development of therapies for diseases that affect the kidneys. For instance, one of KHI’s current research projects seeks to clarify clinical trial endpoints for dialysis vascular access trials. Though FDA does not direct its grant to specific KHI projects, FDA representatives participate on KHI’s board of directors and can thereby influence research project funding decisions. According to FDA officials, in fiscal year 2015, FDA provided KHI with $500,000. Department of Veterans Affairs (VA). In fiscal year 2015, VA supported 107 kidney disease intramural research projects. For example, a past VA study found that patients who took part in a screening and education program for kidney disease before being diagnosed with the disease were better prepared to live with the disease and had significantly lower death rates than those who had not taken part in the program. In addition, VA recently issued guidelines (jointly developed with DOD) for the management of chronic kidney disease. According to VA officials, the total VA biomedical research budget in fiscal year 2015 was $589 million, of which about $20.9 million was for kidney disease research. PCORI. PCORI’s Board of Governors approved three extramural research projects related to kidney disease in fiscal year 2015 for funding totaling $14 million. 
The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) established the Kidney Research National Dialogue (KRND) to help inform its research priorities by obtaining input from the kidney care research community. The KRND consists of three phases. Phase 1 (2010-2013): KRND was an interactive, web-based forum that allowed participants to submit, comment on, and prioritize potential kidney disease research priorities. Participants were asked to categorize their ideas into 12 topic areas, such as chronic kidney disease, acute kidney injury, and end stage renal disease. According to NIDDK’s website, the KRND had over 1,600 participants from more than 30 countries. Phase 2 of the KRND (2011-2014): NIDDK invited research experts to participate in one of 12 topic-specific working groups. Each group was charged with fully assessing the postings from phase 1 of the KRND, identifying research gaps, and developing a potential strategy for moving the field forward. Each working group published its priority recommendations in the Clinical Journal of the American Society of Nephrology. The groups’ papers covered the following 12 topics:
1. Overview of the KRND
3. Acute Kidney Injury
4. Defining Kidney Biology to Understand Renal Disease
6. Improving Chronic Kidney Disease Therapies and Care
7. Propagating the Nephrology Research Workforce
8. Pediatric Kidney Disease: Tracking Onset and Improving Clinical
9. Glomerular Disease
10. Filling the Holes in Cystic Kidney Disease Research
11. Translational Research to Improve Chronic Kidney Disease Outcomes
12. The KRND: Gearing Up to Move Forward
Phase 3 (2014-present): NIDDK officials continue to seek comments on the priorities articulated in the 12 topic papers through comments on PubMed Commons (an NIH-funded open, web-based platform). NIDDK officials told us that, to date, they have not received any comments. 
In addition to the contact named above, Will Black (Assistant Director), Kristeen McLain (Analyst-in-Charge), Jesse Elrod, and Alison Smith made key contributions to this report. Also contributing were Hayden Huang, Drew Long, Yesook Merrill, Vikki Porter, and Emily Wilson.
|
An estimated 17 percent of U.S. adults have chronic kidney disease—the most common form of kidney disease—a condition in which the kidneys are damaged and cannot filter blood sufficiently, causing waste from the blood to remain in the body. Kidney disease patients may progress to ESRD, a condition of kidney failure, which can cause death without dialysis or kidney transplant. In 2013, the Medicare program—which pays for ESRD treatment—spent $30.9 billion to treat approximately 530,000 patients. Given the high cost of kidney disease in terms of health consequences and federal spending, GAO was asked to examine how the federal government funds and prioritizes kidney disease research. This report describes (1) the level of NIH funding for biomedical research on kidney disease, and for other leading diseases and conditions; and (2) how NIDDK sets priorities for kidney disease research. To describe NIH funding for research on kidney disease and other diseases and conditions, GAO selected leading diseases and conditions (based on mortality and prevalence) and analyzed their levels of research funding based on NIH data for fiscal year 2015. To describe how NIDDK sets priorities for kidney disease research, GAO reviewed documents—including those on research portfolios and strategic planning—from NIDDK, NIH, and other relevant federal agencies. Also, GAO interviewed agency officials and private kidney care groups representing a broad range of perspectives. The National Institutes of Health (NIH), within the Department of Health and Human Services, is the primary federal agency that conducts biomedical research on kidney disease, as well as various other diseases and conditions. NIH's budget—$30 billion in fiscal year 2015—mostly funds extramural research that supports research personnel working at universities, medical schools, and other institutions. 
The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)—one of NIH's 27 institutes and centers (IC)—has primary responsibility for kidney disease research. NIH funding for biomedical research on kidney disease in fiscal year 2015 was approximately $564 million—an increase of 2.7 percent from fiscal year 2014. NIDDK provided the majority (60 percent) of this funding, supporting a broad range of projects, such as chronic kidney disease, end-stage renal disease (ESRD) treatment, and kidney donation. GAO also reviewed NIH research funding levels for other diseases and conditions in the United States—those that are associated with high mortality or are among the most prevalent chronic conditions. GAO found that funding for fiscal year 2015 varied widely among these diseases and conditions—for example, from $28 million for emphysema to nearly $5.4 billion for cancer. This variation in funding reflects a range of factors, including each IC's mission, budget, and research priorities. NIDDK obtains input from the broader kidney care community to develop its research priorities. To develop funding announcements that target high-priority research areas, NIDDK considers the kidney care community's input in the context of its ongoing work and its knowledge of the current state of kidney disease research. NIDDK's process for obtaining input from the kidney care community is iterative by design to help ensure that the institute's research priorities evolve to reflect the latest research developments and needs of the kidney care community. Representatives from six private kidney care groups GAO interviewed generally agreed with NIDDK's kidney disease research priorities; however, some of the groups' members identified kidney disease topic areas they believe warrant more attention from NIDDK, such as a lack of kidney disease awareness in the general public. 
NIDDK agreed that this and the other topics raised by the groups merit attention and pointed to a variety of its ongoing programs that address them. The Department of Health and Human Services provided technical comments, which GAO incorporated as appropriate.
|
Federal government real estate that is no longer needed is not automatically sold. Rather, the Federal Property and Administrative Services Act of 1949 (the Property Act) requires a screening process in which the appropriate government officials explore transferring the property with or without payment to another government or nonprofit agency. For example, DOD first screens excess property for possible use by other DOD organizations and then by other federal agencies. If no federal agency has a need for the excess property, it is declared surplus to the federal government and generally is made available to private nonprofit and state and local agencies. First, as stipulated by the Stewart B. McKinney Homeless Assistance Act, the surplus property is made available to providers of services to the homeless. If none of these providers opt to take the property, it is offered to public benefit agencies such as state or local jurisdictions or qualifying nonprofit organizations for other authorized purposes. Any property that remains is available for negotiated sale to a state or local government. Finally, if no state or local government wishes to acquire the property, it is offered for sale to the general public. Title XXIX of the National Defense Authorization Act for Fiscal Year 1994 contained a number of amendments to the Property Act and the Base Realignment and Closure (BRAC) acts of 1988 and 1990. These amendments enabled state and local governments and the general public to receive government property at no cost if it is used for economic development. DOD’s interim regulations also grant these communities a 60-percent share of net proceeds from the sale or lease of properties transferred under this authority, unless the secretary of the military department concerned determines that a different division of net proceeds is appropriate. 
The information contained in this report reflects the May 1, 1994, status of property disposal plans at 37 of the 120 installations closed by the 1988 and 1990 legislation (see fig. 1). A former Secretary of Defense stated that the number of closures in 1995 could exceed those of the previous years because closures have not kept pace with staff and force structure reductions. The Army is credited with almost all of the $69.4 million in property sales revenue and with $5.3 million of the $22.2 million in pending sales. Its largest sale, $38.5 million, was to the state of Hawaii for land at the Kapalama Military Reservation. The Army has also sold 761 family housing units in various locations for an average of about $40,000 per unit. The only non-Army sale was for about 400 family housing units—detached single-family houses and apartments—that the Navy sold for an average of about $420 per unit (total of $168,000) to the Beeville Redevelopment Corporation in Beeville, Texas. The Air Force has about $16.9 million in pending sales. About $19 million of the $92 million in sales and pending sales was merely a transfer of funds from one federal agency to another—not a revenue gain for the federal government. Planned sales of 9,400 acres of property will result in additional revenue once final property disposition decisions are made and cleanup or remediation is in place. As we reported earlier, DOD has been reducing the estimates for land sales revenue as it receives better information on property values and sales data. The primary reason for the low property sales revenues is that 88 percent of the property at the bases we reviewed will be retained by DOD or transferred at no cost to other federal agencies and state and local jurisdictions. Current plans call for the sale of only about 5 percent of the land. The status of the remaining 7 percent is undetermined. Appendix I shows the planned disposition of property. 
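As a quick cross-check, the sales and acreage figures above can be tallied; this is an illustrative sketch that simply restates numbers already cited in this section and is not part of the report's own analysis.

```python
# Reconciliation of the sales figures reported above (millions of dollars).
army_pending = 5.3         # Army's share of pending sales
air_force_pending = 16.9   # Air Force pending sales
pending_sales = army_pending + air_force_pending

realized_sales = 69.4      # almost all Army, including the $38.5M Kapalama sale
total_sales_and_pending = realized_sales + pending_sales

print(f"Pending sales: ${pending_sales:.1f} million")                  # $22.2 million
print(f"Sales plus pending: ${total_sales_and_pending:.1f} million")   # roughly the $92 million cited

# Share of the 192,000 acres at the reviewed bases planned for sale
print(f"Planned for sale: {9_400 / 192_000:.0%} of the land")          # about 5 percent
```

The 9,400 acres planned for sale rounds to the "about 5 percent" figure the report cites against the 192,000-acre total.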
Over 110,000 of the approximately 192,000 acres of the total land available at the bases we reviewed are being retained by DOD or transferred to other federal agencies. Nearly half of this land is contaminated with unexploded ordnance—about 7,200 acres at Fort Ord and 47,500 acres at Jefferson Proving Ground. The federal government’s retention of the contaminated land could significantly reduce cleanup costs since the land will remain undeveloped. DOD will retain 26,000 of the 110,000 acres. Nearly 10,000 acres at 14 bases will be retained for use by Reserve and National Guard units. DOD will also retain over 1,000 acres of military housing at 6 bases for use by personnel assigned to nearby bases. The largest acreage planned for retention by DOD will be 13,000 acres at Fort Wingate for the Ballistic Missile Defense Office for missile testing in conjunction with the White Sands Missile Range in New Mexico. As shown in figure 2, 84,000 of the 110,000 acres will be transferred to other federal agencies, including the Bureau of Land Management, the Fish and Wildlife Service, and the Bureau of Prisons. About 79,000 acres of mostly undeveloped property, wetlands, and natural habitats will be transferred to the Fish and Wildlife Service or the Bureau of Land Management at eight bases. This includes the land contaminated with unexploded ordnance at Jefferson Proving Ground and Fort Ord. The Bureau of Prisons will receive 1,800 acres at 3 bases for federal prisons. Other federal agencies receiving properties are the National Park Service (1,480 acres at the Presidio of San Francisco) and the National Aeronautics and Space Administration (1,440 acres at Moffett Naval Air Station). About 60,000 acres of the 192,000 acres likely will be transferred at no cost to state and local jurisdictions and nonprofit organizations. 
Most of the property, about 40,000 acres, will be used for public benefit purposes such as airports, parks and recreation, education, and homeless assistance. (Fig. 3 shows the breakdown of these no-cost transfers). An additional 19,500 acres will be transferred for economic development purposes. Of the 24 bases with airfields, 16 will be transferred to local communities for public use, 2 will be retained by federal agencies, 5 will be used for nonaviation purposes, and the reuse of 1 is yet to be determined. Including property to be used to help finance the operation of the airfields, over 30,000 acres will be transferred for public aviation uses. The communities are hoping to convert military airfields into civilian airports. In two instances, Bergstrom Air Force Base and Myrtle Beach Air Force Base, the airfield transfers will meet Federal Aviation Administration-identified needs for primary commercial airports. The Federal Aviation Administration has categorized the potential use of most of the rest of the airfields as general aviation airports, with those in metropolitan areas also potentially serving as reliever airports that can provide alternative landing sites when major airports are congested. Local redevelopment authorities report difficulties in attracting aviation-related tenants, and they are competing with existing general aviation airports as well as with each other for tenants. General aviation traffic has declined nationwide by about 32 percent in the past 14 years. Bases in rural areas—e.g., Wurtsmith, Loring, and Eaker Air Force Bases—have a particularly difficult time attracting commercial tenants. We analyzed the grants to communities at the 16 bases where attempts to reuse military airfields as civilian airports were the centerpiece of the reuse plan (see fig. 4). 
The results of our analysis show that while communities at these 16 bases are developing and implementing reuse plans for 28 percent of the total acres of the 37 bases, they have received 78 percent of the $66 million in planning and infrastructure grants. At 15 of the 37 bases, communities are requesting about 6,800 acres for parks and recreation. The largest transfer will be at Fort Ord, about 2,600 acres, including beaches and sand dunes. At Mather Air Force Base, about 1,500 acres will be transferred to the county for park and recreation use, and at Fort Benjamin Harrison, a 1,100-acre parcel will become a state park. As of May 1994, about 2,000 acres at 16 bases were planned for transfer through the Department of Education to qualified organizations for educational purposes, with the largest conveyances at Fort Ord, Williams Air Force Base, and Lowry Air Force Base. At the time of our review at Fort Ord, 2,700 acres were requested for an education, science, and technology center focusing on environmental sciences. It will include a new California State University campus, a University of California research and technology center, and a language training institute emphasizing Pacific Rim languages. Local officials recently changed their request for Fort Ord so that they can qualify for an economic development transfer. An economic development conveyance would avoid the restriction attached to educational transfers, which requires the donee to use the property continuously for educational purposes for up to 30 years. At Williams Air Force Base, over 600 acres, including many of the core base facilities, have been requested for an education, research, and training consortium focusing on aviation-related training and research and involving nearby Arizona State University, Maricopa Community College, and 21 other educational institutions. 
Plans at Lowry Air Force Base call for conversion of an Air Force training center into an educational consortium that will emphasize training new and displaced workers and involve the local community college and various other schools. As of May 1994, 17 of the 37 bases were planning to convey property at no cost to homeless assistance organizations (see fig. 5 for locations). Several other bases will likely do so once they complete their property screening process. As mentioned earlier, under the McKinney Act, homeless organizations that have been certified by the Department of Health and Human Services generally have priority over organizations not representing the homeless when requesting surplus government property. The property may be used to provide temporary housing for the homeless, alcohol and drug recovery centers, abuse shelters, and distribution facilities for food and clothing. The amount of property involved thus far is relatively small (see app. II for details). It amounts to about 500 acres (0.3 percent of the total property). The property includes about 1,600 family housing units (5 percent of the total) and 1,000 single housing units. At each of three California bases—Tustin Marine Corps Air Station, Fort Ord, and Long Beach Naval Station—plans call for homeless providers to receive more than 200 family housing units. Reuse authorities at 10 bases plan to request about 19,500 acres in economic development transfers. Under these provisions, reuse authorities can request property at no cost for economic development and job creation purposes. The local authorities can then lease or sell the property to companies that will create jobs. The net proceeds from leasing or selling this property are shared with the federal government—generally 60 percent for the community and 40 percent for the government. 
Rules implementing these new provisions will not be finalized until early 1995, and some local authorities are waiting until then to make final decisions on these conveyance requests. Another 9,400 acres have been planned for sale to the public; sale is the last step in the disposal process and can occur only after all qualifying entities have decided they do not want the land. The disposition of the remaining 12,900 acres has yet to be determined by local reuse authorities. Communities are asking the federal government to provide (1) cash grants; (2) marketable revenue-producing properties, such as golf courses and housing units, to help pay for reuse activities; and (3) funds for upgrading buildings and infrastructure. Cash grants are available to communities through federal programs administered by such agencies as DOD’s Office of Economic Adjustment, the Federal Aviation Administration, the Department of Labor, and the Economic Development Administration in the Department of Commerce. As of May 1, 1994, the communities at the 37 bases we examined had received $107 million in federal grants to assist in developing and implementing reuse plans. According to DOD officials, most of the funds were provided by DOD to the administering agency because their use is related to a base closure. Additional grants are likely to be forthcoming. The Office of Economic Adjustment provides 3- to 5-year grants to local communities to develop and implement reuse plans. If the plan calls for a civilian airport, communities can request additional funds from the Federal Aviation Administration for airport planning and improvements. If infrastructure improvements are needed, communities can request grants from the Economic Development Administration. As of May 1994, the Office of Economic Adjustment had provided approximately $19.1 million to local authorities for reuse planning. 
The Federal Aviation Administration had provided $3.8 million, the Economic Development Administration $43.1 million, and the Department of Labor $40.5 million in grants. See appendix III for the distribution of grants for the 37 installations we reviewed. Most cash grants have gone to communities trying to establish civilian airports. The local reuse authority at Eaker Air Force Base has received $1.7 million from the Office of Economic Adjustment thus far, and a DOD official projected that funding may be required for 6 years. The Economic Development Administration grants at Wurtsmith Air Force Base amounted to $9.7 million to tie the base water supply and sewer to the municipal system, shut down base wells, and construct large water intakes from Lake Huron. Communities are also asking DOD to provide property that can easily generate revenue to support reuse activities unrelated to the property. Community officials say they need revenue-generating properties, such as golf courses and family housing units, to help fund operating expenses while they implement their reuse plans, such as airports or educational institutions. At England Air Force Base, local authorities are asking for the entire base, including family housing units and a golf course, to help support the airport. The reuse plan predicts it will be at least 10 years before the airport will be self-sustaining. At Fort Ord, officials of the prospective California State University, Monterey Bay, plan to lease 1,250 units of family housing to support university operations. A Fort Ord housing official stated that the university is also asking for the profits that DOD has received from leasing the housing prior to its conveyance. At some installations, local reuse authorities, educational institutions, and other reuse groups are seeking federal funds to renovate buildings, upgrade utility systems, construct roads, or improve other infrastructure for properties being conveyed. 
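The agency-by-agency grant amounts above can be tallied against the $107 million total cited earlier; the short sketch below only restates figures already reported in this section.

```python
# Tally of the reuse-assistance grants reported above (millions of dollars),
# by administering agency, as of May 1994.
grants = {
    "Office of Economic Adjustment": 19.1,
    "Federal Aviation Administration": 3.8,
    "Economic Development Administration": 43.1,
    "Department of Labor": 40.5,
}
total = sum(grants.values())
print(f"Total grants: ${total:.1f} million")  # $106.5 million, consistent with the roughly $107 million cited
```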
At Fort Ord, $15 million was appropriated out of DOD’s operations and maintenance accounts to renovate buildings at California State University, Monterey Bay. A university official estimated that an additional $140 million is needed from the federal government over the next 10 years to complete renovations. The state is providing $12 million in operating funds for the campus. The official said that, along with the conveyance of the requested land and buildings at no cost, federal funds for the renovation of buildings were essential for the campus to become a reality, and continued federal support will be needed until the California economy improves. California voters recently rejected a ballot proposition that would have provided authority to issue bonds and use the proceeds to construct or renovate buildings and acquire related fixtures at the state’s colleges and universities. According to a base official, the Army spent $69 million at the Presidio of San Francisco to renovate infrastructure and buildings prior to the installation’s transfer to the National Park Service. At Castle Air Force Base, base closure officials reported that the gas distribution system on that installation will have to be rebuilt, the sewer and electrical systems upgraded, and buildings brought into compliance with state and federal standards, such as the Americans with Disabilities Act. DOD has so far funded an Economic Development Administration grant of $3.5 million to connect the installation with the municipal sewer system. A community official estimated that $200 million would have to be spent at Tustin Marine Corps Air Station on access roads and other infrastructure improvements to enable development of the installation. The community is asking that the costs of such improvements be subtracted from the federal government’s revenue from the sale of the property. Reuse planning and disposition of property at closing bases have been delayed for a number of reasons. 
Disagreements between various agencies and jurisdictions have stalled reuse decisions at some bases. Some communities are waiting until regulations are established implementing new property disposition provisions before finalizing their reuse plans. DOD responsibility for environmental cleanup further delays disposal of base property. DOD has the discretion to determine what the highest and best use for the property is and relies heavily on local reuse plans to make this determination. The one exception is that DOD officials maintain that they cannot deny homeless requests that are approved by the Department of Health and Human Services. When conflicts arise, DOD base closure officials urge agencies and local officials to try to reach an accommodation at the local level. DOD officials urge local communities to form a single reuse authority and unite behind a single reuse plan. In several cases, jurisdictional and reuse disputes within the local community have delayed base conversion. For example, at George and Myrtle Beach Air Force Bases, disputes between cities and counties over who should have the reuse authority and how large the airport should be have been major problems. At Myrtle Beach, the state of South Carolina finally intervened to establish a single reuse authority and determine what the reuse plan should be. Indian groups have expressed interest in acquiring property at 14 of the 37 bases we reviewed (see fig. 6 for locations). These requests include use of base property for education and job training, cultural and craft centers, housing, health facilities, economic development, and casinos. At seven of the bases, the Department of the Interior has requested property that would be held in trust by the Bureau of Indian Affairs for tribal programs run by local Indian groups. 
At the time of our review, DOD had not approved any of the Indian groups’ requests, nor had DOD determined whether requests through the Department of the Interior should be given federal agency priority consideration. Thus, property disposition decisions at these bases have been delayed. In some cases, Indian groups were not represented on local reuse committees, and Indian requests and local reuse plans were in conflict. Furthermore, the Indian groups maintain that they should have sovereignty over property they receive, while the local jurisdictions want to maintain zoning and land use control. In Seattle, the Muckleshoot Tribe has requested a major portion of Puget Sound Naval Station, which the city plans to use to house the homeless and hold recreation, cultural, and other activities. In several cases, federal agency requests conflicted with local reuse plans. While these conflicts can delay local reuse planning, they are usually resolved through negotiation between DOD and the community. For example, at Williams Air Force Base, the Army Reserve requested property that local officials said was essential for their planned educational consortium. At the time of our review, this case remained unresolved, but the Reserve and local officials subsequently reached a mutual agreement. Disputes between communities and homeless providers over the extent of base property to be conveyed for the homeless have led to delays at some bases. The Department of Health and Human Services could deny homeless provider requests if it determined the provider lacked experience or financial viability for such a program. However, in deciding whether to approve homeless requests, officials of the Department of Health and Human Services believed the McKinney Act gave them no discretion to consider whether the request would disrupt the local reuse plan. 
In October 1994, Congress passed and sent to the President for signature legislation that would give communities more flexibility in developing plans to meet homeless needs and federal agencies more discretion in approving such plans. The new legislation allows communities to develop reuse plans that incorporate the needs of the homeless either at the base or elsewhere in the community. If the Secretary of Housing and Urban Development determines that the community’s plan provides a reasonable amount of property and assistance to meet the needs of the homeless, then direct applications by homeless assistance providers to the federal government under the McKinney Act for base property would be eliminated. In 1993, Congress passed legislation to expedite the base conversion process and support economic development in communities facing base closures. Communities in the midst of reuse planning had to choose whether to continue under the old base conversion procedures or to request to come under the new provisions. Many decided to delay decisions until implementing regulations were finalized. DOD issued interim rules in April 1994, and DOD officials expected the rules to be finalized in early 1995. DOD officials told communities deciding to request economic development transfers under the new rules that they would also have to go through another McKinney Act homeless screening, which could add 8 months to the process. Furthermore, several communities requested delays in federal approval of homeless requests until congressional action was completed on the amendment to the McKinney Act. All the closing bases we visited had environmental cleanup that needed to be done, which in many cases is the most difficult obstacle to getting property into productive reuse. Generally, base property cannot be transferred until cleanup is completed or the government warrants in its deed that all environmental remediation measures are in place. 
However, DOD has the authority to transfer property for the cost of cleanup to any person who agrees to perform the environmental restoration. In a related assignment, we will report on the difficulties in cleaning up bases, the effect of environmental contamination on DOD’s ability to transfer property, the federal government’s liabilities from environmental contamination, and DOD’s long-term plans for addressing environmental problems at closing bases. We collected information from 37 of the 120 installations closed by the 1988 and 1991 Base Closure Commissions. These bases were selected because they were, for the most part, the larger installations and they had base transition coordinators assigned by DOD. Our review included 12 closures by the 1988 Commission and 25 closures by the 1991 Commission. The closures involve the disposal of 192,000 acres of land in 21 states. We performed our work at the DOD Base Transition Office, the Office of Economic Adjustment, and the military services’ headquarters in the Washington, D.C., area. We also contacted base closure and community officials at the 37 closed bases. We visited Pease, Chanute, and Eaker Air Force Bases; Forts Sheridan and Ord; Chase Field Naval Air Station; and Naval Station Puget Sound (Sand Point). We also visited offices of the Federal Aviation Administration, the Economic Development Administration, and the General Services Administration to discuss issues involving base closure. We reviewed the most recent land sales data from the military services’ base closure offices. We compared the 6-year land revenue estimates from DOD’s base realignment and closure fiscal years 1991-95 budget justifications for BRAC-I (the bases closed in 1988) and its fiscal years 1993-95 justifications for BRAC-II (the bases closed in 1991). 
To determine the current plans for reusing property at closing military installations, we reviewed community reuse plans where available and interviewed base transition coordinators, community representatives, and DOD officials. Where community reuse plans had changed or were not available, we identified the most likely reuses planned by these parties. When the parties involved disagreed over reuse plans, we categorized the property as undetermined. As requested, we did not obtain written agency comments. However, we discussed the report’s contents with DOD officials and their comments have been incorporated where appropriate. Our review was performed between July 1993 and September 1994 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Directors of the Defense Logistics Agency and the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. We have issued the following reports related to military base closures and realignments:
Military Bases: Letters and Requests Received on Proposed Closures and Realignments (GAO/NSIAD-93-173S, May 25, 1993).
Military Bases: Army’s Planned Consolidation of Research, Development, Test and Evaluation (GAO/NSIAD-93-150, Apr. 29, 1993).
Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closure and Realignments (GAO/T-NSIAD-93-11, Apr. 19, 1993).
Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173, Apr. 15, 1993).
Military Bases: Revised Cost and Savings Estimates for 1988 and 1991 Closures and Realignments (GAO/NSIAD-93-161, Mar. 31, 1993).
Military Bases: Transfer of Pease Air Force Base Slowed by Environmental Concerns (GAO/NSIAD-93-111FS, Feb. 3, 1993).
Military Bases: Army Revised Cost Estimates for the Rock Island and Other Realignments to Redstone (GAO/NSIAD-93-59FS, Nov. 23, 1992).
Military Bases: Navy’s Planned Consolidation of RDT&E Activities (GAO/NSIAD-92-316, Aug. 20, 1992).
Military Bases: Letters and Requests Received on Proposed Closures and Realignments (GAO/NSIAD-91-224S, May 17, 1991).
Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments (GAO/NSIAD-91-224, May 15, 1991).
Military Bases: Processes Used for 1990 Base Closure and Realignment Proposals (GAO/NSIAD-91-177, Mar. 29, 1991).
Military Bases: Varied Processes Used in Proposing Base Closures and Realignments (GAO/NSIAD-91-133, Mar. 1, 1991).
Military Bases: Process Used by Services for January 1990 Base Closure and Realignment Proposals (GAO/NSIAD-91-109, Jan. 7, 1991).
Military Bases: Relocating the Naval Air Station Agana’s Operations (GAO/NSIAD-91-83, Dec. 31, 1990).
Military Bases: Information on Air Logistics Centers (GAO/NSIAD-90-287FS, Sept. 10, 1990).
Military Bases: Response to Questions on the Realignment of Forts Devens and Huachuca (GAO/NSIAD-90-235, Aug. 7, 1990).
Military Bases: An Analysis of the Commission’s Realignment and Closure Recommendations (GAO/NSIAD-90-42, Nov. 29, 1989).
|
GAO provided information on the Department of Defense's (DOD) projected revenues from property sales from closed military bases, focusing on the: (1) revenues the government has received and expects to receive from military base property sales; (2) amount of additional resources the government has given to support communities' reuse plans; and (3) factors which delay the transfer of property to communities. GAO found that: (1) revenues from military base property sales are expected to be far less than DOD anticipated; (2) the majority of the disposed property will be retained by DOD or transferred to other federal agencies, states, and localities at no cost; (3) where Congress has not specifically authorized a property transfer without reimbursement, it has specified that agencies receiving transferred property should reimburse DOD for 100 percent of the property's estimated fair-market value or acquire a reimbursement waiver; (4) $69.4 million of the projected $92 million in revenues from military base property sales has been realized and an additional $22.2 million is expected from pending property sales; (5) about $19 million in property sales has been the result of interagency transfers; (6) DOD could increase its revenues by selling an additional 9,400 acres of military property; (7) DOD continues to reduce its property sales revenue estimates as it obtains better property value and property availability information; (8) in addition to transfers of large portions of land at no cost, many communities have asked the government for cash grants, marketable revenue-producing properties, and building and infrastructure upgrades; (9) as of May 1994, 37 communities have received $107 million in cash grants; (10) additional funding requirements will increase as the base closure process continues; and (11) reasons for the delays in property transfers include disagreements over reuse plans between competing interests, changing laws and regulations, and unresolved 
environmental cleanup efforts at some bases.
|
Advances in telecommunications technology have the potential to provide new and improved services to people no matter where they live. For example, students in rural areas of Iowa are being taught Russian, music, and calculus by teachers in distant urban centers through two-way video communications. North Carolina has begun to link rural and urban hospitals to provide rural sites with access to medical specialists via video. A telephone company in Nebraska has created jobs in a small rural town by establishing a nationwide telemarketing business. Modern telecommunications can thus be used both to improve the delivery of services and to promote economic development. In figure 1.1, a technician at a hospital in Des Moines is transmitting an echocardiogram to be read by a specialist at the University of Iowa hospitals in Iowa City, Iowa—100 miles away. Using advanced telecommunications instead of sending a tape by a 2-hour courier trip results in a quicker diagnosis and more timely treatment for the patient. While services such as two-way video are offered in some places in the United States today, they are not widely available because the current telecommunications infrastructure, notably the telephone system, was not designed to provide them. Billions of dollars worth of infrastructure improvements would be needed in order to quickly transmit data and high-quality video images throughout the nation. Some state governments are currently looking for ways to accelerate this investment and ensure that services will be affordable and widely available to their residents. The experiences of the states that have begun this process can provide critical information to federal policymakers and to other states as they revise their telecommunications policies and seek to develop a modern telecommunications infrastructure. 
Historically, private investors have financed the building of the United States’ telephone system, the most widely available form of telecommunications infrastructure. This system now provides services to over 93 million American households. As of 1994, about 94 percent of American households had access to basic telephone services. Telephone companies are already improving their infrastructure to be able to provide advanced telecommunications services. This investment is occurring mainly in business districts and more densely populated residential areas. Profit incentives are not high for companies to provide such service in rural areas, where there are fewer businesses and the cost of delivering services is usually higher, unless financial support is available or cost averaging is applied. It is likely that private investment in advanced telecommunications will be slower in rural areas as well. Recent studies by the Department of Commerce and Office of Technology Assessment found that the use of telecommunications can be particularly beneficial to rural areas, where the population density is low. However, the distances between people in rural areas also increase the cost of providing these services. Some industry observers expect increased competition to lead to lower prices and more choices in telephone service. Others point out, however, that competition is less likely to develop in rural areas and that customers in these areas may be faced with higher prices because, without subsidies or cost averaging, the prices for telecommunications services will likely reflect the higher cost of providing service there. Advanced telecommunications services can be provided, in part, by upgrading the current telephone system’s infrastructure to increase the capacity, or “bandwidth,” of the telephone lines and switches. 
These upgrades include powerful new computer switches, complex software, and fiber optic cables that combine to form a high-capacity, “broadband” telecommunications infrastructure. The technologies that can be used for upgrades are diverse. For instance, replacing existing copper telephone lines with new fiber optic lines can dramatically increase capacity, enabling the lines to carry many thousands of times more data. In addition to telephone lines, other kinds of technologies—including satellites, cellular telephones, and cable television systems—can transmit information as part of the telecommunications infrastructure. Besides the infrastructure needed to move information over distances, advanced telecommunications depend on two other elements—on-site equipment and switches that have been upgraded to handle larger amounts of information. Figure 1.2 illustrates these components of a network. The equipment at the originating site turns the information generated by the user, such as sounds, words, and pictures, into a form that can be transmitted. The switches route the transmission to its destination through cables or some other transmission channel. Once the transmission arrives at its destination, other types of on-site equipment convert the transmission back into the same usable form of sounds, words, or pictures. The President recently signed legislation reforming federal telecommunications law. This new law envisions a telecommunications industry in which a variety of companies—local telephone, long-distance, cable television, and wireless—can offer similar services and compete with one another. For example, the new law allows competition for local telephone services. While promoting deregulation, this law seeks to preserve and advance the concept of “universal service”—affordable and widely available telephone service. 
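The scale of the capacity gap described above can be made concrete with a rough calculation. The following sketch compares idealized transfer times over links of different capacities; the file size and line rates are illustrative assumptions (a voice-grade modem, a T-1 line, and an approximately 155-Mbps SONET OC-3 fiber circuit), not measurements from any of the states' networks.

```python
# Rough illustration of why bandwidth matters: idealized time to transmit
# a file over links of different capacities, ignoring protocol overhead
# and congestion. File size and line rates are illustrative assumptions.

def transfer_time_seconds(file_size_megabytes, line_rate_kilobits_per_sec):
    """Idealized transfer time: total bits divided by line rate."""
    bits = file_size_megabytes * 8_000_000  # 1 MB treated as 8 million bits
    return bits / (line_rate_kilobits_per_sec * 1_000)

image_mb = 10  # assumed size of a single compressed medical image

for name, kbps in [("28.8 kbps modem (copper)", 28.8),
                   ("1.544 Mbps T-1 line", 1_544),
                   ("155 Mbps fiber (SONET OC-3)", 155_000)]:
    print(f"{name}: {transfer_time_seconds(image_mb, kbps):,.1f} seconds")
```

Even under these idealized assumptions, the difference between the copper and fiber cases spans three to four orders of magnitude, which is why applications such as two-way video are impractical over ordinary copper loops.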
Universal service has been a federal goal since the enactment of the Communications Act of 1934, and federal and state governments have supported this goal through a series of subsidies and other types of assistance. The effect of this policy has been to make telephone service more affordable for residential customers and rural users. The new law provides for the establishment of a joint federal-state board to make recommendations to the Federal Communications Commission on the steps necessary to preserve and advance the goal of universal service. At the state level, officials have discussed the value of advanced telecommunications services in national forums such as the National Governors’ Association and the National Conference of State Legislatures. They envision using advanced telecommunications to provide education, health care, and other public services more effectively and more equitably (see fig. 1.3). They also believe these services will make their states more attractive to new and expanding businesses and allow their rural residents to participate more fully in state government. As a result, leaders in state governments are looking for ways to accelerate the development of the telecommunications infrastructure. Three states with significant rural populations—Iowa, Nebraska, and North Carolina—have been cited as leaders in the development of statewide advanced telecommunications services. Recognizing that decisions about private investment for improving the telecommunications infrastructure are driven by market circumstances, officials in these states have worked with the private sector and with potential users to encourage private investment and ensure the availability of service in less densely populated rural areas. Table 1.1 shows the demographics of these three states relative to the nation as a whole. Iowa is a midsized agricultural state with a population of about 2.8 million. 
The state has a large number of midsized towns—ranging from 8,000 to 10,000 people—which are fairly equally distributed in the eastern two-thirds of the state. The state also has about 100,000 farms. Of Iowa’s 99 counties, 88 are considered rural. Iowa’s primary goals for a statewide telecommunications network were improving educational services and equalizing educational resources, such as the course offerings available at urban and rural educational facilities. Iowa selected a system based on high-capacity fiber optic technology and the synchronous optical network (SONET) transmission standard, capable of transmitting voice, data, and two-way interactive video. This technology provides high-quality pictures that let students and teachers see each other clearly. Nebraska is a predominantly agricultural state with a scattered population. Sixty percent of the state’s 1.6 million residents are located in four major cities; the rest live in small and midsized communities that are often distant from each other. The western parts of the state are sparsely populated. Of the state’s 93 counties, 88 are considered rural, and 10 of the 25 counties with the smallest populations in the nation are located in Nebraska. Nebraska’s first priority for its network was providing high-speed data services, such as Internet connections, at prices that the state’s small, rural schools and organizations could afford. The frame-relay technology that the state selected streamlines data transmission and allows data to travel more quickly and cost-effectively than other alternatives. The state has also created a video network that community organizations can use for meetings, hearings, and training sessions, using leased T-1 lines. The “compressed” video technology selected for the network reduces the bandwidth needed to send pictures and the cost of transmission. However, the resulting video images are often seen as jerky or blurred. 
About half of North Carolina’s 7 million residents live in midsized towns found along a central corridor stretching east from the state’s largest city, Charlotte, to the Atlantic coastline. This area includes the generally affluent Raleigh-Durham metropolitan area and Research Triangle Park, one of the nation’s leading centers for medical, electronic, and industrial research. The western part of the state is mountainous and forested, and many of the state’s least populated counties are found in this area. The coastal region also includes isolated towns. Of North Carolina’s 100 counties, 75 are considered rural. The primary objectives for North Carolina’s network were improving education and making North Carolina’s businesses more competitive. The state selected state-of-the-art technology: a high-capacity fiber optic network and advanced asynchronous transfer mode (ATM) switches that can connect a very large number of users and support very fast interactive video transmission to multiple users simultaneously. The costs of this advanced system were considered acceptable because state and private-sector officials believed that it would have a longer useful life than a system built with older technologies. The Chairman and Ranking Minority Member of the Senate Committee on Agriculture, Nutrition, and Forestry asked us to provide information on selected states that have started developing their telecommunications infrastructure, specifically (1) how these states encouraged private investment in improving their telecommunications infrastructure, (2) how they provided for increased and affordable access to advanced telecommunications services, and (3) what lessons their experiences could provide for others. 
To respond to this request, we conducted case studies of three states—Iowa, Nebraska, and North Carolina—that (1) include rural populations that constitute at least one-third of the state’s total population and (2) have made significant progress in deploying statewide advanced telecommunications systems. To answer the first two objectives, we used a case-study approach that included interviews with state and private-sector officials and reviews of state planning documents, audit reports, and network operation figures, as well as pertinent economic and demographic data for the states and the nation. To answer the second objective, we also examined the extent to which high schools in rural and urban areas have access to the states’ networks. We chose high schools because providing service to them was a goal in all three states. We relied on the U.S. Department of Agriculture (USDA) for a determination of urban and rural counties and on the states’ data for a listing of connected and unconnected schools. To answer the third objective, we asked project participants in the three states what factors had helped or hindered their efforts; we combined this information with our observations and analysis to identify the lessons. We performed our work from June 1995 through February 1996 in accordance with generally accepted government auditing standards. We discussed a draft of this report with senior officials with responsibility for the networks in the three states we visited, as well as with officials of the National Telecommunications and Information Administration (NTIA). These officials generally agreed with the information presented and provided some information to clarify and update the report. A detailed discussion of their comments and our responses is included at the end of chapter 4. While all three states wanted to use advanced telecommunications to make services more accessible to their residents, each also wanted to avoid, if possible, the large-scale public expenditures that could be required to build the needed infrastructure. 
As a result, all three states encouraged the telephone companies operating in their states to invest in upgrading the existing networks more quickly so that the companies could make advanced telecommunications services available within the states’ time frames. Each of the states tried to encourage private investment through the use of long-term agreements whereby the state would purchase advanced telecommunications services from the telephone companies. At the time Iowa tried this strategy, uncertainties about the profitability of providing advanced services discouraged the telephone companies from accepting the risks of investing in the statewide network needed to provide these services. However, by the time Nebraska and North Carolina began their projects, the telephone companies had already begun to upgrade their facilities, by, for example, using more fiber optic lines. Also, having the states as long-term customers provided an income stream and reduced the risk of investment. Finally, by investing in their own infrastructure, companies could avoid competing with a state-owned facility. In 1987, Iowa began efforts to become the first state to create a fiber optic telecommunications network that would deliver services to classrooms throughout the state. The Iowa Public Broadcasting Board was directed to develop a design for a video network, and a formal request for private-sector proposals to construct the network was issued in 1988. According to state officials, the request had several technical flaws in it, and telephone company representatives were uncertain whether they would be able to recover the costs of building the system. Despite these uncertainties, the state received three bids to build the network. After reviewing these, the state announced its intent to award the contract to one of the companies. However, a challenge was filed and the intended award was overturned in March 1989. 
State officials ascribe the state telephone companies’ lack of interest in the project to several factors. These include doubts about the profitability of the network, a belief that it would be too expensive, and hesitancy to commit to a long-term project that might not allow them to recover their investment in an acceptable time frame. These officials also told us that they believe that the state’s telephone companies were not prepared to make the internal policy decisions needed to make long-term lease agreements or ready to make infrastructure improvements as quickly as the state required. One telephone company cited as an inhibiting factor the cost and complications of assembling proposals for such an uncertain outcome. Another saw the level of investment, lack of a known customer base, and high technology required as substantial risks. In May 1989, the state legislature passed a law providing the initial funding to build the Iowa Communications Network. This state-owned, statewide network was to be designed to provide video, voice, and data service to the state government and educational system. The proposal was not debated by the full legislature and was adopted on the last day of the legislative session. The staff responsible for the design of the network later told Iowa’s state auditor that they were not involved in the drafting of the provision until the final days of the legislative session and did not have sufficient time to analyze the proposed network or its costs. According to state officials, telephone company representatives were also excluded from this process. In December 1989, the state asked for proposals to build the network. Two companies bid on the project, but both bids were rejected as too costly, and the proposal was withdrawn. In October 1990, Iowa issued a third, more limited proposal intended to reduce the cost of building the network by, for example, including fewer sites. 
This proposal did not provide for the equipment or modifications necessary to fully carry the state government’s voice and data service. A contract to begin construction was awarded in April 1991, and $96 million in bonds were issued to finance the system. However, it was later determined that the state government’s telephone service needed to be included in order to generate sufficient cash flow for operations. To fund the resulting design modifications, the state was forced to issue a second set of bonds in 1993 for $18.5 million. Despite these difficulties, Iowa has now completed parts I and II of its network. The first part entailed installing a network control center at an armory in central Iowa and linking it to the state’s 15 community colleges, 3 state universities, and more than 25 private colleges; Iowa Public Television; and the state capitol complex. The second part involved extending the network so that it was available in each of the state’s 99 counties. These two parts were completed by late 1993. State officials estimated that Iowa had spent more than $100 million to build the network as of the end of 1993. Figure 2.1 shows the network Iowa built during these first two parts. Iowa began Part III of its network in early 1995. In this final part, Iowa will connect an additional 474 sites by 1999, including more than 350 schools and 87 libraries, at an estimated additional cost of about $95 million. Under Part III, Iowa is required by statute to lease fiber optic cable facilities from qualified private telecommunications providers. Thus, to connect the remaining sites, the state is contracting with private companies to provide the local connections. The state will pay the construction cost of installing the fiber circuit, then lease the circuit from the private provider for 7 years. State officials expect this arrangement to be especially beneficial to the smaller telephone companies. 
This arrangement also reduces the initial amount of capital that private companies need to participate in network development. Because of some legislators’ concerns about whether the state should own and operate a network, the legislature requested a study to examine alternatives, which ranged from retaining state ownership of the network to selling the network. On the basis of the study, the Iowa Telecommunications and Technology Commission, which manages the network, unanimously recommended retaining state ownership because it was the most practical option at the time. The legislature accepted this recommendation and, according to state officials, the legislature will restudy this issue in the year 2000. By the early 1990s, when Nebraska and North Carolina were beginning to seek private-sector assistance in providing advanced telecommunications services, the telephone companies were more receptive to cooperative arrangements because of changes that had occurred since Iowa began its project. According to private-sector officials we spoke with in both Nebraska and North Carolina, the telephone companies had already begun efforts to upgrade their facilities and were more willing to finance network development. To provide the services the states wanted, the companies had to, for example, replace copper wires with fiber optic lines and upgrade their switches. According to telephone company representatives in both Nebraska and North Carolina, the companies were already planning to make some of these improvements. For example, telephone company officials we spoke with said that their companies were increasing the use of fiber optic cable in their systems because it is more cost-effective and reliable than copper lines. Officials also told us that they had already begun to test and offer some advanced services, such as fast data service and video communications for education, in limited areas. Iowa’s experience also served as a motivating factor. 
By demonstrating that a state could build its own network, Iowa reduced some of the earlier uncertainties about cost and demand. However, according to telephone company officials in Nebraska and North Carolina, the telephone companies did not want the states to build networks that could compete with them for business, as Iowa had done. Participants in North Carolina identified prior experience with advanced telecommunications pilot projects involving public- and private-sector participants as a factor that helped convince companies to work with the state on its advanced network. There, the telephone companies had conducted several projects testing advanced telecommunications applications for schools and hospitals. These tests helped convince the companies that it was technically feasible to offer advanced telecommunications services on a larger scale. Participants also identified the positive working relationship developed during an upgrade of the state’s telephone system as a factor that built trust between the companies and the state government. According to participants in Nebraska, the reduction of state regulations on telephone service prompted the telephone companies to experiment by offering new services. The companies were more willing to offer such services in Nebraska, officials said, in order to demonstrate the benefits of deregulation to other states. In this environment, the long-term leases that Nebraska and North Carolina offered—called an “anchor-tenant” arrangement—helped convince the telephone companies that responding to their states’ proposals was in their best business interests. For example, as a result of a meeting with several state telephone companies, Nebraska’s Division of Communications has entered into 5-year agreements to buy frame-relay services at wholesale prices. At the same time, costs are reduced for the telephone companies because the state is performing functions, such as billing, that the company performs for other customers. 
North Carolina used a similar anchor-tenant arrangement to attract private investment. After deciding it wanted to make advanced telecommunications services available statewide, the State Controller’s Office asked the local telephone companies to help develop the technical specifications required for this network. It then struck formal agreements with three major local telephone companies and a long-distance company to build the infrastructure needed for its applications. In return, the state agreed to pay rates based on estimates of a certain level of use, which were derived from the original projections of the number of sites to be connected and their levels of connection time. These rates are reviewed every 2 years and can be adjusted to reflect the actual usage if the state and the companies agree. By basing their rates on projected usage and allowing for changes based on actual usage, the telephone companies could plan to recover their costs in a time period they thought was reasonable. According to participants in the projects in Nebraska and North Carolina, these long-term agreements between the states and the telephone companies benefited the companies in the following ways: Investment risk was reduced by ensuring a stream of revenues to help recover the costs of installing the hardware. The infrastructure that was upgraded is owned by the companies, and any capacity not committed to the state could be sold to other customers. (In North Carolina, officials estimate that 75 percent of the capacity of the upgraded network will be available for lease to private customers.) The presence of an advanced telecommunications infrastructure can serve as an economic development tool to help states attract new business and retain existing jobs—which means the companies will have more customers to sell their services to in the future. 
Although the telephone companies had begun to make some improvements to their systems, company representatives agreed that the states’ efforts encouraged them to make improvements faster than they would have on their own, especially in rural areas. Representatives of one of the Nebraska companies we interviewed estimated that they had invested $7.5 million in the state system by October 1995. The company expects its investment to rise to $14 million in the near term. Officials with the three telephone companies we spoke with in North Carolina estimated that they had invested about $43 million through August 1995 to upgrade their facilities. Two of the three North Carolina companies could not, however, estimate how much investment was due solely to the state’s efforts. The three states we visited agreed that making advanced telecommunications services available to public organizations was more practical than providing services to individual homes. They made services more affordable for users by providing funding for some local equipment and establishing lower prices for users than these users could obtain on their own. These policies were designed in part to address the concerns of rural residents, who could face higher prices because of the distances between rural communities and the smaller number of people living in them. While all three states have made progress in providing advanced telecommunications services to communities, they are still in the early stages of deploying their networks and plan to connect many more sites over the next several years. A review of the number of rural high schools connected in each state indicates that many are still waiting for connections. Although all of the states wanted to accelerate the pace at which services could be made widely available, they considered delivering advanced services to homes unfeasible and unnecessary. 
Instead, the three states decided to provide for increased and affordable advanced telecommunication services by locating access points in public buildings—such as schools, libraries, and hospitals—where the equipment could be used by many people. Each state has begun connecting sites at these locations. Table 3.1 illustrates the type and number of organizations that have been connected to the states’ networks. Figure 3.1 shows some ways in which advanced telecommunications are being used in each state. All of the states are giving special priority to improving education, and more than 500 schools in these states now have access to instructional resources located beyond their classrooms and buildings. All three states see the use of technology as a way to equalize educational opportunities between rural and urban areas. North Carolina, like several other states, is being sued over alleged inequities in the amount of funding available to school districts in different parts of the state. The state hopes that use of the network will help alleviate these concerns and that, once connected, smaller, poorer schools will have access to specialized educational service regardless of their resource base. All the states expect other types of users to benefit from access to the network. Iowa provides services to federal agencies, such as the U.S. Postal Service and Department of Veterans Affairs. Nebraska’s video network is open to community groups such as churches and chambers of commerce. Iowa and North Carolina are using their systems to conduct judicial hearings from remote locations. In addition, Iowa and Nebraska expect to use the availability of modern telecommunications as an economic development tool. Similarly, North Carolina hopes that making advanced services available to businesses will help the state attract and retain companies. In each of the states we studied, network users were expected to purchase and install the equipment needed to use the state networks. 
For example, in schools this equipment includes users’ equipment—cameras, monitors, and computers—and the network connection equipment that converts information, sounds, and pictures into a form that can be transmitted. In Iowa, local users are expected to pay for classroom equipment, but the state pays for network connection equipment for state and educational users, while federal government and medical users pay for their own connection equipment. North Carolina expects its sites to meet both costs. Nebraska expects sites to purchase the equipment needed to connect to the frame-relay system, but the state purchased the equipment for the video conferencing sites. In all three states, the sites use funds from a variety of sources to pay these costs, including capital budgets, grants, and private donations. Table 3.2 shows examples of the connection expenses that schools in each state must meet. Nebraska’s video costs are lower than Iowa’s and North Carolina’s, reflecting the state’s decision to use less-expensive technology. North Carolina’s State Controller believes that the communities’ expenses will decline as the state’s network technology matures and becomes more generally available. All three states found that some local sites needed assistance in paying for on-site equipment and offered such assistance using a variety of techniques. Iowa is using appropriated funds to help schools pay for local connection equipment. Nebraska has funded some educational connections through several sources. For example, it has created a School Technology Fund from funds available from a planned program to winterize the schools and proceeds from the state lottery. Grants from this fund will be used to help schools with small budgets pay to prepare rooms and connect with the frame-relay network. 
Also, the state’s Public Service Commission allowed telephone companies to use a tax windfall to help schools connect to the Internet instead of returning these funds directly to consumers. The North Carolina legislature created grants that can help local sites meet the cost of preparing rooms and connecting equipment. Of the first 132 sites planned to be connected in North Carolina, 115 received some form of state funding. States and communities have also used funds from federal programs to pay for users’ equipment and network connection equipment. For example, the Iowa National Guard used funds from the Department of Defense’s Advanced Research Projects Agency (ARPA) to link its 60 armories. North Carolina’s sites have also received grants from federal agencies, including ARPA, NTIA, and USDA. Table 3.3 lists examples of the use of federal assistance by states and localities for network development. According to an Iowa education official, federal funds have been key to Iowa’s ability to connect schools in a wide range of communities. A North Carolina official indicated that federal funds used for earlier state projects, such as a medical project partially funded by the National Science Foundation, contributed to their ability to plan and implement a statewide network. Two states—Iowa and North Carolina—are making the services more affordable by charging the same price for using the network at every location, even at remote locations that are more expensive to serve. According to the North Carolina Governor’s Office, North Carolina is committed to ensuring that those who need service most, including residents in remote rural areas, will not have to pay more for services than those in other regions. Iowa shares this commitment, stating that there will be no regional price penalties. As a result, residents in rural counties in Iowa and North Carolina can obtain services at the same rate as users in urban counties like those where Dubuque and Raleigh are located. 
Nebraska has not averaged rates for all of its users but has averaged costs for state agency users. The prices that local organizations pay for network services vary by state. Users in Nebraska pay lower fees than users in the other two states because Nebraska’s technology is less advanced. All of the states charge users by the hour to use their systems, but North Carolina also charges a fixed monthly fee. Table 3.4 illustrates the networks’ rates for services and how they are applied. In North Carolina, the state government is the telecommunication industry’s largest customer, and the state has used this position to purchase network services on behalf of other eligible network users. This strategy makes network use more affordable for local sites, allowing them to purchase network time at prices 25-30 percent lower than those available on the open market. The Nebraska state government is also purchasing large amounts of capacity and reselling it to regional educational facilities at prices that state officials said were lower than the facilities could negotiate by themselves. As a large customer, the state has also obtained discounts of approximately 50 percent from telephone companies for schools that are using the network. All three states have also used direct subsidies to make the services more affordable. Iowa currently subsidizes school sites, paying $35 of the $40 an hour that schools are charged for using the network, so that schools pay only $5 an hour. An earlier increase in the schools’ share led to a dramatic decrease in video usage, and the $5 rate was reinstated after school officials indicated that they were unwilling or unable to pay more. In order to encourage participation in the frame-relay network, Nebraska paid the usage charges for all of the regional educational facilities connected to this system during the first year of the network’s operation. In North Carolina, the legislature originally allowed state funds to be used for either site equipment or network costs. 
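The two affordability mechanisms described in this section—statewide rate averaging and per-hour subsidies—reduce to simple arithmetic, sketched below. The per-site cost figures are invented for illustration; only the $40-an-hour charge and $35-an-hour state subsidy are Iowa figures cited in the text.

```python
# Sketch of the two affordability mechanisms described in the text.
# Per-site cost figures are invented for illustration; the $40/hour
# charge and $35/hour state subsidy are the Iowa figures cited above.

def averaged_rate(costs_per_hour):
    """Uniform statewide rate: every site pays the average cost, so
    high-cost rural sites pay the same as low-cost urban ones."""
    return sum(costs_per_hour) / len(costs_per_hour)

# Hypothetical per-hour costs of serving four sites; rural sites cost more.
site_costs = [20, 25, 55, 60]  # dollars/hour: two urban, two rural
uniform = averaged_rate(site_costs)
print(f"Averaged statewide rate: ${uniform:.2f}/hour")

# Iowa-style subsidy: schools are charged $40/hour, the state pays $35,
# leaving the schools' out-of-pocket share at $5/hour.
charged, state_share = 40, 35
school_share = charged - state_share
print(f"School pays ${school_share}/hour of the ${charged}/hour charge")
```

Under rate averaging, the rural sites pay less than their cost of service and the urban sites pay more, which is how Iowa and North Carolina avoid imposing regional price penalties on remote users.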
For the current fiscal year, the legislature approved funds averaging $2,800 per month for each site to subsidize network costs for those sites already connected to the network. Although the three states are still in the early stages of developing their networks, each has made progress in making advanced telecommunications services widely available to its citizens through organizations such as schools, hospitals, and government agencies. Iowa has completed two of the three parts of its project. As of October 31, 1995, it had connected 157 sites, and it plans to connect 474 more sites by 2000. Nebraska had connected over 400 schools (kindergarten through 12th grade) as of February 1996 and is working with a number of communities to help them develop demand for new applications. North Carolina has connected over 100 of the 800 sites that the Governor’s Office estimated the state would connect by the end of 1999. However, in 1995 the legislature prohibited the use of state funds to connect additional sites without further legislative approval. Despite this progress, much remains to be done to make affordable advanced telecommunications services widely available. For example, despite each state’s emphasis on improving and equalizing education, none of the three states had succeeded in connecting half of its high schools by November 1995. Nebraska had made the most progress, connecting 140 of its 300 high schools. More of the states’ unconnected schools are located in rural counties, where students may be distant from urban centers. These counties also contain more of the states’ high schools (see fig. 3.2). In Iowa and Nebraska, the connected high schools are spread fairly evenly throughout the states. In North Carolina, however, a larger number of counties do not have even one high school with access to its advanced telecommunications network. Figures 3.3, 3.4, and 3.5 show the geographic distribution of the connected high schools in each state by county.
For all three states, maps showing the connected high schools relative to the total number of high schools in each county are presented in appendix I. The three states we studied recognized that planning and building an effective statewide advanced telecommunications network is an expensive undertaking that can require years to complete. Their experiences illustrate the importance of building and maintaining consensus among those parties that will be involved in constructing, financing, and using the network—the telecommunications companies, anticipated users, state legislators, and state executive branch officials. Addressing the concerns of these parties can help prevent the construction delays and increased costs that result from disagreements and financial constraints. These lessons can be used by other state policymakers as they begin or expand their own advanced telecommunications projects, as well as by federal policymakers who are considering what role the federal government should play in developing a national information infrastructure. Securing the involvement of the telecommunications companies, whose existing telephone and cable television systems can form the basis of an advanced telecommunications network, is a key step, participants told us. Without cooperation from these companies, a state can build its own network, as Iowa did, but it will incur substantial construction and maintenance costs. Company representatives stressed that a company will only invest in upgrading its infrastructure if it expects to recover its investment in a reasonable amount of time. Such investment did not occur in Iowa, where, according to state and private-sector officials, the companies viewed the project as risky and had doubts about the profitability of building the network. Several factors contributed to this assessment, including the perceived technological risk and uncertainty about whether other customers would pay for such services. 
Conversely, Nebraska and North Carolina were able to encourage private investment because they worked with the companies to ensure that their proposals made “business sense.” Both states involved the companies in decisions about the network’s design so they would know how much investment was needed to provide the anticipated services. In Nebraska, this process resulted in adopting a system using well-known technology, thus reducing both the initial investment and ongoing usage costs. In North Carolina, the companies agreed to provide the state with a system based on state-of-the-art technology that was more expensive to install and use but could have a longer useful life. In both cases, the states and the companies agreed on a system that they believed was technically feasible as well as cost-effective. Both also entered into long-term agreements with customers (namely, the state) to guarantee a stream of revenue that the companies could use to repay their initial investments and thereby reduce their risks. Also, by working with the state, the companies could prevent the introduction of a potential competitor by heading off the construction of a state-owned network like the one built by Iowa. Finally, the companies recognized that they could benefit from the networks by selling advanced services to private customers and by using the network to attract new customers and retain existing ones. Involving the potential users, including local educators and medical professionals, state agencies, and businesses and trade organizations such as chambers of commerce, is also important to ensure general agreement about what services the network should provide. If the system does not meet the needs of the anticipated users, deployment can be slowed, thereby increasing costs for those who are using the system. 
For example, while North Carolina involved potential users during the planning for its network, the project has experienced slower-than-anticipated acceptance by some users because of the high cost of using the system. One reason for this lower acceptance is that the system was designed to carry two-way video to multiple sites. However, some of the schools that the state anticipated would use the network wanted to buy only access to the Internet at higher speeds than were available over conventional telephone lines, which is a less expensive service to provide. As a result, some users were unwilling to pay for the capacity to send and receive video images when they would rather have had less expensive data connections. Since the rates the state pays the telephone companies were based on estimates of use that have not been met, these rates, and ultimately the rates charged to users, could go up to allow the telephone companies to recover their investment, further discouraging use of the statewide network. In Iowa, the development of the network was delayed by a disagreement over what services to offer. While the network was always intended to provide video communications for the state’s schools, disagreement arose about whether it should carry telephone calls. Iowa’s state auditor found that this lack of agreement caused several design changes, which slowed the progress of the network. As discussed in chapter 3, paying for equipment to connect to the network and paying the ongoing usage charges can represent a substantial investment by local users. The states expected local users to pay the costs associated with connecting to and using their networks. However, each state currently offers some type of financial assistance to help pay some of these costs. Only one of the three states, though, has approved enough funding to connect all the users it planned for. In Nebraska, the state plans to connect all elementary and high schools to the Internet by 2000.
The state legislature has approved $13 million for this purpose from a fund originally created to winterize the schools. According to a state education official, this amount should be sufficient to pay for connecting all of the state’s schools to the Internet. Iowa enacted a plan to connect 474 sites to its network by 2000 but initially appropriated funds to pay for about 100 sites through fiscal year 1996. North Carolina has also approved state funds to assist users through 1996 but has not approved funds to assist current users in future years or to connect additional users. If the states do not commit additional funding, there is no guarantee that the sites that want to participate later will get the same assistance that the current sites are getting. As a result, some local sites may be less likely to connect to the networks if they have to pay more of the costs themselves. Some local organizations have also used grants from federal agencies to help pay for connection equipment. However, under recent proposals, some of the programs that provided these funds may be eliminated. For example, there are proposals pending in the Congress to eliminate funds for the Department of Education’s Star Schools program, which helped pay for classroom equipment in Iowa. In addition, NTIA’s information infrastructure grant program, which serves mainly rural and disadvantaged urban areas and provided grants in Nebraska and North Carolina, has been proposed for elimination. Should these proposals be carried out, local users would have fewer funding sources available to help pay the costs associated with using the advanced communications technology. Each of the states planned to complete its advanced telecommunications project over a number of years. In Iowa and North Carolina, where state funding was planned as a major source of resources for the project, it was necessary to request funding approval from the state legislature several times. 
Since both projects based their plans on future appropriations, they experienced delays when they did not receive the level of funds they anticipated. In Iowa, the legislature originally approved about $50 million over 5 years to construct that state’s network. However, a series of reductions and redirections reduced that amount by over 50 percent to $23 million. In a report, the state auditor found that these shortfalls could impede the progress of the project. North Carolina’s legislature approved $4.1 million for the project as requested in fiscal year 1993. In fiscal year 1994, the governor requested an additional $5.3 million for the project, but the legislature rescinded the original appropriation and approved $7 million. A report by the state auditor concluded that this uncertainty about funding left potential users in a quandary in trying to plan for participation in the network. More recently the legislature appropriated $2.5 million for use through June 1996—again less than requested by the governor. In addition to providing less funding than requested, the legislature explicitly prohibited the use of state funds to connect additional sites to the network without further legislative approval. As a result, North Carolina has been able to connect far fewer sites than it had planned. Although Nebraska also planned a multiyear project, it did not rely on appropriated state funds. Instead, it was able to identify funding for its project from other sources, such as lottery proceeds and a one-time tax refund to telephone companies. Each of the projects in the states we visited spanned a number of years—longer than the individual terms of office of any of the elected officials in those states. According to those we spoke with, having someone who could serve as an advocate for the program despite changes in political leadership was helpful to maintaining the government’s support for the project. 
For example, in Nebraska, the director of the Division of Communications has worked on the project from its inception, through the governor’s two terms of office. Despite changes in legislative support, North Carolina’s project kept progressing, in part because of the efforts of the governor’s technology advisor, who had been involved in the design of the project since its inception. In Iowa, the governor has been in a position to advocate the state’s program for nearly 10 years as a result of being reelected to several consecutive terms in office. While an advocate can provide the vision that keeps the project on track, a lack of consistent and coordinated management can limit the effectiveness of the project. According to a report by Iowa’s state auditor, the lack of a consistent management structure was one of the problems that hindered the implementation of the state’s project. Responsibility for the project was initially split between the public television agency and the Division of General Services, which had three different administrators during the first 3 years of the project. It was not until 1994—3 years after network construction began—that the legislature enacted a formal management structure, the Iowa Telecommunications and Technology Commission, to oversee the network’s operations. Similarly, in North Carolina a 1992 performance audit report found that the state needed to restructure its governance of information technology and that it had not performed adequate planning for information technology. In response, the state formed the Information Resources Management Commission, which is responsible for setting state policy on information technology projects, including the statewide network. However, a 1995 report by the state auditor found that while progress had been made, the number of agencies and other organizations involved with the network raised the potential for problems due to a lack of coordination.
The report recommended that the state’s technology-related functions be further consolidated. The state controller, who provides the staff for the commission, did not concur with this recommendation on the grounds that the commission was formed to perform this function and that it was too early to evaluate its effectiveness. We discussed a draft of this report with senior officials in the three states we visited, including the Chief Operating Officer, Iowa Communications Network, and the Education Policy Advisor, Office of the Governor of Iowa; the Director, Division of Communications, State of Nebraska; and the Advisor to the Governor for Policy, Budget, and Technology and the State Controller in North Carolina. Each provided comments to clarify and update the draft, and we incorporated them where appropriate. The Iowa officials commented that we had not included enough detail about the technical capabilities of their network or the applications it supported. Because our report is intended for a non-technical audience, we did not include the technical language they proposed. We did, however, add information in chapter 3 about how the network is used beyond the specific education and medical applications we identified. The officials also pointed out that the state legislature is currently considering a proposal to provide $150 million in educational technology funds. We did not include this information because the proposal had not been adopted as of February 23, 1996, and because the funds would not necessarily pay for costs related to the network. The North Carolina officials told us that the 3,400 sites originally identified as potential connections to the network were meant to represent the maximum potential sites that could be connected and, as such, are not the project’s current goal. They said that the only official recommendation for the number of sites to be connected is the 1993 Governor’s Office estimate of 800 sites to be connected by 1999. 
We changed the draft to reflect this clarification. Officials with the State Controller’s Office commented that there was no need to further consolidate state information technology management, as recommended by the state auditor. They said that reorganization was unnecessary because the recently created Information Resources Management Commission, which is housed in the Controller’s Office, already performs that function. The state auditor, however, identified several other organizations that still have responsibility in this area. Officials with the Controller’s Office confirmed that the responsibilities of the organizations identified in the state auditor’s report have not changed. We clarified our discussion of this issue and noted that the controller did not concur with the auditor’s recommendation. The Nebraska official who reviewed the draft provided clarifying comments and updated data. Officials with NTIA, including the Director, Public Broadcasting Division, Office of Telecommunications and Information Application, also reviewed the draft. They told us that it accurately portrayed NTIA and its program.
Pursuant to a congressional request, GAO provided information on three states' initiatives to promote increased access and private investment in advanced telecommunications. GAO found that: (1) Iowa, Nebraska, and North Carolina encouraged telephone companies to make private investment in advanced telecommunications infrastructure by offering to become their major customers; (2) in 1987, Iowa financed and built its own network, since the telephone companies were reluctant to make the investment needed to provide these services; (3) by 1990, telephone companies in the other two states began upgrading their systems on their own and were more willing to make investments, since they realized that long-term arrangements provided a steady income and states were better customers than competitors; (4) to provide affordable access to many people, all three states are making advanced telecommunications services available through sites located in public facilities; (5) some states and federal agencies are assisting local organizations by paying some of the equipment and connection costs and reducing the rates for using these services, even at remote locations; (6) all three states plan to make these services available to many more sites over the next several years; (7) the three states have given high priority to connecting high schools to the network, but more than half of the high schools in these states remain unconnected and even more rural high schools are not connected; and (8) building and maintaining consensus among interested parties, addressing the concerns of these parties, and ensuring a stable source of funding will be important to prevent construction delays, promote present and future widespread use by local organizations, and ensure the successful implementation of the advanced telecommunications network.
Passenger trains, at one time the nation’s primary means of intercity transportation, began losing riders to automobiles in the 1920s and to airplanes in the 1950s and 1960s. Revenue losses on passenger services mounted, and in 1970, the final year in which these services were operated by private railroads, over $1.7 billion (in 1994 dollars) was lost on these services. Railroads had stopped investing in new passenger cars and facilities, and service declined, leading to the near demise of passenger rail service. To revitalize intercity passenger rail service, in 1970 the Congress created the National Railroad Passenger Corporation, commonly called Amtrak. Amtrak began operations on May 1, 1971, with the task of reversing the decline in intercity rail ridership and making up for years during which the railroads had neglected passenger operations. Until World War I, intercity travel generally meant train travel, and the railroads operated up to 20,000 passenger trains daily. After World War II, however, ridership declined dramatically, and by 1970 only about 300 intercity passenger trains were operated daily, providing only about 0.5 percent of intercity travel. In response to declining revenues, most railroads did not invest in new passenger equipment and allowed the service to deteriorate. Trains frequently failed to run on time, speeds were slow because of poor track and roadbeds, and equipment was old and prone to breakdowns. Intercity train travel was rapidly becoming extinct in the United States. The nation’s railroads, which focused primarily on freight service, had to obtain permission from federal or state regulators to abandon unprofitable passenger train services. Before passage of the Transportation Act of 1958, state regulatory commissions had the authority to discontinue passenger trains. 
Some state agencies allowed the railroads to discontinue unprofitable passenger operations, and between 1920 and 1958, service was discontinued for more than one-half of the railroads’ passenger miles. Because of the large decline in the number of trains, state commissions became more reluctant to allow the railroads to discontinue more services. The 1958 act transferred effective control over decisions to discontinue passenger services to the Interstate Commerce Commission (ICC). The ICC was concerned that continuing losses from passenger train operations could threaten the financial health of the entire railroad industry. It therefore allowed railroads to eliminate passenger operations that generated serious deficits. The Rail Passenger Service Act of 1970 called for the creation of “a for-profit corporation to provide intercity and commuter rail passenger service, employing innovative operating and marketing concepts so as to fully develop the potential of modern rail service in meeting the Nation’s intercity and commuter passenger transportation requirements. The Corporation will not be an agency or establishment of the U.S. Government.” The Congress gave Amtrak specific goals to (1) provide modern, efficient intercity railroad passenger service; (2) help alleviate the overcrowding of airports, airways, and highways; and (3) give Americans an alternative to private automobiles and airplanes to meet their transportation needs. The act required Amtrak to operate a basic passenger rail system that considered the passenger service that existed at the time the act was passed as well as opportunities to provide additional rail service that would be faster, be more convenient, and serve more population centers at a lower cost. Fig. 1.1 shows Amtrak’s route network. Amtrak’s first full year of operation was 1972. Although the route system today is not much larger than the one operated in 1973, it now carries about 30 percent more passengers, serves 22 percent more stations, and operates about 29 percent more train miles.
(See table 1.1.) In 1971, Amtrak took over routes from all but three of the railroads that were providing passenger service at the time. In return for relief from their obligation as common carriers to provide passenger service, the railroads turned over their passenger cars and locomotives to Amtrak. The original fleet of over 1,500 cars averaged more than 22 years in age. In 1994, Amtrak’s fleet of 1,900 cars still included about 435 of the original passenger cars. Amtrak eventually took over control of yards, stations, train service employees, and reservation offices. By 1976, most of Amtrak’s services, other than those provided by train and engine crews, were provided by Amtrak’s own employees. Using $120 million provided by the Congress, Amtrak purchased 103 miles of track between Philadelphia and Harrisburg, Pennsylvania; 83 miles joining Kalamazoo with Michigan City; 62 miles linking New Haven, Connecticut, and Springfield, Massachusetts; and 12 miles near Albany, New York. In addition, Amtrak acquired about 400 miles of the Northeast Corridor—between Washington, D.C., and Boston—from the estate of the bankrupt Penn Central railroad. The freight railroads, however, own the rest of the track over which Amtrak operates. The Rail Passenger Service Act grants Amtrak the right to use freight railroads’ tracks, and Amtrak is expected to pay the incremental costs that the freight railroads incur to maintain the tracks for passenger service. The freight railroads also provide dispatching and emergency services for Amtrak trains operating on their systems. In recent years, Amtrak has paid about $90 million annually for these services. Amtrak compensates the freight railroads according to individual agreements established in 1971. These agreements expire April 30, 1996. Amtrak’s primary function is operating the 25,000-mile intercity passenger rail route system that serves 44 states.
Amtrak operates about 350 trains to serve 6 routes in the Northeast Corridor, 17 short-distance routes (under 500 miles), and 20 long-distance routes. Thirteen of these routes are partially funded by eight states under section 403(b) of the Rail Passenger Service Act; three of these routes are fully funded by two states under section 403(d) of the act. Amtrak generated $1.4 billion in revenues in fiscal year 1994 and had $2.4 billion in expenses. Its federal subsidy was $909 million. Passenger-related services generated 65 percent of Amtrak’s revenues in fiscal year 1994 (see fig. 1.2) but covered only about 38 percent of the expenses. Another 19 percent of Amtrak’s revenues came from commuter rail service, which Amtrak operates in seven metropolitan areas under contract to state or local transportation authorities. Finally, Amtrak earns revenues from other activities, such as real estate development (including 30th Street Station in Philadelphia). Other revenues come from handling specialized freight, including U.S. mail and certain express deliveries, and leasing rights-of-way along the Northeast Corridor to telecommunications firms for data transmission lines, among other things. Over half of Amtrak’s operating expenses in fiscal year 1994 were for salaries, wages, and benefits to employees. (See fig. 1.3.) [Figure 1.2 data: Passenger-Related Services, $912.8 million; Commuter Services, $271.8 million. Figure 1.3 data: Salaries and Wages, $855.8 million; Benefits, $449.7 million; Train Operations, $358.4 million; Depreciation, $245.1 million (10 percent); Financial, $184.7 million (7 percent); Facility and Office Related, $153.0 million (6 percent); Other, $152.7 million (6 percent); Advertising and Sales, $90.8 million (4 percent).] Since 1971, the federal government has provided Amtrak with over $13 billion to support intercity passenger rail service. Figure 1.4 shows the amounts appropriated to Amtrak over the past 8 years.
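The revenue shares cited above can be checked against the dollar amounts reported in this chapter. A minimal arithmetic sketch (the $1.4 billion and $2.4 billion totals are rounded in the text, so the shares are approximate):

```python
# FY 1994 amounts reported in this chapter (millions of dollars).
total_revenues = 1400.0      # "$1.4 billion in revenues"
total_expenses = 2400.0      # "$2.4 billion in expenses"
passenger_revenues = 912.8   # Passenger-Related Services (fig. 1.2)
commuter_revenues = 271.8    # Commuter Services (fig. 1.2)

# Passenger-related services' share of total revenues.
print(round(100 * passenger_revenues / total_revenues))  # 65
# Commuter services' share of total revenues.
print(round(100 * commuter_revenues / total_revenues))   # 19
# Portion of total expenses covered by passenger-related revenues.
print(round(100 * passenger_revenues / total_expenses))  # 38
```

The results reproduce the 65, 19, and 38 percent figures cited in the text.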
In fiscal year 1994, Amtrak received federal funds through an operating and capital grant, a grant for the Northeast Corridor Improvement Program (NECIP), and a mandatory payment by the Federal Railroad Administration (FRA) to fund certain retirement and unemployment benefits. The capital grant pays for purchasing cars and locomotives; overhauling fully depreciated equipment (when the overhaul increases the value of the equipment); modifying equipment as required by law; upgrading facilities for maintenance, overhauls, and other work; and servicing debt. NECIP is a long-term capital improvement project that includes electrification of track between New Haven and Boston so that trains will be able to travel at speeds up to 150 miles per hour by the end of the century. NECIP is an ongoing project established in 1976 for the construction and upgrading of tracks, bridges, communications and signals, and electric traction between Washington, D.C., and Boston. Amtrak is required to participate in the railroad retirement and unemployment systems. Each participating railroad pays a portion of the costs for all retirement and unemployment benefits in the industry. Since Amtrak’s payments exceed the corporation’s specific retirement and unemployment costs, FRA has agreed to pay the excess costs for Amtrak. The Congress provided funding, but Amtrak’s management knew it was under pressure to reduce operating losses. Amtrak combined an aggressive cost-cutting strategy with new efforts at revenue enhancement to try to improve its revenue-to-expense ratio. This report’s objectives were to assess (1) Amtrak’s financial and operating conditions, (2) the likelihood that Amtrak can overcome its financial and operating problems, and (3) alternative actions that could be considered in deciding on Amtrak’s future mission and on commitments to fund the railroad.
In this report,
• chapter 2 describes the extent to which Amtrak’s financial condition has deteriorated;
• chapter 3 describes the increased costs facing Amtrak over the next several years that will make recovery from the financial and operating problems difficult;
• chapter 4 assesses the extent to which revenues are likely to increase, pay for these increased costs, and improve Amtrak’s financial condition; and
• chapter 5 assesses the likelihood that Amtrak will be able to continue to provide nationwide service at the current federal subsidy level. It evaluates alternative levels of service and funding and the potential long-term results of these alternatives.
We performed this work as part of our legislative responsibilities under the Rail Passenger Service Act to conduct performance audits of Amtrak’s activities and transactions. In addition, five congressional committees expressed interest in this work. The issues raised by the committees, along with our objectives, scope, and methodology, are discussed in detail in appendix I. The organizations that we contacted in the course of our review are listed in appendix II. We report dollars as constant 1994 dollars unless otherwise noted. We provided Amtrak with a draft of our report for comment. Amtrak’s principal observations and our responses to these observations are provided in chapter 5, and its written comments appear in full in appendix IV. We performed our work between August 1993 and December 1994 in accordance with generally accepted government auditing standards. We testified in early 1994 that Amtrak’s financial condition had deteriorated steadily since 1990, causing a decline in the quality of service. Since our testimony, Amtrak’s condition has worsened. As service quality has deteriorated, Amtrak has had more difficulty attracting and retaining riders, resulting in further revenue losses and exacerbating the problem.
All of Amtrak’s intercity train service loses money—no route or portion of a route comes close to breaking even when capital costs are considered. For several years, revenues have been substantially less than forecast, resulting in larger operating deficits than budgeted. The operating deficits have exceeded federal operating subsidies by a total of about $175 million since 1990. Amtrak’s actions to reduce operating expenses have not been sufficient to offset the shortfall in revenues. To cover the gap between the operating deficits and federal operating subsidy, Amtrak has drawn down its cash resources. In 1987, its working capital had a positive $113 million balance; by the end of fiscal year 1994, the balance was a negative $227 million. Nevertheless, while its financial condition was deteriorating, Amtrak reported that its revenue-to-expense ratio was improving. This reported ratio was a misleading indicator of Amtrak’s financial condition because it omitted many important expenses and showed an improving trend when Amtrak’s financial condition was deteriorating. Amtrak’s management believes that the railroad’s financial condition will continue to deteriorate and estimated a cash shortfall of about $200 million by September 30, 1995. The situation had become so alarming by December 1994 that Amtrak’s Board of Directors approved large reductions in service and staffing. These actions, along with anticipated increases in productivity, are expected to eliminate the 1995 cash shortfall. However, even if these proposed actions are successful in bringing costs in line with revenues and grants for 1995, they will not solve Amtrak’s longer-term problems, such as the need for significant capital investment, which we discuss in chapter 3. Finally, without increased funding, Amtrak expects the gap between its deficit and the operating subsidy to reappear in 1996 and to produce a $1.3 billion shortfall through the year 2000. 
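Why an incomplete revenue-to-expense ratio can mislead is easiest to see with numbers. The sketch below uses hypothetical amounts, not Amtrak's actual figures, to show a ratio "improving" while the true deficit grows, simply because some expenses are left out of the denominator:

```python
# Hypothetical two-year comparison (millions of dollars). These amounts
# are illustrative only; they are not Amtrak's reported figures.
years = [
    # (revenues, expenses included in the ratio, expenses omitted from it)
    (1300.0, 1700.0, 500.0),  # year 1
    (1350.0, 1720.0, 600.0),  # year 2
]

for revenues, included, omitted in years:
    reported_ratio = revenues / included          # what the ratio shows
    true_deficit = included + omitted - revenues  # the actual shortfall
    print(round(reported_ratio, 3), round(true_deficit, 1))
# Output: 0.765 900.0
#         0.785 970.0  (the ratio rises while the deficit grows)
```

Under these assumptions the reported ratio rises from 0.765 to 0.785 even though the overall shortfall worsens from $900 million to $970 million, which is the pattern the text describes.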
Over the last several years, the revenues that Amtrak projected have not materialized. As a result, the operating deficits have been larger than the federal operating subsidies. Although Amtrak cut planned spending and developed strategies to conserve cash, these actions have not compensated for the shortfall in revenues. Moreover, Amtrak has not specifically budgeted for operating contingencies that interrupt service, such as accidents and the flood in the Midwest in 1993. As a result of shortfalls in revenues and losses stemming from these events, Amtrak has had to draw down cash resources and take on additional short-term debt. Each year Amtrak forecasts ridership and revenues to develop its request for federal operating grants and to set planned spending levels. From fiscal year 1991 through fiscal year 1994, Amtrak overestimated passenger revenues by a total of $600 million. (See fig. 2.1.) In fiscal year 1994, actual passenger revenues were about 7 percent below those for fiscal year 1993 and about 22 percent less than forecast. Over the past 4 years, Amtrak has consistently overestimated its passenger revenues in terms of both ridership and yield (revenues per passenger mile). For its 1993 grant request, for example, Amtrak estimated $1.092 billion in passenger revenues—overestimating actual passenger revenues by $149 million. For 1990-94, Amtrak estimated that yield would increase with inflation. In fact, yield declined after adjusting for inflation. Management also contributed to overestimates of revenues by adjusting upward the results of a ridership forecasting model because of a belief that the nation’s general economic improvement would lead to increased ridership. Finally, in adjusting the model results, Amtrak did not consider the extent to which events—such as floods and accidents—might occur in the upcoming year, depressing revenues. 
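The distinction between nominal yield and inflation-adjusted yield can be made concrete with a small sketch. The yields and inflation factor below are hypothetical, chosen only to illustrate how a yield that rises in nominal terms can still fall in real terms:

```python
# Yield = passenger revenues per passenger mile (here, cents per mile).
# All numbers are hypothetical, for illustration only.
nominal_yield_start = 12.0   # assumed yield at the start of the period
nominal_yield_end = 12.5     # assumed yield at the end (nominally higher)
cumulative_inflation = 1.12  # assumed 12 percent price growth over the period

# Deflate the ending yield into start-of-period dollars.
real_yield_end = nominal_yield_end / cumulative_inflation
print(round(real_yield_end, 2))  # 11.16, below the starting yield of 12.0
```

The nominal series rises, but after deflating, the real series falls below its starting level, so a forecast that assumed yield would keep pace with inflation would overstate revenues.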
In fact, during 1993 and 1994, Amtrak was involved in several major accidents and sometimes had to cancel trains because of severe weather, which caused revenues to decline. In preparing its budget for fiscal year 1995, Amtrak did not rely on its ridership forecasting model. Because it overestimated revenues, Amtrak underestimated its operating deficits and therefore requested smaller federal operating subsidies than it needed. In fiscal years 1990 through 1994, Amtrak’s operating deficit exceeded the railroad’s federal operating subsidy by an average of about $36 million per year. (See fig. 2.2.) This gap has forced Amtrak to reduce operations. These reductions have contributed to poorer service quality (fewer on-board service personnel) and delayed or deferred maintenance of equipment. Ultimately, such actions threaten Amtrak’s ability to provide high quality rail service and to compete effectively for customers. As a result of the shortfall in revenues and increased expenses, Amtrak’s net loss in fiscal year 1994 grew to over $1 billion, while the railroad’s operating deficit exceeded the operating subsidy by $76 million. These operating losses, combined with capital expenditures, caused a cash shortfall of about $50 million as of September 30, 1994. For fiscal year 1995, Amtrak projects that the gap between its operating subsidy and operating deficit will grow to $193 million. Amtrak officials believe this situation will result from a 5-percent decrease in passenger revenues in 1995 and a $145 million increase in expenses. As a result of the gap between the operating subsidy and operating deficit along with planned capital expenditures, in the fall of 1994 Amtrak estimated a cash shortfall of about $200 million by September 30, 1995. In addition, Amtrak does not specifically budget for accidents and weather-related emergencies. The added costs of these problems exacerbate Amtrak’s deteriorating financial condition. 
For example, in September 1993 the Sunset Limited derailed in Saraland, Alabama, and Amtrak incurred (1) additional costs to repair equipment and pay liability judgments and (2) a shortfall in revenues as a result of equipment shortages. In addition, whenever service is disrupted because of weather or accidents, Amtrak pays the costs its passengers incur to get to their destinations by another mode of transportation. Amtrak has estimated that in 1993 and 1994, such problems cost a total of about $95 million in lost revenues and $11 million in additional expenses. To address the shortfall in revenues, Amtrak cut back budgeted expenses by reducing the number of management staff by 10 percent in 1991, by reducing train and station staffing levels in 1992, and by decreasing heavy maintenance programs for cars and locomotives in 1993. According to Amtrak, these cutbacks were expected to save about $77 million in fiscal years 1991-93. In 1993, Amtrak took steps to conserve cash by reducing inventory, requiring advance payment for work that Amtrak performed for others, and delaying payments made to others by 15 days. From the beginning of 1993 to the end of 1994, Amtrak’s investment in inventory declined from $147 million to $135 million. Finally, Amtrak requested and received a supplemental federal grant of $45 million ($20 million for operating expenses and $25 million for capital expenses) in fiscal year 1993. To offset the continuing shortfall in passenger revenues in fiscal year 1994, Amtrak again initiated actions to reduce budgeted expenses by about $90 million. These actions included reducing (1) the frequency of service on some routes, (2) the number of passenger support staff, and (3) general overhead costs. However, Amtrak’s actual expenses in 1994 exceeded budgeted expenses by over $100 million. By September 30, 1994, Amtrak had increased its short-term lines of credit to $120 million and borrowed $60 million. 
In addition, by that date Amtrak’s long-term debt for capital projects exceeded $650 million. Despite Amtrak’s actions, the gap between operating deficits and federal operating subsidies continued to grow. As a result, Amtrak drew down its working capital, which fell from $113 million in fiscal year 1987 to a $227 million deficit at the end of fiscal year 1994 (see fig. 2.3), a decline of $371 million in 1994 dollars. Continued reductions in working capital will jeopardize Amtrak’s ability to pay immediate expenses. Amtrak’s financial situation became so alarming that on December 14, 1994, Amtrak announced an aggressive plan to eliminate cash deficits in fiscal year 1995 and attempt to put the corporation on a sound financial footing. Amtrak’s management saw only two options—to abandon the nationwide intercity rail system or to minimize losses through improvements in productivity and reduction in routes and service. The Board of Directors directed that Amtrak eliminate 3 routes and segments of 10 others and reduce service frequencies (the number of trains per week on a given route). As a result, about 20 percent of Amtrak’s nationwide service (in train miles) is to be eliminated. Amtrak plans to negotiate with the affected states and could retain service where the states are willing to subsidize the losses on the routes. These reductions should allow Amtrak to eliminate about 5,000 jobs and remove most of its oldest cars from service. Amtrak expects that these actions will improve the on-time performance of its remaining trains and lower its operating costs. The plan also calls for generating additional revenues through adjusting ticket prices, developing commuter operations and other businesses, and selling or refinancing assets. These and other actions to enhance productivity are expected to produce an annualized net savings to Amtrak of $364 million, by reducing costs by about $430 million annually while forfeiting only $66 million in revenues. 
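The plan’s annualized savings figure follows directly from the cost and revenue effects cited above; a minimal check of the arithmetic:

```python
# December 1994 restructuring plan, annualized effects (from this chapter):
cost_reduction = 430e6     # annual operating costs eliminated by route and service cuts
revenue_forfeited = 66e6   # annual revenues lost on the dropped routes

# Net annualized savings = costs avoided minus revenues given up.
net_savings = cost_reduction - revenue_forfeited
print(f"net annualized savings: ${net_savings / 1e6:.0f} million")
```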
Amtrak expects these actions to result in savings of about $200 million in 1995, eliminating the expected cash deficit for 1995. These actions, however, will not eliminate Amtrak’s need for federal operating and capital grants now or in the future. Revenues will continue to fall short of expenses on most routes. Furthermore, collective bargaining and/or legislative changes might be required before approximately 26 percent of the anticipated savings can be achieved, according to Amtrak. In addition, the plan does not account for implementation costs, such as pay protection for employees whose jobs are eliminated as a result of route closures, or for capital investment needs that continue to grow. Finally, Amtrak projects that after 1995, if federal grants remain at current levels, the deficit will exceed operating grants by $1.3 billion through the year 2000. These issues are also discussed in chapters 3, 4, and 5. Amtrak has calculated and reported a revenue-to-expense ratio to demonstrate that its operating revenues were covering a larger share of its operating expenses from year to year. As figure 2.4 shows, this ratio improved from fiscal year 1982 to fiscal year 1991 but has since leveled off. Amtrak reported that revenues covered about 53 percent of expenses in fiscal year 1982 and about 80 percent in fiscal year 1993. However, in calculating this ratio, Amtrak did not include all relevant expenses. Because Amtrak deferred certain costs, including some maintenance expenses, the ratio gave a misleading picture of Amtrak’s financial health. Amtrak officials agreed with our assessment that this ratio should not be used and stopped reporting it in fiscal year 1994. 
In calculating the revenue-to-expense ratio, Amtrak has excluded certain expenses, including (1) depreciation, (2) mandatory payments for railroad retirement and unemployment insurance made by FRA on Amtrak’s behalf, (3) various federal and state taxes, (4) user fees paid to FRA for track inspections and other activities, (5) miscellaneous expenses relating to accident claims, (6) losses incurred in providing 403(b) service to the states, and (7) disbursements for labor protection. Amtrak is required by statute to exclude losses under section 403(b). It excluded other expenses for various reasons, including a belief that some items (such as labor protection payments) do not represent operating costs. If all these excluded expenses had been included, the ratio in 1993 would have been 66 percent—14 percentage points less than Amtrak reported. The revenue-to-expense ratio is a misleading indicator of a firm’s financial condition when the firm defers or forgoes expenses to show improved performance. Amtrak has deferred maintenance on rolling stock and equipment and reduced other expenses. While these actions have resulted in short-term improvements to the revenue-to-expense ratio, they have long-term implications for Amtrak’s viability. This ratio was also misleading because it showed an improving trend when Amtrak’s financial condition was deteriorating. Furthermore, as figure 2.5 illustrates, the overall gap between revenues and expenses has been widening since 1988. Amtrak’s financial condition has deteriorated over the last several years, and the problems accelerated in 1994. Actual revenues have been less than Amtrak projected and expenses have been higher than expected. As a result, federal subsidies have not fully covered Amtrak’s operating deficits. This condition has adversely affected Amtrak’s supply of cash, which is needed to pay bills and provide high quality service. 
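The effect of the excluded expenses on the reported ratio, discussed above, can be reproduced with a short sketch. The chapter gives only the 80 percent and 66 percent ratios, so the dollar amounts below are assumptions chosen to be consistent with them, not actual figures.

```python
def revenue_to_expense_ratio(revenues, expenses):
    """Share of operating expenses covered by operating revenues."""
    return revenues / expenses

# Hypothetical fiscal year 1993 figures, chosen only to be consistent
# with the ratios cited in this chapter (assumed, not actual):
revenues = 1.400e9            # operating revenues
included_expenses = 1.750e9   # expenses Amtrak counted in its ratio
excluded_expenses = 0.371e9   # depreciation, retirement payments, taxes, etc.

reported = revenue_to_expense_ratio(revenues, included_expenses)
adjusted = revenue_to_expense_ratio(revenues, included_expenses + excluded_expenses)

print(f"ratio as reported:            {reported:.0%}")
print(f"ratio with excluded expenses: {adjusted:.0%}")
```

Adding the excluded expenses to the denominator is what drops the ratio by 14 percentage points; the direction of the effect does not depend on the assumed dollar amounts.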
Amtrak’s financial deterioration worsened in 1994 in the wake of accidents and lower-than-expected revenues. Amtrak had expected that the general economic recovery would improve its financial situation, but this did not occur. Although Amtrak reduced its budgeted expenses, continued declines in revenues, cash, and working capital threaten Amtrak’s ability to provide high quality intercity passenger rail service and compete effectively for customers. The service reductions and other actions that Amtrak plans to take in 1995, if implemented, will bring Amtrak’s costs in line with revenues and grants for that year. However, these planned actions will not solve the corporation’s longer-term problems, which we discuss in chapter 3. The Congress needs better information on how Amtrak estimates revenues from passenger service, not only to determine the amount of federal subsidy, if any, to provide but also to make decisions about Amtrak’s future. In chapter 5 we make recommendations to the President of Amtrak to improve the information provided to the Congress. The depletion of Amtrak’s physical assets poses an even greater threat to the railroad’s financial well-being than the current shortfalls in operating funds. Operating a passenger railroad is an inherently costly undertaking. The advanced age and poor condition of Amtrak’s rolling stock (locomotives and cars) and overhaul facilities make it expensive to maintain the fleet. While purchases of new equipment will ease the burden of maintenance and overhauls, the funds to pay for this equipment must come from already-strained capital budgets. Amtrak may also face substantial additional costs to comply with environmental laws and to pay freight railroads for the use of their track. Amtrak is unlikely to be able to address these challenges under the current operating environment and at the current funding level. 
Reductions in routes announced in December 1994 should allow Amtrak to retire most of its oldest passenger cars. But, according to new estimates, the investments needed in infrastructure will more than offset any savings. Today, Amtrak’s fleet is about as old as the aged fleet the corporation inherited from the freight railroads over two decades ago. (See fig. 3.1.) Aging equipment requires more extensive repairs, and Amtrak spends over $400 million annually to repair, maintain, and overhaul this equipment. Nevertheless, that amount understates Amtrak’s true costs for maintaining the equipment because Amtrak has routinely deferred equipment overhauls to cope with funding shortages and performed less comprehensive work when overhauling some cars. Amtrak introduced a new maintenance/overhaul program in 1994 to reduce this backlog and improve the fleet’s operating condition. However, this program is already underfunded, and new backlogs will inevitably result. A significant portion of Amtrak’s equipment—31 percent of the cars and 54 percent of the locomotives—is beyond its useful life. About 23 percent of Amtrak’s 1,900-car fleet consists of Heritage passenger cars that Amtrak obtained in 1971 from other railroads. These cars now average over 40 years old—much older than the 25- to 30-year average expected useful life estimated by Amtrak’s Chief Mechanical Officer. The remaining passenger cars (Amfleets, Horizons, Superliners, Turboliners, and Viewliners) in total average 14.2 years—about half-way through their useful life span. (See fig. 3.2, which shows the fleet’s composition and average age by car type, including Amfleet II, 8 percent of the fleet (12.0 years); Capitoliner, 1 percent (27.0 years); Horizon, 5 percent (5.0 years); Superliner I (14.0 years); Superliner II, 3 percent (0.4 years); Turboliner coach, 2 percent (18.6 years); and baggage/autocarrier (22.9 years).) As equipment ages, it breaks down more often and requires more extensive repairs. 
In an August 1994 study, Amtrak found that failures of Heritage cars cause more train cancellations and longer delays than failures of any other type of car. Heritage cars that fail are out of service awaiting parts about three times longer on average than newer cars. Needed parts for Heritage cars must often be manufactured because they can no longer be purchased off the shelf. Moreover, because there are 27 different Heritage models, Amtrak cannot produce replacement parts in economical quantities. As a result, the cost per seat to perform a limited “intermediate” overhaul on a Heritage car can be 25 to 50 percent higher than the cost to undertake a complete heavy overhaul on a Superliner. Before December 1994, Amtrak planned to replace some Heritage cars as 245 new Superliner and Viewliner cars were delivered by 1997. About 200 Heritage cars would have remained in service. Now, however, Amtrak expects to retire all but a few “specialty” Heritage cars (diners, cab cars) as it reduces service frequencies and eliminates routes in fiscal year 1995. According to Amtrak, this will have a significant impact on the costs to maintain equipment and out-of-service rates. Before fiscal year 1994, Amtrak’s goal was to maintain all its cars through a program of periodic preventive maintenance and regular heavy overhauls (every 3 to 4 years). These overhauls can cost about $300,000 for each car. In comparison, a new car costs about $2 million. However, to cope with its deteriorating financial condition, Amtrak began deferring maintenance in the late 1980s. Amtrak also began using capital funds in 1992 to overhaul many older cars (Heritage and Amfleet I models) and locomotives. Even with the infusion of capital funds to overhaul these cars, overhauls were past due for nearly 40 percent of the fleet by September 1993. (See fig. 3.3.) As the backlog grew, Amtrak officials recognized that the railroad’s preventive maintenance/overhaul program was not adequate. 
Cars looked shabby and were breaking down with increasing regularity. When the fiscal year 1994 budget provided no increase in funding, the Mechanical Department began to implement a new “progressive maintenance” program in October 1993. Under this program, Amtrak performs heavy maintenance when a car breaks down and also a limited overhaul each year on every car. Every third year, the annual overhaul will be more comprehensive but still less extensive than the heavy overhaul performed before. Only Amtrak’s newer passenger cars will be maintained under this program. The remaining Heritage cars and Turboliner coaches will continue to receive preventive maintenance and/or limited overhauls as before. The progressive program is intended to keep more cars in service—thereby generating more revenues—but is not expected to reduce overall maintenance costs. By the end of fiscal year 1994, all Amfleet, Horizon, Superliner, and Viewliner cars were eligible for inclusion in the progressive program. However, Amtrak is already falling short of reaching its goals under this program. While 1994 was a transition year, fewer cars than required received annual overhauls because of budget reductions. Also, maintenance officials have found that the program is not appropriate for all these cars. For example, increased customer complaints about the condition of Amfleet cars in the Northeast Corridor have prompted Amtrak to return these cars to a scheduled maintenance program in fiscal year 1995, although they will continue to receive overhauls under the progressive program. Furthermore, the progressive program was intended to use only operating funds, but Amtrak has already found it necessary to allocate 1995 capital funds for the heavier 3-year overhauls of the Amfleet I cars. 
As Amtrak implements the progressive program, two of its three overhaul facilities—located at Bear, Delaware, and Beech Grove, Indiana—will need to handle significantly more cars than they did under the previous heavy overhaul program. Both facilities fell behind in performing overhauls in the past because funds were not available. The Bear facility is responsible for annual progressive overhauls on Amtrak’s 629 active Amfleet cars, which provide service primarily in the Northeast Corridor. In fiscal year 1993 (the last full year of heavy overhauls), the facility overhauled 59 cars, but it had overhauled as many as 152 cars per year in earlier years. Officials said that this facility has the physical capacity to perform the 629 overhauls required under the progressive program, but only if its production staff is increased by 90 from its current level of 183. The Beech Grove facility is responsible for overhauling nearly 1,200 cars and 265 locomotives that operate outside the Northeast Corridor. In fiscal year 1993, this facility performed heavy overhauls of 117 cars and 50 locomotives—about 47 percent less than the number required to meet the established overhaul cycles. During 1995, it would need to perform at least 166 heavy overhauls and 350 progressive overhauls of the cars while continuing to overhaul an average of 66 locomotives to keep up with the established schedule. However, the current budget for Beech Grove will fund only about 63 percent of these overhauls. Furthermore, Amtrak’s new Superliners and Viewliners will add to Beech Grove’s workload as they become eligible for annual overhauls starting in 1996 and 1997, respectively. While the per-unit cost of a progressive overhaul is substantially less than the cost of a heavy overhaul, the total cost to Amtrak to fully implement the new program will be greater because so many more cars will be overhauled each year. For fiscal year 1995, Amtrak needs an additional $31.2 million to do this work. 
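The Beech Grove funding gap implied by these figures can be tallied directly. The funded-unit count below is an illustration that simply applies the 63-percent budget coverage cited above to the combined fiscal year 1995 workload:

```python
# Fiscal year 1995 overhaul workload at Beech Grove (from this chapter):
heavy_car_overhauls = 166
progressive_car_overhauls = 350
locomotive_overhauls = 66

required = heavy_car_overhauls + progressive_car_overhauls + locomotive_overhauls

# The current budget funds only about 63 percent of these overhauls.
funded_fraction = 0.63
funded = round(required * funded_fraction)

print(f"overhauls required: {required}")
print(f"overhauls the budget covers: about {funded}")
print(f"unfunded units: about {required - funded}")
```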
The depletion of Amtrak’s physical assets—maintenance and overhaul facilities, rolling stock, rights-of-way in the Northeast Corridor, and other support assets—is perhaps a greater threat to the railroad’s financial well-being than the current shortfall in operating funds. Over the past 10 years, Amtrak’s equipment and facilities depreciated at the rate of $200 million per year, while investment averaged only $140 million annually. Amtrak currently estimates that even with the reduced route system announced in December 1994, it needs capital investment of over $4 billion to purchase rolling stock and to bring the infrastructure into a state of good repair. (See fig. 3.4.) This amount does not include money needed for equipment and facilities for high-speed service in the Northeast Corridor. The largest categories shown in figure 3.4 are maintenance of way ($2.4 billion) and maintenance of facilities ($200 million, or 5 percent of the total). Maintenance of way costs include repairs needed on the Northeast Corridor right-of-way but exclude funds needed for new facilities and for completion of the electrification project to allow high-speed service on the north end of the corridor. Payments on equipment purchased in 1991-93 reflect the amount needed for principal and capital interest from 1995 to 2004. Regular interest (paid out of Amtrak’s operating funds) is not included. Amtrak will need an additional $615 million for principal payments on this equipment between 2005 and 2017 to retire the debt. These capital needs total $4.3 billion. Amtrak’s capital subsidies have not been sufficient for the railroad to attempt this level of investment. Amtrak borrowed much of the capital needed to replace some of its oldest cars and locomotives with new equipment ordered in the early 1990s, and a significant portion of its annual capital subsidy must be used to pay this debt. The limited remaining capital funds are generally committed to short-term projects that enable Amtrak simply to keep its equipment operating and complying with federal laws. 
Very little is left over to invest in projects that might increase revenues, improve the efficiency of operations, or increase the capacity and productivity of overhaul facilities. The ability of Amtrak to improve its facilities, overhaul its oldest cars and most of its locomotives, and purchase new equipment depends wholly on its federal capital grant. The demands on these funds far exceed the grants provided over the past few years. Amtrak received $560 million in capital funding from 1992 to 1994. According to Amtrak, it must now set aside a sizable portion of the subsidy to pay for the rail equipment and computers it purchased and facility improvements it made with borrowed funds in the past, as well as for legally mandated equipment modifications and environmental cleanup efforts. Additionally, Amtrak uses an increasing amount of its capital funds to overhaul cars and locomotives, leaving only a small amount for all other capital replacement needs. (See fig. 3.5.) Amtrak allocated $14.6 million of its capital funding to overhauls in fiscal year 1992, $55.7 million in fiscal year 1993, and $70 million in fiscal year 1994. As a result, less money was available for capital investment in new equipment or new infrastructure. In its capital subsidy request for fiscal year 1995, Amtrak identified more than $800 million in needed capital spending, of which $195 million will be used to pay off debt, comply with federal laws on equipment modifications, or fund capital overhauls. However, only $230 million in capital funds was appropriated. In fiscal year 1995, Amtrak will have only $35 million to invest in new capital projects like facility renovations and major repairs to track and rights-of-way—down from more than $100 million in 1993. Furthermore, Amtrak recently recognized that the need to reinvest in the Northeast Corridor right-of-way (track, signals, and auxiliary structures) is becoming critical. 
It now estimates that at least $2.5 billion will be needed to bring this infrastructure into a state of good repair. Much of this investment is needed on the south end of the corridor, from Washington, D.C., to New York, where Amtrak operates high-speed trains and has captured the largest share of the transportation market. Amtrak will use $115 million of its $200 million NECIP appropriation for fiscal year 1995 to improve track, signals, structures (e.g., bridges), electric traction (catenary and related power structures), maintenance-of-way equipment, and tunnels in this part of the corridor. According to Amtrak’s Chief Engineer, however, these improvements are only a small fraction of what needs to be done. The remaining $85 million of the NECIP appropriation will be used for electrification and improving track and facilities on the north end of the corridor, as well as for purchasing high-speed train sets (see ch. 4). Amtrak owns and/or operates 18 facilities where cars and locomotives are maintained and overhauled. Amtrak has developed a 10-year master plan for improving these facilities so they can accommodate future requirements, including the new progressive maintenance/overhaul program. The plan is estimated to cost $326 million. Five facilities need substantial renovation and/or modernization—the three overhaul facilities at Beech Grove, Indiana, and Wilmington and Bear, Delaware, and two divisional repair facilities in Los Angeles and New York. Together, these facilities will require $262 million to renovate or replace structures, improve efficiencies in operation, repair damage caused by earthquakes, and build new structures. The facility at Beech Grove needs not only the increased funds to perform the progressive overhauls discussed above but also upgrades to its physical plant if it is to meet Amtrak’s future overhaul requirements. Beech Grove is responsible for overhauls of and repairs to 61 percent of Amtrak’s total fleet. 
The facility is nearly 100 years old and is in very poor condition. Much of the on-site track was installed in the early 1900s and has deteriorated, resulting in frequent derailments. The facility was not designed for production-line overhauls of cars. The buildings are run-down, and some cannot accommodate the work for which they are used. For example, Amtrak’s newest diesel locomotives are too large to fit inside the locomotive shop building, and the shop’s cranes are not large enough to lift the locomotives so that wheel sets can be removed. In 1990, Amtrak initiated a five-phase modernization plan to correct some of Beech Grove’s problems. By September 1993, about $12 million of the total cost of $47 million had been spent on such projects as combining the truck and forge shops, improving a coach production line, constructing employee welfare facilities (lunchrooms, rest rooms, and locker rooms), replacing roofs, and replacing or rehabilitating overhead cranes. In August 1994, Amtrak committed an additional $1.9 million to repair Beech Grove’s transfer table and repair or replace some of the track most critical to the facility’s operation. These projects will help Beech Grove perform its required overhauls and reduce derailments at the facility. (See fig. 3.6.) However, many renovations and modernization projects remain unfunded, including a new warehouse and distribution system for material, modifications to the locomotive shop, additional replacement and rehabilitation of track, and a new wheel shop. With the introduction of the progressive overhaul program, which presents even greater challenges to the facility’s operations, Beech Grove officials have drafted a new modernization plan to accomplish these projects and other work. First-year costs of $9.8 million have been budgeted for fiscal year 1995, but $28.4 million more is needed to complete the work. Of all Amtrak’s repair facilities, Sunnyside Yard in New York City is most in need of improvement. 
This facility has almost no maintenance structures. Virtually all work is performed outside, so that equipment and personnel are exposed to the elements. The single service building for performing minor maintenance (called “running repairs”) can accommodate only 8 cars at a time and is thus barely adequate for the 12 to 15 cars needing repair each day. The only locomotive repair building at Sunnyside was recently condemned because of chemical contamination; it will cost $550,000 to remove this building and the related contamination. In addition, because Sunnyside lacked auxiliary power hook-ups to allow car heaters to operate while the cars were being serviced in the yard, the plumbing systems on over 50 cars froze and broke during the winter of 1993-94. Repairs to these cars cost about $1.8 million. Amtrak estimates costs of more than $100 million for improvements at Sunnyside, including (1) constructing a new service and inspection building and a car-cleaning facility, (2) realigning track and constructing new storage track, and (3) completing necessary environmental cleanup. As stated previously, maintenance and repair costs remain high because of the large number of aging Heritage cars. Between 1991 and 1993, Amtrak purchased 245 Superliner and Viewliner cars and 72 locomotives for $743 million and $181 million, respectively. These cars should be cheaper to maintain than the Heritage cars they will replace because they have standardized parts and modular components to allow for easier repair. Amtrak estimates that it will save $341 annually in maintenance costs for every seat in a Heritage car replaced by a seat in a Superliner. Also, most of the cars are designed with many more seats per car than the cars they are replacing, potentially adding passenger revenues without adding more cars to the train. Amtrak believes that passengers will be more likely to travel by rail if they can ride in newer, more modern Amtrak cars. 
The new locomotives will provide greater power, will increase fuel efficiency, and should contribute to better on-time performance. However, in addition to the new equipment that is now being delivered, Amtrak has estimated it will need over 700 more new cars and locomotives in the future. Its current long-term equipment acquisition plan, approved in November 1992, includes estimates for replacing all remaining Heritage cars with Viewliners and replacing some of the nonpassenger cars and locomotives that are in poor condition or are nearing the end of their useful lives. Amtrak estimates that it will need to purchase 299 locomotives and 416 cars at a cost of about $1.5 billion from 1994 to 2002. While these needs will be reassessed following the service reductions in 1995, Amtrak officials believe that a large portion of the locomotive fleet will still need to be replaced within the next 5 years. According to Amtrak’s President, it is very difficult to commit to long-term capital projects because of uncertainty about future funding. As a result, Amtrak tends to focus on short-term operations and is less able to invest in projects that would create long-term operating efficiencies. Capital grants are appropriated annually and often fluctuate from year to year. (See fig. 3.7.) As stated above, a significant amount of money is needed just to repay debt and pay for capital overhauls and equipment modifications. Also, the Committees on Appropriations have, in recent years, placed restrictions on when Amtrak is allowed to withdraw its money from the Treasury, usually mandating that all withdrawals be delayed until the fourth quarter of the fiscal year. Labor costs could increase as Amtrak begins to renegotiate contracts in 1995 with the 14 labor unions that represent about 90 percent of Amtrak’s 25,000 employees. Amtrak estimated that wages increased between $120 and $140 million from 1991 to 1995 as a result of the last round of collective bargaining. 
Similar increases may result from the upcoming negotiations. For example, if wages and benefits increase by 4 percent per year to keep pace with inflation, Amtrak’s costs could increase by about $40 million per year, or about $200 million (in current-year dollars) over a 5-year period. The current contracts provide for most union employees to receive annual wage increases of about 4 percent between 1991 and 1995. Amtrak has reduced labor costs and increased productivity when possible. Its employees receive less compensation on average than freight railroad employees performing comparable jobs. For example, in 1992 Amtrak compensated train and engine crews about $42,900 per employee, while other Class I freight railroads compensated similar employees about $54,800 per employee. Also, since 1983 Amtrak has increased productivity by (1) adopting an 8-hour basis of pay for train and engine crews (before this change, crews earned a full day’s pay on the basis of the number of miles traveled), (2) eliminating the requirement for a fireman on trips of less than 4 hours, and (3) establishing a 5-year graduated entry wage for certain newly hired employees. Amtrak estimates that these changes have saved about $50 million per year. Amtrak’s costs in 1994 for salaries, wages, and benefits accounted for about 52 percent ($1.3 billion) of the railroad’s total operating expenses. Federal laws unique to the rail industry keep Amtrak’s labor costs higher than they would be otherwise. For example, in 1992 Amtrak paid about 26 percent of its payroll in retirement taxes to meet requirements of the Railroad Retirement Act of 1937; this percentage is similar to the rate paid by other railroads. In comparison, other industries paid only about 6 percent of their payroll in retirement costs and retirement-related savings plans. Also, in 1992 Amtrak paid about $0.67 per employee-hour worked for accident and injury claims under the Federal Employers Liability Act. 
This amount compares with about $0.36 per hour worked paid by private industry under state workers compensation systems. Labor costs could also increase if Amtrak eliminates entire routes or reduces service below three round-trips per week. Amtrak and freight railroad employees are covered by the Rail Passenger Service Act, which requires adoption of a labor protection agreement for employees affected when intercity rail passenger service is discontinued. Under the agreement, known as Appendix C-2 and adopted in 1973, employees who are dismissed may be eligible for payment of their average monthly compensation for up to 6 years or, at their option, may receive a separation allowance of up to 12 months’ pay. According to Amtrak, if the entire route system were shut down, the railroad would incur labor protection expenses of between $2.1 billion and $5.2 billion. Freight railroads own about 97 percent of the track over which Amtrak operates and provide such essential services as dispatching trains, making emergency repairs to Amtrak trains, and maintaining stations. These services are provided under operating agreements that Amtrak maintains with 18 railroads. On April 30, 1996, most of Amtrak’s initial 25-year operating agreements with freight railroads will expire, and new agreements must be negotiated. On the basis of our discussions with freight railroad officials, it is likely that Amtrak’s costs under the new agreements will increase substantially. Amtrak currently pays about $90 million per year in both base payments and incentive payments to freight railroads. (Incentive payments are additional amounts that railroads can earn when Amtrak trains operate on time.) (See fig. 3.8.) Freight railroad officials told us that compensation will be a key issue in negotiations with Amtrak. They believe that their companies are not adequately compensated for the services they currently provide to Amtrak. They offered several reasons. 
First, the methodology used to determine reimbursements for the incremental costs of maintaining track—that is, the extra costs of wear and tear on the track—does not adequately measure the costs resulting from Amtrak’s use of the track. Of the $90 million that Amtrak pays annually to freight railroads, about $20 million is for the incremental cost of maintaining the track, and Amtrak estimates that its costs for track maintenance could double if another methodology is used. Second, incentive payments do not consider delays caused by Amtrak’s trains—when an Amtrak locomotive fails, for example—in calculating on-time performance. Rather, these delays are held against the freight railroads and consequently limit the amount of incentives that they earn. Freight railroads may also seek higher compensation for “level-of-utility” requirements that expire with the operating agreements. Amtrak trains generally travel at higher speeds than freight trains. As a result, the track must be maintained to higher safety standards. This increased standard is referred to as a higher level of utility. In addition, according to freight railroad officials, Amtrak does not fully reimburse railroads for clearing freight traffic and interrupting maintenance-of-way work to allow Amtrak’s trains to proceed on time. Freight railroad officials also said that liability arrangements would be a key issue in negotiations. They are concerned about their liability in settling high-cost claims that result from passenger-train accidents occurring on their track. This concern arose after a Conrail train collided with an Amtrak train near Chase, Maryland, in January 1987; 16 people died and more than 350 people filed injury claims. In this case, Conrail paid about $94 million in personal injury and death claims. Freight railroad officials fear that similar situations could develop on their property. 
Under current operating agreements, Amtrak and the freight railroads use a “no-fault” liability arrangement, in which each party is responsible for paying for its own equipment and personnel. In addition, Amtrak pays freight railroads between $0.0367 and $0.0734 per train mile for the liability coverage that they must maintain in connection with providing Amtrak service. In fiscal year 1993, Amtrak paid about $2.8 million for purchased insurance. Freight railroad officials told us that they would like to see this arrangement changed. They suggested that either Amtrak assume full responsibility for accidents in which it is involved, regardless of who is at fault, and/or liability claims be capped and handled through a pooled insurance fund. In either case, Amtrak’s costs for liability protection could increase substantially. Although health care and postretirement benefits are currently a small part of Amtrak’s budget, they could become more significant in the near future. In January 1995, the health care premiums that Amtrak pays for its union employees are projected to increase by about $61 per employee per month. Amtrak estimates that this increase could boost operating expenses by about $25 million per year. In addition, Amtrak provides postretirement health care and life insurance benefits to salaried employees. For fiscal years 1992 and 1993, Amtrak paid postretirement benefits of $1.4 and $1.5 million, respectively. Amtrak estimates that cash outlays for these benefits will grow from $3 million in fiscal year 1994 to around $20 million by the year 2010. Amtrak can also expect higher costs associated with environmental cleanup. Amtrak’s current expenditures for environmental cleanup projects have ranged between $2 million and $8 million per year. However, according to Amtrak, this amount represents only a fraction of its known environmental problems. 
For example, Amtrak may have to spend between $17 and $69 million to bring the 69 fueling sites along its routes into compliance with the Clean Water Act, according to Amtrak officials. Amtrak also estimated that an additional $1 million per year over a 5-year period will be needed to eliminate asbestos at its stations and facilities. Other environmental projects are likely to increase costs further. Amtrak’s policy is to limit spending for environmental cleanup to high-priority projects that pose immediate dangers to the environment. At the end of fiscal year 1994, Amtrak recognized a $33 million liability for future costs for environmental cleanup. Amtrak acknowledged that costs would increase if it fully complied with environmental standards and requirements and implemented prevention and control measures. In the coming years, Amtrak faces increased costs that it can no longer defer or avoid. These costs include keeping its equipment in operating condition, purchasing new equipment, meeting its debt obligations, negotiating and paying for new labor agreements, paying reasonable fees to use other railroads’ track, and paying for increased employee benefits and environmental cleanup at Amtrak sites. Its past efforts to reduce costs, while necessary, have resulted in deterioration of equipment and some facilities. The need for improvements is becoming critical, and these improvements will not come cheaply. The new progressive maintenance/overhaul program, designed to keep more cars in service at a lower unit cost, will still cost over $30 million more to implement in fiscal year 1995 than was spent in 1994 and will eventually cost nearly $60 million more as new equipment enters the program. However, without this commitment, Amtrak risks growing backlogs of equipment in need of overhaul, depleting the number of cars available for service. 
The new cars and locomotives currently being delivered should ease the pressures, in both the short and long term, as older cars that are costly to maintain are replaced and more revenues per car are generated. By the turn of the century, however, this equipment will need to be overhauled each year, increasing the burden on Amtrak’s aging overhaul facilities. We believe that capital investment is important and will lower operating expenses in the long term by increasing productivity and improving efficiency. The Beech Grove overhaul facility could be made much more economical by replacing, repairing, and modernizing the track and structures that now impede efficient work there. Such investments could enable Amtrak to increase efficiencies and provide comfortable, modern, on-time service to its passengers. If the problems that have been driving customers away from Amtrak are reduced, demand for service may well increase. In chapter 4, we discuss the prospect that Amtrak can generate sufficient additional revenues to offset these increased costs. Amtrak’s revenues have never covered all the costs for providing intercity rail passenger service on any route. This gap, after narrowing during the 1980s, has again widened. Even as the economy has recovered, Amtrak’s ridership and revenues have not improved correspondingly. There are several opportunities for Amtrak to earn additional revenues, including completing a computerized system to maximize its revenues per seat, increasing the amount of compensation the states pay Amtrak for losses on services it provides in the states under section 403(b) of the Rail Passenger Service Act, introducing high-speed rail in selected corridors, and expanding commuter rail operations. However, none of these actions is likely to eliminate Amtrak’s operating deficit. Since 1990, Amtrak’s passenger revenues have fallen about 14 percent in real terms (see fig. 
4.1)—from over $1 billion in 1990 to about $880 million in 1994—and riders have been experiencing more problems on their trips as Amtrak continues to operate aged equipment and to defer maintenance. Amtrak has cited a weak economy and intense price competition from the airlines as some of the key reasons for its poor performance. But deteriorating service quality and a spate of recent accidents have also contributed to declining revenues. Although Amtrak’s ridership generally increased during the 1970s, financial losses persisted, and in recent years Amtrak’s annual deficit has risen steadily. Since the 1980s, Amtrak’s ridership has remained relatively steady, fluctuating between 19 million and 22 million passengers annually. (See fig. 4.2.) To deal with the widening gap between revenues and expenses, Amtrak has had to rely heavily on cost reductions, because substantial revenue increases could not be expected with ridership stagnant and fares constrained by falling air fares and gasoline prices (when adjusted for inflation). As Amtrak cut expenses, the quality of service deteriorated, making increases in ridership even less likely. According to a June 1994 survey of Amtrak’s passengers, about 61 percent had experienced at least one problem during their trip. Both the proportion of passengers experiencing problems and the number of problems per passenger increased with the distance traveled. About 74 percent of passengers on western long-distance trains and about 67 percent of those on eastern long-distance trains experienced problems. However, even riders on Amtrak’s high-speed Metroliners encountered problems—roughly 44 percent of passengers reported problems. The most common problems cited in the customer survey concerned on-time performance. Late arrivals and departures accounted for about 21 percent of the problems that passengers cited. 
Dissatisfaction with the cleanliness of facilities was also common, accounting for about 10 percent of the problems mentioned. Between 1989 and 1992, overall on-time performance improved from 75 to 77 percent; performance on certain routes and for certain trains, however, remains poor. In 1993, Amtrak’s systemwide on-time performance declined to 72 percent. On one route—the Empire Builder, which runs between Chicago and Seattle—trains arrived on time only 4 percent of the time, according to Amtrak. Amtrak estimates that late arrivals and departures alone result in about $80 million in lost revenues annually. Altogether, Amtrak believes that it is losing over $300 million annually because of problems affecting customer satisfaction. The series of accidents involving Amtrak trains—one near Mobile, Alabama, in September 1993; others in Boise, Idaho; Kissimmee, Florida; and Gary, Indiana, during November and December 1993; a derailment at Selma, North Carolina, in May 1994; and an accident near Batavia, New York, in August 1994—appear to have resulted in temporary declines in ridership and thus revenue losses. After the September 1993 accident, Amtrak’s ridership nationwide fell by about 4 percent, affecting revenues for about 2 months. After the subsequent accidents, revenues dropped by as much as 15 percent. Revenue recovery took increasingly longer and occurred only after aggressive marketing campaigns. Competitive pressures have limited Amtrak’s ability to increase revenues by raising fares. From 1990 to 1993, Amtrak’s overall yield—revenue per passenger mile—fell by about 10 percent, after adjusting for inflation. The declines were larger, about 12 percent, for routes outside the Northeast Corridor. Yields on traffic in the Northeast Corridor—where Amtrak derives about one-half of its ridership—fell by about 6 percent. These falling yields represent fare reductions made in response to, among other things, lower fares on airlines and buses. 
Meanwhile, passenger yields on domestic airlines fell by about 12 percent from 1989 to 1993 and are now below Amtrak’s. Yields on intercity buses fell about 10 percent from 1989 to 1992 and are well below the yields of Amtrak and the airlines. Most intercity trips are made by private vehicles. Automobiles and other private vehicles account for about 80 percent of total passenger miles of intercity travel, while Amtrak represents only about 0.3 percent. Falling real gasoline prices continue to encourage people to drive. Since 1990, the real price of regular unleaded gasoline has declined by about 10 percent. This price decrease, combined with a general increase in the average fuel efficiency of new cars, continues to make driving a relatively inexpensive option, especially for leisure travel by families. Many people make the decision to drive on the basis of only the out-of-pocket cost of their trip. Many related costs, such as insurance and depreciation, are perceived as largely fixed and do not enter into the traveler’s choice of mode. Thus, lower fares for air and bus travel and lower gasoline prices combined to improve the relative attractiveness of the alternatives to taking the train, while the quality of train service declined. These trends have continued for some time, and Amtrak will find it difficult to overcome them. To increase revenues and improve yields, Amtrak is computerizing its yield management system, which allocates seats among several service classes. This system is similar to those used by the airlines to control seat inventories and maximize revenues. Amtrak does not want to sell a seat to a passenger traveling only a short distance on a long-distance train if by doing so it is unable to sell the seat to a long-distance passenger. Still, because most Amtrak passengers travel only on a portion of a route, Amtrak cannot reserve all the seats for long-distance travelers and must optimally manage its seat inventory. 
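The seat-inventory trade-off described above can be sketched for the simplest case, a train with one intermediate stop. The code below is a toy brute-force illustration, not Amtrak’s system; the stops, fares, and demand figures are all invented.

```python
# Toy origin-destination seat control for one train with stops A-B-C.
# A through A-C sale occupies the seat on both legs; A-B and B-C sales
# each occupy one leg. Selling an A-B and a B-C ticket in the same seat
# can beat one A-C fare, which is why inventory must be managed per leg.
def best_allocation(seats, fares, demand):
    """Brute-force the revenue-maximizing mix of A-B, B-C, and A-C
    tickets on a two-leg train with a fixed seat count per leg."""
    best_revenue, best_mix = 0, {}
    for ac in range(min(seats, demand["AC"]) + 1):
        remaining = seats - ac  # seats still free on each leg
        ab = min(remaining, demand["AB"])
        bc = min(remaining, demand["BC"])
        revenue = ac * fares["AC"] + ab * fares["AB"] + bc * fares["BC"]
        if revenue > best_revenue:
            best_revenue, best_mix = revenue, {"AC": ac, "AB": ab, "BC": bc}
    return best_revenue, best_mix
```

With plentiful short-haul demand, pairing A-B and B-C sales in each seat maximizes revenue; when short-haul demand thins out, the optimum shifts toward protecting seats for through passengers.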
Moreover, because of the number of stops a train makes and the number of passengers boarding and deboarding at each stop, programming a yield management system for passenger trains is much more complicated than computerizing a system for airlines. As of July 1994, 100 long-distance trains were included in the computerized system, and Amtrak plans eventually to have 220 of its reserved-seat trains in the system. Because of budget constraints, the system is being implemented in phases. It was introduced in 1991 and is not expected to be completed until the end of fiscal year 1995. This system should help Amtrak maximize revenues, especially on heavily patronized long-distance trains during the peak travel seasons. However, it is not likely that the added revenues resulting from better yield management will make a substantial impact on Amtrak’s deficit because most trains are far from overbooked, and Amtrak has used a manual yield management system for some time. Amtrak as a whole loses money. However, to judge the financial performance of individual Amtrak routes, it is necessary to determine the revenues and costs associated with each route. Doing so can be difficult. When all costs, including administrative and capital costs, are fully allocated, all Amtrak routes—even the heavily traveled Northeast Corridor—generate sizable deficits. Amtrak also operates intercity passenger rail services that receive financial assistance from the states where they operate. The support from the states does not fully compensate Amtrak for the cost of operating these trains. Under its December 1994 business strategy, Amtrak will seek reimbursement from the states for all the losses incurred by these trains. As noted above, none of Amtrak’s routes are profitable if the costs are fully allocated, and only services in the Northeast Corridor and on a few special trains generate revenues that exceed avoidable costs. 
Some costs would cease if a route were discontinued or, conversely, start if a new service were introduced. These costs include short-term avoidable costs, such as those for train and engine crews and fuel. Other costs would not end immediately if a route were eliminated. These include not only long-term avoidable costs, such as expenses for heavy equipment maintenance and training, but also the short-term costs directly attributable to a route. Losses on individual routes vary depending on what costs Amtrak considers in the calculation. If costs are fully allocated, passenger revenues covered only about 54 percent of the costs in fiscal year 1993. In calculating fully allocated costs, Amtrak excludes certain expenses, including general and administrative overhead, interest, non-intercity passenger operations, and adjustments from previous periods. For short routes—those less than 500 miles—revenues covered 83 percent of the long-term avoidable costs, while revenues from long-distance service covered 75 percent of such costs. The Northeast Corridor made a positive contribution on the basis of long-term avoidable costs. If only short-term avoidable costs are considered, then both short- and long-distance routes outside the Northeast Corridor come close to covering their costs. (See table 4.1.) Part of the reason that services in the Northeast Corridor appear more profitable than services in other parts of the system is that Amtrak treats a significant portion (60 percent) of the costs to maintain track in the Northeast Corridor as fixed costs and therefore excludes them from the measures of avoidable costs. Elsewhere, Amtrak considers track maintenance costs as avoidable costs because they represent a contractual obligation between Amtrak and the freight railroads that own the track. Furthermore, the Northeast Corridor includes both conventional trains and high-speed Metroliner trains. 
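The route measures above reduce to two simple calculations, a cost-recovery ratio and a contribution over avoidable costs. The sketch below mirrors those definitions; the revenue and cost inputs in the example are invented.

```python
# Route-performance measures as used in the discussion above.
def recovery_ratio_pct(revenue, costs):
    """Share of a route's costs covered by its revenues, in percent.
    Computed against fully allocated, long-term avoidable, or
    short-term avoidable costs, depending on the measure."""
    return revenue / costs * 100

def contribution(revenue, avoidable_costs):
    """A route 'makes a positive contribution' when its revenue
    exceeds the costs that would be avoided by dropping the route."""
    return revenue - avoidable_costs
```

For example, revenues of $83 against $100 of long-term avoidable costs give the 83-percent recovery reported for short routes; a route with revenue above its avoidable costs contributes toward fixed costs even though it may still lose money on a fully allocated basis.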
While the Metroliners recover 90 percent of their fully allocated costs, the conventional trains do not perform as well. The Metroliners skew the overall cost recovery ratio of the Northeast Corridor so that it performs much better than other routes. (See table 4.2.) The Rail Passenger Service Act allows Amtrak to initiate and/or operate intercity rail service, known as 403(b) service, that is financially supported by the states. In fiscal year 1994, Amtrak had contracts with eight states to operate such service over 13 routes. This service accounted for about 14 percent of Amtrak’s ridership. Under the provisions of the act, the states pay 45 percent of operating losses for such service in the first year of operation and 65 percent of losses in subsequent years. For service that began before 1989, states reimburse Amtrak for short-term avoidable losses, while for service that began after 1989, states reimburse Amtrak for long-term avoidable losses. In fiscal year 1994, three of the eight states—Missouri, New York, and Michigan—reimbursed Amtrak for short-term avoidable losses; three states—Alabama, North Carolina, and Wisconsin—reimbursed Amtrak for long-term avoidable losses; and two states—California and Illinois—reimbursed on both long- and short-term bases, depending on the route. States also pay 50 percent of the capital costs (a calculation based on depreciation and interest) associated with the equipment used for this service. Any losses (capital or operating) not paid by the states are absorbed by Amtrak. For the most part, Amtrak uses its own equipment to provide this service. In fiscal year 1993 (the last year for which financial data on 403(b) service are available), Amtrak absorbed about $82 million in losses on section 403(b) services. This amount included about $78 million in operating costs and $4 million in capital costs. 
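The section 403(b) cost-sharing rules described above can be sketched directly; the loss and capital figures in the example are invented for illustration.

```python
# Sketch of the section 403(b) cost-sharing described above: states pay
# 45% of operating losses in the first year of service and 65% in
# subsequent years, plus 50% of the associated capital costs.
def state_share(operating_loss, capital_cost, first_year=False):
    """State reimbursement to Amtrak for a 403(b) route."""
    op_rate = 0.45 if first_year else 0.65
    return operating_loss * op_rate + capital_cost * 0.50

def amtrak_absorbed(operating_loss, capital_cost, first_year=False):
    """Losses left with Amtrak after the state's reimbursement."""
    return (operating_loss + capital_cost
            - state_share(operating_loss, capital_cost, first_year))
```

On a mature route with a $100 operating loss and $10 of capital costs, the state would reimburse $70 and Amtrak would absorb $40, which illustrates why 403(b) service still leaves substantial losses with Amtrak.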
Amtrak absorbed such costs as heavy maintenance and overhaul of cars and locomotives, repairs following accidents, and an allocated portion of fixed costs (e.g., expenses to operate yards and stations and various overhead costs). The states paid about $26 million. Amtrak absorbs other costs from the service as well. For example, Amtrak’s use of equipment for section 403(b) service precludes its use on other intercity routes where equipment shortages could occur. Amtrak is not reimbursed for these lost opportunity costs. In 1992, Amtrak adopted a policy whereby no new 403(b) service will be initiated unless a state purchases and provides the cars and locomotives. Few states have been willing to make this type of investment. As part of the new business strategy adopted in December 1994, Amtrak will seek to gradually eliminate the “deeply discounted” service provided to the states under section 403(b). To accomplish this goal, it plans to renegotiate the reimbursement terms of all 403(b) service over the next several years so that the participating states subsidize all costs not covered by revenues. Doing so may require changes to the Rail Passenger Service Act, according to Amtrak, but the corporation has not yet decided what legislative changes it may seek. Many supporters of intercity rail passenger service believe that more people would ride the trains and that routes could be more profitable if high-speed services were introduced. Amtrak continues to work on the Northeast Corridor Improvement Program (NECIP) and plans to extend high-speed service from New York to Boston. Amtrak still requires about $1.5 billion to complete this modernization. High-speed trains have been proposed for other corridors around the nation, but attempts to build them have encountered serious obstacles. Significant capital costs will be incurred for improvements to the infrastructure and for new equipment regardless of where these systems are built. 
The High-Speed Ground Transportation Development Act of 1994 authorizes $169 million to assist in planning these systems, but no funds have been appropriated. Moreover, other questions, such as who will operate these systems and who will pay for them, remain unanswered. Extending electrification from New Haven, Connecticut, to Boston under NECIP will allow train speeds of up to 150 miles per hour, will cut travel times between New York and Boston from 4 hours to under 3 hours, and should generate increases in both ridership and revenues. Amtrak estimates that annual revenues from this project will exceed long-term avoidable costs by about $36 million (in 1993 dollars) in the year 2010. Although this amount will not cover capital costs, any revenues received in excess of long-term avoidable costs will help reduce Amtrak’s need for federal operating subsidies. These increased revenues are predicated on Amtrak’s capturing about 45 percent of the rail/air travel market between New York and Boston, a share equal to that currently held by Amtrak between Washington, D.C., and New York, where trains achieve speeds of 125 miles per hour. As of September 30, 1994, Amtrak estimated that the major improvements in infrastructure required to provide 3-hour service between New York and Boston would be completed in 1999. Since 1976, about $3.3 billion (in current-year dollars) has been appropriated for NECIP. Amtrak estimates that about $1.5 billion more is needed to complete the project. FRA estimates that an additional $582 million (in constant 1993 dollars) will be needed to address problems with track capacity. Counting prior expenditures on the Washington-New York segment, the corridor will cost about $5 billion. The track between New York and Boston is currently shared by Amtrak and freight and commuter railroads—all of which plan to increase their use of the track in future years. The existing track cannot accommodate all of these plans. 
Therefore, either some train operations will have to be shifted to off-peak hours or additional capacity will have to be constructed to ensure that all parties’ needs are met. Since Amtrak does not own about 95 miles of the rights-of-way between New York and Boston, it may have difficulty negotiating shifts in train schedules and/or costs to gain additional capacity. Consequently, Amtrak may either have to absorb more of the additional costs than it expects or delay its planned increases in train schedules until capacity problems can be resolved. Either action could significantly affect Amtrak’s ability to achieve timely revenue gains as a result of NECIP. Providing high-speed rail service outside the Northeast Corridor will be influenced by the recently enacted High-Speed Ground Transportation Development Act of 1994. This legislation authorizes $169 million over a 3-year period through fiscal year 1997 for planning assistance for high-speed rail corridors. To date, no funds have been appropriated and no construction funds have been authorized. A number of important questions have yet to be answered, including who will be designated to operate future high-speed rail service. There is no guarantee that Amtrak will be selected to operate these services. Nor is it clear how much federal assistance might be provided to build high-speed rail systems. The responsibility for funding these projects is likely to fall on the states, localities, or private investors. The impact of high-speed rail systems on Amtrak’s need for federal subsidies is also unclear. If Amtrak’s role is limited to operating such systems under contract to others and federal operating subsidies are limited, Amtrak and the federal government could largely be shielded from losing money on these operations. 
If high-speed rail service is operated as part of Amtrak’s national intercity route system, then federal operating subsidies will rise or fall depending on whether revenues exceed long-term avoidable costs. Any federal capital assistance would presumably be provided through separate appropriations. Amtrak estimates that nine high-speed train sets, at a total cost of about $170 million, would be needed to provide nine daily round-trips on a 300-mile corridor. Amtrak also earns revenues by operating commuter rail services under contract to state and local governments and regional transit agencies, developing its real estate holdings, and providing mail and express service. Amtrak was unable to provide us with an accounting of the costs associated with these activities until December 1994; therefore, we have not assessed their profitability. In fiscal year 1993, these activities accounted for about $435 million—or about 30 percent—of Amtrak’s $1.4 billion in revenues. In general, ancillary activities have been a growth area for Amtrak and, overall, appear to have produced a modest profit. In particular, commuter rail services, which generated revenues of about $270 million in fiscal year 1994, were Amtrak’s second largest source of operating revenue. Amtrak operates eight commuter rail systems in five states under competitively awarded cost-plus contracts. Amtrak primarily provides the operating services for these systems and generally does not use its own equipment. Since 1983, when Amtrak entered into its first contract with the Maryland Rail Commuter System, its commuter revenues and ridership have grown as Amtrak has entered into new contracts. (See figs. 4.3 and 4.4.) Amtrak began operating two major new systems in 1993—the Virginia Railway Express in northern Virginia and Metrolink in Los Angeles—boosting its commuter ridership from 20 million in 1992 to 29 million in 1993. Amtrak now carries more passengers on its commuter services than on its intercity operations. 
The financial contribution from Amtrak’s ancillary activities is largely unknown. During our review, Amtrak had difficulties in identifying the costs of these activities and, for the most part, was unable to provide us with financial statements for them in a timely manner. This was particularly true for commuter rail activities. Data on revenues and ridership were available but data on expenses were not. We also had difficulty identifying general and administrative expenses for these activities and the way these costs are allocated to specific lines of business or specific contracts. According to Amtrak, these costs are not accounted for separately but are instead allocated according to standard corporatewide formulas. We did not audit these formulas. As a result, it is not clear what the financial contribution of Amtrak’s ancillary activities might be, nor whether costs are being allocated correctly. Since 1990, Amtrak’s ridership has been relatively stagnant, revenues have declined, and the quality of service has deteriorated. Amtrak’s problems can be attributed partly to uncertain economic conditions and to competition from the airlines and from buses. But they are also the result of continued reliance on old, unattractive equipment that is prone to breakdowns and delays. Lacking the resources to purchase the new equipment that would increase the quality of service and constrained to match operating costs with federal subsidies, Amtrak has been forced to cut costs, delay maintenance of equipment, and generally let the quality and attractiveness of train travel deteriorate further. The prospects for recovery of ridership and revenues are poor. Increasing the amount of compensation the states pay for services provided under section 403(b), contracting to provide more commuter rail operation, and introducing high-speed trains outside the Northeast Corridor could all enhance revenues, but none of these initiatives can be expected to close the current deficit. 
In chapter 5, we make recommendations to Amtrak to determine the profitability of its ancillary activities and to provide further information to the Congress on the corporation’s potential involvement in high-speed service outside the Northeast Corridor. Chapter 5 also discusses alternatives for matching the country’s needs for viable intercity rail passenger service with the realities of limited federal resources. Amtrak and the federal government need to make important decisions about the future of intercity passenger rail service and the government’s commitment to subsidize such operations. Amtrak’s condition will only get worse if it continues to operate the current system—even with the reductions in service planned for 1995—at the current level of state and federal funding. High-quality nationwide passenger service of the present scope that might attract and retain passengers would require substantially higher levels of support, particularly for capital investment. The key question is whether the federal government or the individual state governments are willing to make the required investments, given the competing demands on their resources. A substantially reduced passenger rail system, while more feasible from a fiscal viewpoint, requires that difficult decisions be made on the type, quality, and location of the remaining services. We do not believe that the current situation—the present nationwide passenger rail system at the current subsidy level—represents a viable option. Amtrak’s financial condition will continue to deteriorate, and the railroad’s ability to provide nationwide service at the present level will be seriously threatened. Maintaining the current nationwide system will require significantly increased resources if Amtrak is to offer quality service. 
Without additional funds from federal, state, and local governments, Amtrak will have to cut expenses significantly by eliminating some routes and reducing the frequency of service on others. In either case, ridership could fall as the level of service declines. In September 1994, Amtrak officials announced their decision to reduce Amtrak’s management force from 2,700 to 2,100 people by the end of 1994 through voluntary or forced separation. The staffing cut is part of a plan to shift control over pricing, marketing, and service to local offices and train crews by reducing headquarters staff in Washington, D.C., and transferring staff to three regional operating centers. Amtrak hopes to improve efficiency by establishing operating centers in Philadelphia for the Northeast Corridor, Los Angeles for West Coast service, and Chicago for the rest of the nation. These changes and reductions in staff, however, involve too few dollars to substantially affect Amtrak’s financial condition. After severance costs, Amtrak expects these cuts to save approximately $30 million in fiscal year 1995. Realizing that additional actions needed to be taken, in December 1994 Amtrak announced plans to cut expenses by reducing service, as we discussed in chapter 2. If implemented, these actions could bring Amtrak’s operating costs in line with the revenues and grants for 1995. Nonetheless, these planned actions will not solve the corporation’s longer-term problems. Revenues will continue to fall short of expenses on most routes, and Amtrak estimates that operating expenses will exceed operating revenues and the federal operating subsidy by $1.3 billion between 1996 and the year 2000. Furthermore, Amtrak will still need over $4 billion to replace worn-out equipment and infrastructure. Achieving about 26 percent of the $364 million in annual net savings that Amtrak anticipates from these actions might require collective bargaining and/or legislative change, according to Amtrak. 
Also, in eliminating routes, Amtrak will incur labor protection expenses to compensate workers who lose their jobs or are placed in lower-pay positions. Amtrak estimates that labor protection costs due to the proposed changes could be between $80 million and $158 million. Amtrak has identified additional legislative changes that it believes could improve the corporation's long-term viability. These changes include
• including Amtrak in a federal transportation trust fund;
• limiting Amtrak's provision of state-assisted rail passenger service to situations in which the state is willing to provide the actual cost of such service;
• amending section 405 of the Rail Passenger Service Act to permit negotiations on labor protection issues without the statutory rigidity that currently limits those negotiations;
• further amending section 405 to remove constraints on contracting for work;
• eliminating from Amtrak's budget the mandatory payments for railroad retirement and unemployment benefits incurred by non-Amtrak employees;
• eliminating Amtrak's obligation to pay federal fuel taxes;
• requiring that Amtrak's federal operating grant be provided on the first day of the fiscal year to reduce overall cash flow costs and requirements;
• limiting punitive damages assessed against Amtrak through tort reform;
• providing Amtrak with the authority to issue tax-exempt debt;
• providing tax incentives to freight railroads for the revenues they earn from on-time performance payments from Amtrak; and
• clarifying Amtrak's exemption from local regulations on permits for work to improve the Northeast Corridor.
At the time of our review, Amtrak officials had not determined the extent to which these proposed changes to legislation would reduce the corporation's expenses, but they plan to do so.
In addition, Amtrak has not identified the costs and benefits associated with these changes, including the impact of tax reductions or credits on the national debt and the impacts on other affected parties. Nor have we assessed the impact of these actions on Amtrak or other affected parties. Increased funding, especially capital investment, is essential if Amtrak is to continue its nationwide service at the present level. Over the longer term, increased funding offers Amtrak the potential to increase revenues and improve service quality on its existing routes, increase the efficiency and productivity of its operations, and, possibly, introduce high-speed service on some routes. To substantially improve its deteriorating financial condition, Amtrak must increase its passenger revenues, which have been declining in real terms since 1990, as noted in chapter 4. By investing in new passenger cars and locomotives, Amtrak could increase its seating capacity on the routes where the demand is greatest. As we discussed in chapter 3, Amtrak recently purchased 245 passenger cars and 72 locomotives for nearly $1 billion. If the Congress more than doubled Amtrak’s current capital subsidy to $500 million annually from fiscal years 1995 to 2000—which may be difficult within the current federal budget—Amtrak could improve its maintenance facilities, stations, and information systems, among other things. However, even with the gains in efficiency and ridership expected from these improvements, Amtrak would still need more than $400 million in annual operating subsidy from state or federal governments through the year 2000. (See table 5.1.) The Congress could redefine Amtrak’s role and mission by restructuring and realigning Amtrak’s route system to a smaller basic network that would continue to be eligible for federal funding. This basic network could be augmented by regional routes fully funded by states. 
This approach has the potential to improve Amtrak's financial condition and service quality. Additionally, it may lead to reductions in federal funding over the longer term, although capital subsidies will continue to be needed. An analysis of Amtrak's current market—the routes with the largest ridership and revenues—could be used as a starting point in determining a basic route network that would be eligible for federal funding. We believe that the specific routes included in this restructured system could be determined by Amtrak or by a temporary commission such as the one formed to evaluate military base closures. A basic route network could be defined by determining where Amtrak's market is currently strongest. As a starting point, we identified (1) the routes with the largest ridership and (2) the routes that generated the greatest revenues. We found that 12 of Amtrak's 44 routes accounted for 70 percent of Amtrak's 22 million riders in fiscal year 1993. The top five routes—all in the Northeast and southern California—each had over 1 million riders and accounted for 56 percent of all riders. (See fig. 5.1.) The routes with the most riders roughly coincided with the routes generating most of the passenger revenues. Eleven routes (each with revenues greater than $30 million in fiscal year 1993) earned 68 percent of Amtrak's passenger revenues in fiscal year 1993 and accounted for 61 percent of the fully allocated costs. The top five routes generated 47 percent of the total train revenues and accounted for 38 percent of the costs. The other six routes extend service on the East Coast, provide a third route from Chicago to the West Coast, and serve the West Coast. (See fig. 5.2.) Coast-to-coast service could be maintained if the Chicago-New York City/Boston route—the next largest in revenues—is added to the network.
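The ranking exercise behind such a basic network can be sketched in a few lines. This is an illustrative sketch only: the `basic_network` function, the route names, and the ridership figures below are invented, not Amtrak's actual fiscal year 1993 data; the point is simply the mechanics of taking routes in descending ridership order until a target share of total riders is covered.

```python
# Hypothetical sketch: rank routes by ridership and select the smallest
# leading set that covers a target share of total riders, as the report
# does when it notes that 12 of 44 routes carried 70 percent of riders.

def basic_network(ridership, target_share):
    """Return the smallest leading set of routes, taken in descending
    ridership order, that covers target_share of total riders, along
    with the share actually covered."""
    total = sum(ridership.values())
    selected, covered = [], 0.0
    for route, riders in sorted(ridership.items(), key=lambda kv: -kv[1]):
        selected.append(route)
        covered += riders
        if covered / total >= target_share:
            break
    return selected, covered / total

# Invented ridership figures, in millions of annual riders.
routes = {"Route A": 5.0, "Route B": 4.5, "Route C": 1.8,
          "Route D": 1.1, "Route E": 1.0, "Route F": 0.7, "Route G": 0.5}
core, share = basic_network(routes, 0.70)
print(core, round(share, 2))  # the few largest routes cover the target
```

The same descending-order selection, applied to revenue figures instead of riders, would reproduce the report's revenue-based ranking of routes.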
Generally, however, the data on ridership and revenues indicate that there is not a strong demand for passenger rail service between the East Coast and Chicago, largely because rail service cannot compete with air travel on travel time. The daily train leaving Washington, D.C., at 4:40 p.m. arrives in Chicago at 9:10 a.m. the next day—a trip of over 16 hours. By comparison, a flight between Washington, D.C., and Chicago takes about 2 hours. The criteria for identifying the routes that could constitute a new basic system need to balance the requirement that the routes be well patronized against the costs associated with providing the service. Our analysis provides a preliminary look at a basic route network. Some of the routes with the most riders and revenues, however, also have the largest losses. Further analysis of the revenues and costs associated with routes is needed to determine (1) what revenues could potentially be lost as a result of losing passengers who are now connecting from routes that would be eliminated and (2) whether entire routes or only segments generate high revenues and ridership. In addition, the direct costs for each route, which are currently allocated using a formula, need to be measured more precisely and directly allocated. Amtrak compiles boarding and destination information on its passengers from tickets collected on the trains. These data could be analyzed to determine the potential effect on revenues and ridership of the loss of passengers now connecting from other routes. Such an analysis would reveal, for example, the number of passengers on the Chicago-San Francisco route who begin their trips at stations that could potentially be eliminated. Additional information would need to be gathered on transportation alternatives for interconnecting passengers and the number who may still ride Amtrak after taking a car or bus to the nearest alternative station.
The data from tickets could also be analyzed to determine revenues and ridership on segments of routes. For example, most of the revenues and ridership on the Chicago-Seattle route might be commuter traffic between Chicago and Milwaukee and/or intrastate travel between Spokane and Seattle. Until mid-December 1994, Amtrak did not know precisely the direct costs associated with its individual routes. Even costs that could be directly measured, such as fuel and labor, are computed and allocated to routes on the basis of formulas developed by Amtrak. Such formulas are a reasonable way to account for expenses on a systemwide basis. However, this method may provide insufficient information to allow management to make judgments on the relative performance of individual routes. For example, this method does not consider operating conditions that may be specific to routes, such as the costs of turning a train around at the terminal stations. These costs are allocated to all routes, although turning trains around is not necessary on some routes. In addition, consideration must be given to the sensitivity of financial performance on a given route to the age and type of equipment assigned to it. Finally, Amtrak currently allocates costs to routes differently depending on whether the route is considered “basic” or “incremental” service. For example, Amtrak designates the route between Washington, D.C., and New York City as basic service and allocates a portion of all operating costs to the route. In contrast, the Philadelphia-New York City route is designated as incremental service, and only the costs of adding cars to trains on the basic Washington-New York City route are allocated to it. In addition, Amtrak does not allocate the cost of locomotives to incremental routes. Amtrak officials have recognized these problems with their cost information and acknowledge that this method of allocating costs may be making some routes appear much more unprofitable than they really are. 
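The distortion described above can be made concrete with a toy example. Everything here is hypothetical: the `formula_allocation` function, the routes, the train-mile figures, and the turnaround costs are invented for illustration. The sketch only contrasts allocating a shared cost by a systemwide formula (here, proportional to train-miles) with charging a directly measurable cost to the routes that actually incur it.

```python
# Toy contrast of the two allocation methods: a systemwide formula can
# charge a cost, such as terminal turnaround, to routes that never incur
# it, making those routes look less profitable than they really are.

def formula_allocation(routes, shared_cost):
    """Allocate shared_cost across routes in proportion to train-miles."""
    total_miles = sum(r["train_miles"] for r in routes.values())
    return {name: shared_cost * r["train_miles"] / total_miles
            for name, r in routes.items()}

# Invented figures: only route "X" actually turns trains at a terminal.
routes = {
    "X": {"train_miles": 100, "turnaround_cost": 40},
    "Y": {"train_miles": 300, "turnaround_cost": 0},
}
shared = sum(r["turnaround_cost"] for r in routes.values())

by_formula = formula_allocation(routes, shared)  # {'X': 10.0, 'Y': 30.0}
direct = {name: r["turnaround_cost"] for name, r in routes.items()}

# Under the formula, route Y absorbs 30 of the 40 in turnaround cost even
# though it never turns a train; direct measurement charges it nothing.
print(by_formula, direct)
```

Measuring more costs directly, as Amtrak's December 1994 plan intended, narrows the set of costs left to be spread by formula and so reduces this kind of distortion.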
During the fall of 1994, Amtrak undertook an in-depth analysis of the costs and revenues of individual routes in developing its December 1994 strategic business plan. As part of its new business plan, Amtrak intends to measure more direct costs and allocate them to routes rather than allocating all costs according to formulas. Specific, alternative route structures and expected levels of federal support could be developed and recommended to the Congress by Amtrak or a temporary commission established by the Congress for that purpose. The commission or Amtrak could identify several alternative networks, depending on the level of funding available. A commission to restructure passenger rail could operate in a manner similar to the Department of Defense’s (DOD) Base Realignment and Closure Commission. After the Congress identifies specific future goals and objectives, the commission could obtain route-specific data from Amtrak on ridership, revenues, and costs and define basic route networks commensurate with the different funding levels. After identifying a potential basic network, specific route structures could be assessed by considering (1) all the fixed costs that could be reduced or increased if a large number of routes were eliminated and (2) the increased revenues that could be gained by redeploying equipment to routes where the demand would support increased service. Additional factors that the commission or Amtrak could assess include the availability of alternative intercity transportation and the impact of the proposed network on energy consumption, pollution, and traffic congestion. However, if these nonfinancial factors are used to determine the route structure, it will be difficult to make improvements in service quality and Amtrak’s financial condition without higher federal subsidies. 
After the Congress agrees on a basic rail service network that would be eligible for federal funding, individual states could be given the option of adding specific service to Amtrak’s basic network if they are willing to fully subsidize the added service. Some states have already entered into such agreements with Amtrak. For example, Washington State contracted with Amtrak to operate upgraded service between Seattle and Portland, Oregon. All seats on the train were reserved, the train included a dining car that served local specialties such as salmon, and the train featured on-board telephone service and video entertainment. By comparison, Amtrak’s regular service between Seattle and Portland offered only snack and beverage service. The Washington State Department of Transportation collected all revenues for this service and paid all costs. Nonetheless, a state’s flexibility in using federal funds for intercity passenger rail service is constrained under current law. Passenger rail service competes for limited transportation funds and, unlike aviation, highways, and mass transit, it does not have a federal trust fund. State and local governments have some flexibility to allocate their federal transportation funds among different modes, but not to intercity passenger rail operations. Even if it completes a major restructuring of its service, Amtrak will continue to need capital funds for equipment and infrastructure. In Europe and Japan, where competitive conditions are more conducive to rail travel, intercity passenger service requires substantial public support, including significant investments in the infrastructure. For example, France plans to invest nearly $25 billion in its railroad during the 1990s. This amount includes $6.8 billion for rolling stock, $5.3 billion for investments in infrastructure on high-speed lines, and $1.1 billion for other investments in infrastructure. 
Germany plans to invest over $70 billion in its main railway lines in the 1990s, including $28.8 billion for improvements in infrastructure, $18.5 billion in other upgrades, and $8.2 billion in equipment. Amtrak will continue to need federal grants to meet its capital needs. A 1991 study conducted for FRA evaluated four scenarios for Amtrak’s future—a system in the Northeast Corridor only, the Northeast Corridor plus a few routes connected to the corridor where losses are relatively small, Amtrak’s current route network, and the current network minus several routes where losses are the largest. All scenarios assumed that Amtrak would require a capital subsidy. This study further assumed that Amtrak would improve its efficiency in certain areas, such as its maintenance facilities. Even though some assumptions were overly optimistic in predicting improvements in Amtrak’s efficiency, this study estimated that by the year 2000, Amtrak would still require a federal operating subsidy under all scenarios. The smallest subsidy would be required for the current network minus the routes with the largest losses. The requirement to pay up to 6 years’ wages to employees who lose their position if a route is eliminated was a major factor in determining the cost under the alternative scenarios. If the Congress chooses to eliminate or greatly reduce federal subsidies for Amtrak, it could privatize the operation and make it subject to market forces. Amtrak’s route network would be reconfigured so that only those parts of the system that had the potential to cover their costs would continue to operate. While some rail passenger services conceivably could be taken over by the private sector, significant federal investment would be needed before any part of the current system could be privatized, and nationwide service as it exists today could not be offered. Privatizing Amtrak might be complicated by a number of factors. First, it is not clear what would be privatized. 
Amtrak owns very little track outside the Northeast Corridor. Also, most of the stations Amtrak serves are owned either by other railroads or by local governments. Even many of Amtrak’s passenger cars and locomotives are leased. Second, the term “privatize” is sometimes used to mean “defederalize”; that is, to shift the responsibility for subsidizing intercity train operations to state and local governments. Passenger train services might be inherently unprofitable. Therefore, private, for-profit firms are unlikely to be interested in such business without some government assistance. The private railroads in this country were unable to operate passenger service profitably, and throughout the world, intercity passenger trains are heavily subsidized. If the public benefits of intercity passenger train services are largely local or regional, these services might more appropriately be offered or supported by state or local governments. Third, even in the Northeast Corridor, where Amtrak controls significant assets, different degrees of privatization are possible. For example, programs to privatize freight rail service in the Netherlands envision privatizing the operations but not the infrastructure. The government will continue to provide capital to maintain and develop the track and other facilities. In addition, privatizing those parts of Amtrak that could potentially be profitable might still require substantial initial investment to create a saleable asset, similar to what occurred with Conrail. In that instance, many unprofitable routes were abandoned and, after substantial federal investment, a profitable core business was established. However, the analogy with Conrail may be limited because, on the basis of experience in the United States and throughout the world, we have found no evidence that intercity passenger rail operations can cover all costs and generate a return on investment. 
Therefore, even if privatized, Amtrak will continue to need federal or state funds to meet its capital needs. Even the Northeast Corridor would be difficult to sell to a private firm until the upgrade between New York and Boston is complete and significant repairs have been made to the segment between New York and Washington, D.C. Finally, privatizing Amtrak is not likely to result in successfully preserving a nationwide passenger rail system. Under this option, passenger rail service could be reduced to a few regional corridors because only a few well-traveled routes could ever generate sufficient revenues to cover the substantial operating costs. Amtrak and the federal government face a difficult set of choices. We believe that continuing the present course—the same funding level and basic route system, even with the proposed service cuts—is neither feasible nor realistic because Amtrak will continue to deteriorate. The Congress is confronted with numerous budget decisions that make substantial increases in Amtrak’s subsidy unlikely. If increases are not forthcoming from federal and state sources of funds, Amtrak’s viability may depend on restructuring operations to reduce the route network. Even so, Amtrak will continue to require government subsidies, especially to meet its capital needs. Amtrak and the federal, state, and local governments need to make important decisions about the quality and extent of intercity passenger rail service to be provided and the long-term funding of such an operation. First, the Congress needs to decide on the nation’s expectations for intercity passenger rail service and the scope of Amtrak’s mission to provide that service. These decisions require defining a national route network, along with determining the extent to which the federal government would fund operating losses and capital investments and the way any remaining deficits will be covered. 
We believe that Amtrak or a temporary commission could provide the Congress with specific options that would define a national route network consistent with the available funding. Finally, once the Congress decides on a national route network, Amtrak could develop and provide to the Congress a long-term financing and operating plan (5 to 10 years). This plan should provide realistic expectations for repairing and maintaining Amtrak’s fleet, replacing aging infrastructure, and meeting increases in expenses that can be reasonably anticipated. In light of Amtrak’s financial and operating problems, we recommend that the Congress consider whether Amtrak’s original mission of providing nationwide intercity passenger rail service at the present level is still appropriate. If the Congress decides to reassess the scope of Amtrak’s mission, it could direct Amtrak or a temporary commission, similar to the one established to close military bases, to make recommendations and offer options defining and realigning Amtrak’s basic route network so that efficient and quality service could be provided within the funding available from all sources. The Congress could then make the final decision on Amtrak’s future route network. 
To ensure that Amtrak accurately communicates its operating and financial conditions and its need for federal funds to the Congress, we recommend that the President of Amtrak
• provide detailed information in federal grant requests on how revenues from intercity passenger service have been estimated;
• incorporate into federal grant requests dollar estimates of the costs of future accident and weather-related contingencies;
• develop and present to the Congress a plan outlining the costs and benefits of participating in high-speed rail service outside the Northeast Corridor, including the impact on Amtrak's annual grant request;
• undertake a comprehensive review and/or audit of the costs associated with its commuter rail and other ancillary activities, identifying the costs associated with these activities, the way these costs are allocated to individual commuter rail contracts, and the overall profit or loss of each activity, as well as assessing the appropriateness of any formulas used to allocate costs; and
• provide the Congress with proposed legislative changes that could improve Amtrak's long-term viability, along with estimates of the expected effect of each proposal on Amtrak's finances and a discussion of the other parties that will likely be affected and to what extent.
Amtrak said that our draft report accurately presented the railroad's financial and operating status and correctly portrayed the corporation's capital investment problems. However, Amtrak had four principal points. First, Amtrak stated that the draft report understated the impact of actions adopted by its Board of Directors in December 1994 and said the board had, in effect, implemented our recommendation to redefine and reduce the route system consistent with the funding available. We revised our report to specifically highlight Amtrak's proposed operating changes, including plans to reduce staffing, reduce the frequency of service on some routes, and eliminate service on others.
In addition, Amtrak acknowledges that even with the planned service reductions, its operating deficits will exceed operating subsidies by $1.3 billion through the end of the century. While we believe that the new business plan is an important first step, it does not implement our recommendation, nor does it resolve Amtrak’s financial, capital, and service quality problems. Unless sufficient funds are available to support Amtrak’s current operations and provide the necessary capital, we recommend that the Congress reassess Amtrak’s scope of operation and mission and direct either Amtrak or a temporary commission to provide the Congress with options for a route network that is consistent with the level of funding available. Not only do the service reductions announced in the board’s plan still leave a large gap between the deficit and the subsidies from the federal and state governments after 1996, but about a quarter of the planned savings will require negotiations with organized labor and/or legislative changes. Also, Amtrak’s plan does not resolve how the corporation will meet its capital needs, now totaling about $2.5 billion in the Northeast Corridor alone. Although the board set a goal that could ultimately eliminate the federal operating grant by 2002, this was made contingent on Amtrak’s receiving (1) “sufficient capital funding to achieve a good-state-of-repair,” (2) the current level of operating grants until 2002, and (3) increased funding from the states to cover operating deficits for service they receive. These are significant assumptions about funding that will require congressional endorsement as part of the process of defining options for Amtrak’s future route network. 
Second, Amtrak expressed concern about the timing of our recommendation for a commission because the board's recent analysis showed that reducing service frequency was more economical than closing routes and because Amtrak is a commercially driven corporation, which should not act like the Department of Defense in making decisions about closures. Amtrak also stated that little more can be done beyond the board's recommendation, short of closing down the national system. Amtrak's point on timing is well taken if the railroad's requirements for capital and operating funds for the national system, as presently constituted, are met. If they are not, cutbacks in service frequency and routes beyond those announced by the board will be required. In addition, much of the expected savings comes from reductions in frequency on long-distance routes. Amtrak believes that revenue losses will be minimal because most long-distance train riders are discretionary travelers who are not time-sensitive and, therefore, will adjust their travel plans to accommodate Amtrak's reduced service level. Unfortunately, Amtrak has little experience with how reducing the frequency of long-distance service affects ridership. Amtrak's estimates of significant net savings assume that ridership and revenue losses will be minimal. This is an important assumption. With respect to Amtrak's view that the economics of intercity rail passenger service preclude cutbacks beyond those already planned unless the goal of providing a national system is abandoned, we believe that Amtrak must first define what it means by a national system. If it means a cross-country network of interconnected routes, then it may not be possible to support such a system with the available funds. However, there may be numerous routes, either densely traveled corridors or segments of existing long-distance routes, on which service could be continued with these funds.
In addition, with appropriate congressional approval, Amtrak might enter into partnerships with individual states or groups of states to “reconnect” the federally supported parts of the system. As for Amtrak’s reservations about having a commission offer options to the Congress, we recognize that Amtrak could provide the Congress with options for realigning and closing routes, and we explicitly provide for that option in our recommendation. We recognize that Amtrak has superior knowledge of the economics of its operation, and Amtrak’s recent analytical efforts could provide the starting point for considerations about restructuring the system. However, we also realize the difficulties inherent in deciding which states and locations should receive service. Because the commission would be independent, it could help eliminate some of the problems normally associated with reducing service in different areas of the nation. In addition, if the Congress chooses, such a commission could take into account factors beyond revenues and costs, such as highway and airport congestion relief, in deciding how to realign the route network. Third, Amtrak believed that the role of the states in intercity rail passenger service and the need for access to federal transportation funds deserved emphasis in our report. We revised the report to reflect the fact that unlike aviation, highways, and mass transit, intercity passenger rail has no trust fund and that the ability of the states to use the trust funds that exist for other modes for rail is extremely limited. In 1991, the Intermodal Surface Transportation Efficiency Act provided the states with some flexibility to use highway dollars for mass transit, but it did not authorize the direct use of such funds for intercity passenger rail service. 
Although the states’ access to federal transportation trust funds is a key element of Amtrak’s plan to have the states cover significantly more of the railroad’s costs, it is a policy judgment for the Congress to make whether and to what extent such flexibility should be extended to intercity passenger rail. Amtrak also wanted assurance that we consider the federal funds for Amtrak’s capital needs as an investment and not a subsidy. Our report clearly notes that capital expenditures are an investment. Finally, Amtrak believes that if it is freed from certain legislative restraints, it could operate more as a competitive commercial entity. Amtrak envisions changes to labor laws to give it greater latitude to negotiate on such matters as severance pay, contracting, and work processes. Amtrak also wants (1) an exemption from requirements to pay federal fuel taxes, (2) authority to issue tax-exempt debt, (3) inclusion in a federal transportation trust fund, (4) relief from requirements under the Railroad Retirement Act, and (5) limits on its liability for punitive damages after accidents. We added a recommendation to our report that Amtrak provide its proposals to the Congress, along with the estimated effect of each proposal on Amtrak’s finances and on other affected parties. This information will provide a vehicle for congressional deliberation on the merits of each proposal and allow for consideration of opposing views. Amtrak also provided comments that clarified certain technical information or statements made in a draft of this report. We found Amtrak’s suggestions useful and incorporated these changes in the report where appropriate. Amtrak’s written comments and legislative proposals are presented in full in appendix IV.
|
Pursuant to a legislative requirement, GAO reviewed Amtrak's financial and operating condition, focusing on: (1) whether Amtrak can overcome its financial and operating problems; and (2) alternative actions Amtrak could take to meet its future funding requirements. GAO found that: (1) Amtrak's financial condition has declined steadily since 1990 and its ability to provide nationwide service is seriously threatened; (2) although Amtrak's funding has increased to almost $1 billion in 1995, the increase has not been sufficient to cover its operating deficits and capital investment, equipment, and facility improvement requirements; (3) although Amtrak has assumed debt, deferred maintenance, and reduced staffing to address its capital shortfall, these actions have diminished the quality and reliability of Amtrak's service; (4) it is unlikely that Amtrak can overcome its financing, capital investment, and service quality problems without significant increases in passenger revenues or funding; (5) Amtrak revenues have suffered from an unfavorable operating environment and intense fare competition from airlines; (6) Amtrak faces additional equipment and facility maintenance costs and must negotiate new agreements with freight railroads to access their track; (7) Amtrak could substantially increase its funding by making capital investments and improving service quality to retain current riders and attract new ones, but this approach would be costly and difficult to achieve in the current budget environment; (8) the privatization of Amtrak is not feasible because few private firms would be willing to assume the risks of providing intercity passenger service; and (9) Amtrak could realign or reduce its current route system and retain service in the locations where it could cost-effectively carry the largest number of passengers.
With the passage of the Aviation and Transportation Security Act (ATSA) in November 2001, TSA assumed from the Federal Aviation Administration (FAA) the majority of the responsibility for civil aviation security, including the commercial aviation system. ATSA required that TSA screen 100 percent of checked baggage using explosive detection systems by December 31, 2002. As it became apparent that certain airports would not meet the December 2002 deadline, the Homeland Security Act of 2002 in effect extended the deadline to December 31, 2003, for noncompliant airports. Under ATSA, TSA is responsible for the procurement, installation, and maintenance of explosive detection systems used to screen checked baggage for explosives. Airport operators and air carriers continued to be responsible for processing and transporting passengers’ checked baggage from the check-in counter to the airplane. Explosive detection systems include EDS and ETD machines (fig. 1). EDS uses computer-aided tomography X-rays adapted from the medical field: by taking the equivalent of hundreds of X-ray pictures of a bag from different angles, EDS examines the objects inside the baggage to automatically recognize the characteristic signatures of threat explosives. TSA has certified, procured, and deployed EDS manufactured by three companies—L-3 Communications Security and Detection Systems (L-3); General Electric InVision, Inc. (GE InVision); and Reveal Imaging Technologies, Inc. (Reveal). In general, EDS is used for checked baggage screening. ETD machines work by detecting vapors and residues of explosives. Human operators collect samples by rubbing bags with swabs, which are then chemically analyzed in the ETD machine to identify any traces of explosive materials. ETD machines are used for both checked baggage and passenger carry-on baggage screening. 
TSA has certified, procured, and deployed ETD machines from three manufacturers: Thermo Electron Corporation, Smiths Detection, and General Electric Company. TSA’s EDS and ETD maintenance contracts provide for preventative and corrective maintenance. Preventative maintenance includes scheduled activities, such as changing filters or cleaning brushes, to increase machine reliability; these activities are performed monthly, quarterly, or yearly based on the contractors’ maintenance schedules. Corrective maintenance includes actions performed to restore machines to operating condition after failure, such as repairing the conveyor belt mechanism after a bag jams the machine. TSA is responsible for EDS and ETD maintenance costs after warranties on the machines expire. From June 2002 through March 2005, Boeing was the prime contractor primarily for the installation and maintenance of EDS and ETD machines at over 400 U.S. airports. TSA officials stated that the Boeing contract was awarded at a time when TSA was a new agency with many demands and extremely tight schedules for meeting numerous congressional mandates related to passenger and checked baggage screening. The cost reimbursement contract entered into with Boeing had been competitively bid and contained renewable options through 2007. Boeing subcontracted for EDS maintenance through firm-fixed-price contracts with the original EDS manufacturers, GE InVision and L-3, which performed the maintenance on their respective EDS. Boeing subcontracted for ETD maintenance through a firm-fixed-price contract with Siemens. Consistent with language in the fiscal year 2005 House Appropriations Committee report and due to TSA’s acknowledgment of Boeing’s failure to control costs, TSA received DHS authorization to negotiate new EDS and ETD maintenance contracts in January 2005. In March 2005, TSA signed firm-fixed-price contracts for EDS and ETD maintenance. 
TSA awarded a competitively bid contract to Siemens to provide maintenance for ETD machines. According to TSA, it negotiated sole source contracts with L-3 and GE InVision for maintaining their respective EDS because they are the original equipment manufacturers and owners of the intellectual property rights of their respective EDS. In September 2005, TSA awarded a competitively bid firm-fixed-price contract to Reveal for both the procurement and maintenance of a reduced-size EDS. TSA obligated almost $470 million from fiscal year 2002 through fiscal year 2005 for EDS and ETD maintenance, according to TSA budget documents. In fiscal year 2006, TSA estimates it will spend $199 million and has projected it will spend $234 million in fiscal year 2007. According to TSA officials, in fiscal year 2004, TSA requested and received approval to reprogram about $32 million from another account to EDS/ETD maintenance due to higher levels of maintenance costs than expected. Similarly, in fiscal year 2005, TSA requested and received approval to reprogram $25 million to fund the L-3 contract and to close out the Boeing contract. TSA was not able to provide us with data on the maintenance cost per machine before fiscal year 2005 because, according to TSA officials, TSA’s previous contract with Boeing to maintain EDS and ETD machines was not structured to capture these data. Table 1 identifies the maintenance costs by type of EDS and ETD machine for fiscal years 2005 and 2006. TSA did not provide us with projections of EDS and ETD maintenance costs beyond 2007. TSA officials told us that future costs will be influenced by the number, type, and locations of machines necessary to support system configurations at airports, such as the extent to which EDS are integrated with airport baggage conveyor systems or are operated in stand-alone modes. 
Further, TSA officials told us that future EDS and ETD maintenance costs are dependent on decisions related to the deployment of new technologies and the refurbishment of existing equipment, among other things. The current contracts contain negotiated maintenance prices per machine through March 2009, if TSA decides to exercise the option years in the contracts. We identified different factors that have played a role in costs to date and that will influence future maintenance costs for EDS and ETD machines. According to a September 2004 DHS OIG report, TSA did not follow sound contracting practices in administering the Boeing contract, which was primarily for the installation and maintenance of EDS and ETD machines. According to DHS OIG officials, TSA’s failure to control costs under the Boeing contract, including the lack of sound contracting practices, contributed to increases in maintenance costs. Among other things, the DHS OIG report stated that TSA had paid provisional award fees totaling $44 million through December 2003 without any evaluation of Boeing’s performance. In response to the DHS OIG, TSA agreed to recover any excessive award fees paid to Boeing, if TSA determined that such fees were not warranted. In commenting on our draft report in July 2006, DHS stated that TSA has conducted a contract reconciliation process to ensure that no fees would be paid on costs that exceeded the target due to poor contractor performance. Further, DHS stated that TSA and Boeing had reached an agreement in principle on this matter and that the documentation was in the approval process with closure anticipated in July 2006. 
In its report accompanying the DHS Appropriations Bill for fiscal year 2007, the House Appropriations Committee stated its need for a report from TSA on any actions it has taken to collect excessive award fees, how much of the fees have been received to date, and specific plans to obligate these collections and cited TSA’s plans to use any cost recoveries to purchase and install additional EDS. These actions were based on the committee’s long-standing concerns about the increasing costs for EDS and ETD maintenance. In addition to matters related to the Boeing contract, TSA officials stated that another factor contributing to cost increases was the larger-than-expected number of machines that came out of warranty and their related maintenance costs. According to TSA officials, they were not able to determine the cost impact of these additional machines because the Boeing contract was not structured to provide maintenance costs for individual machines. With regard to future EDS and ETD maintenance costs under firm-fixed-price contracts, maintenance costs per machine will increase primarily by an annual escalation factor in the contracts that takes into account the employment cost index and the consumer price index, if TSA decides to exercise contract options. In addition, future maintenance costs may be affected by a range of factors, including the number of machines deployed and out of warranty, conditions under which machines operate, contractor performance requirements, the emergence of new technologies or improved equipment, and alternative screening strategies. Lastly, life-cycle cost estimates were not developed for the Boeing, Siemens, L-3, and GE InVision contracts before the maintenance contracts were executed, and, as a result, TSA did not have a sound estimate of maintenance costs for all the years the machines are expected to be in operation. In August 2005, TSA hired a contractor to define parameters for a life-cycle cost model, among other things. 
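The escalation-factor pricing described above amounts to compounding an annual rate onto a negotiated per-machine price for each option year exercised. The sketch below is illustrative only; the base price and the 4 percent rate are hypothetical, not figures from TSA's contracts, and TSA's actual factor is derived from the employment cost index and the consumer price index.

```python
# Illustrative sketch of escalation-factor pricing for contract option
# years. All numbers are hypothetical; TSA's actual escalation factor
# takes into account the employment cost index and consumer price index.

def escalate(base_price: float, rate: float, option_year: int) -> float:
    """Per-machine price after compounding the annual escalation factor."""
    return round(base_price * (1 + rate) ** option_year, 2)

base = 10_000.0  # hypothetical negotiated base-year price (dollars)
for year in range(5):  # base year plus four 1-year options
    print(f"option year {year}: ${escalate(base, 0.04, year):,.2f}")
```

Under this scheme, price certainty comes from the fact that the only year-over-year change is the pre-negotiated factor, rather than the contractor's actual cost experience.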
This contract states that TSA and the contractor will work together to ensure that the full scope of work is planned, coordinated, and executed according to approved schedules. In commenting on our draft report in July 2006, DHS stated that the TSA contractor estimated completing a prototype life-cycle cost model by September 2006. Further, DHS stated that TSA’s evaluation of the prototype would begin immediately upon delivery and that full implementation of an EDS life-cycle cost model would be completed within 12 months after the prototype had been approved. According to a TSA official, the life-cycle cost model would be useful in determining machine reliability and maintainability and in informing future contract decisions, such as when to replace a machine versus continuing to repair it. We identified several actions TSA has taken to control EDS and ETD maintenance costs. First, TSA entered into firm-fixed-price contracts starting in March 2005 with maintenance contractors, which offer TSA certain advantages over cost reimbursement contracts because price certainty is guaranteed for up to 5 years if TSA exercises options to 2009. Also, TSA included several performance requirements in the Siemens, L-3, GE InVision, and Reveal contracts, including the collection of metrics related to machine reliability, maintainability, and availability and the reporting of specific cost data related to maintenance and repair. TSA officials told us that these data will assist them in monitoring contractor performance as well as in informing future contract negotiations for equipment and maintenance. These contracts also stipulate that maintenance contractors meet monthly with TSA to review all pertinent technical, schedule, and cost aspects of the contracts. TSA also incorporated provisions in the L-3 and GE InVision contracts to specify that the agreed price for maintaining EDS would be paid only if the contractor performs within specified mean downtime (MDT) requirements. 
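A life-cycle cost model of the kind TSA's contractor was developing supports decisions such as when to replace a machine versus continuing to repair it. The sketch below is a simplified illustration of that repair-versus-replace comparison; all figures are hypothetical and are not TSA data or TSA's model.

```python
# Simplified repair-versus-replace comparison of the kind a life-cycle
# cost model supports. All figures are hypothetical, not TSA data:
# maintenance on an aging machine grows each year; replacing resets
# maintenance to a new machine's level at a one-time purchase cost.

def total_cost(annual_costs) -> float:
    """Sum of yearly costs over the planning horizon."""
    return sum(annual_costs)

years = 10
keep = [30_000 * 1.10 ** y for y in range(years)]                     # keep repairing
replace = [250_000] + [20_000 * 1.03 ** y for y in range(years - 1)]  # buy new now

print(f"keep:    ${total_cost(keep):,.0f}")
print(f"replace: ${total_cost(replace):,.0f}")
```

With these assumed growth rates, the comparison shows why a model covering all the years a machine is expected to be in operation matters: a decision that looks cheaper in any single year can be more expensive over the full horizon.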
Contractors submit monthly invoices for 95 percent of the negotiated contract price for the month and then submit a MDT report to justify the additional 5 percent. Consequently, if the contractor fails to fulfill the MDT requirements, it is penalized 5 percent of the negotiated monthly maintenance price. As of February 2006, neither GE InVision nor L-3 had been penalized for missing their MDT requirements. The allowable MDT is lowered from 2005 to subsequent renewable years in the contract, as shown in table 2. With regard to TSA’s oversight of EDS and ETD contractor performance, TSA’s acquisition policies and GAO standards for internal controls call for documenting transactions and other significant events, such as monitoring contractor activities. The failure of TSA to develop internal controls and performance measures has been recognized by other GAO and DHS OIG reviews. TSA has policies and procedures for monitoring its contracts and has included contractor performance requirements in the current EDS and ETD maintenance contracts. However, TSA officials provided no evidence that they are reviewing maintenance cost data provided by the contractor because they are not required to document such activities. For example, even though TSA officials told us that they are reviewing required contractor data, including actual maintenance costs related to labor hours and costs associated with replacing and shipping machine parts, they did not have any documentation to support this. TSA officials told us that they have begun to capture these data to assist them in any future contract negotiations. Further, TSA officials provided no evidence that performance data for corrective and preventative maintenance required under contracts are being reviewed. TSA officials told us that they perform such reviews, but do not document their activities since there are no TSA policies or procedures requiring them to do so. 
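The MDT-based payment mechanism described above can be sketched in a few lines. This is an illustrative sketch, not TSA's contract logic; the monthly price, downtime figures, and allowable threshold are hypothetical.

```python
# Sketch of the mean downtime (MDT) payment mechanism: MDT is the hours
# a machine is out of service in a month divided by the number of
# outages that month; the contractor invoices 95 percent of the monthly
# price up front and earns the remaining 5 percent only if MDT is within
# the allowable limit. All values here are hypothetical.

def mean_downtime(hours_out: float, outages: int) -> float:
    return hours_out / outages if outages else 0.0

def monthly_payment(price: float, hours_out: float, outages: int,
                    allowed_mdt: float) -> float:
    base = 0.95 * price
    holdback = 0.05 * price if mean_downtime(hours_out, outages) <= allowed_mdt else 0.0
    return base + holdback

# Hypothetical: $10,000/month machine, 12 hours down across 4 outages,
# 4-hour allowable MDT. MDT is 3 hours, so the full price is paid.
print(monthly_payment(10_000, 12, 4, 4.0))
```

Note that the 5 percent holdback is all-or-nothing for the month, which is why validating the contractor-submitted MDT data matters before releasing the final payment.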
Therefore, TSA could not provide assurance that contractors are complying with contract performance requirements. For example, although TSA documents monthly meetings with contractors to discuss performance data, TSA officials did not provide evidence that they independently determine the reliability and validity of data required by the contracts, such as mean time between failures and mean time to repair, which are important to making informed decisions about future purchases of EDS and ETD equipment and their associated maintenance costs. Further, TSA officials provided no evidence that they ensure that contractors are performing scheduled preventative maintenance. TSA officials told us that they review the contractor-submitted data to determine whether contractors are fulfilling their contractual obligations, but do not document their activities because there are no TSA policies or procedures to require such documentation. Additionally, for EDS contracts with possible financial penalties, TSA officials told us that they review contractor-submitted mean downtime data on a monthly basis to determine the reliability and validity of the data and to determine whether contractors are meeting contract provisions or should be penalized. However, TSA officials do not document these activities because there are no TSA policies or procedures requiring them to do so. As a result, without adequate documentation, there is no assurance that contractors are meeting contract provisions or that TSA is making appropriate payments for services provided. The cost of maintaining checked baggage-screening equipment has increased as more EDS and ETD machines have been deployed and warranties expire. TSA’s move in March 2005 to firm-fixed-price contracts for EDS and ETD maintenance was advantageous to the government in that it helps control present and future maintenance costs. 
Firm-fixed-price contracts also help ensure price certainty and therefore are more predictable. However, unresolved issues remain with former contractor Boeing, specifically fees awarded to Boeing that may have been excessive due to a lack of timely evaluation of the contractor’s performance. The House Appropriations Committee has expressed concern about these unresolved issues, specifically what actions TSA has taken to recover these excessive fees and the extent to which any collections might affect future TSA obligations. Closing out the Boeing contract is essential to resolving these issues. In responding to our draft report, DHS stated that the completion of an EDS life-cycle cost model is over a year away. Absent such a life-cycle cost model, TSA may not be identifying cost efficiencies and making informed procurement decisions regarding the future purchase of EDS and ETD machines and maintenance contracts. Further, TSA must provide evidence of its reviews and analyses of contractor-submitted data and perform analyses of contractor data to determine the reliability and validity of the data and to provide assurance of compliance with contract performance requirements and internal control standards. Without stronger oversight, TSA will not have reasonable assurance that contractors are performing as required and that full payment is justified based on meeting mean downtime requirements. 
To help improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance, we recommend that the Secretary of Homeland Security instruct the Assistant Secretary, Transportation Security Administration, to take the following three actions: establish a timeline to complete its evaluation and close out the Boeing contract and report to congressional appropriations committees on its actions, including any necessary analysis, to address the Department of Homeland Security Office of Inspector General’s recommendation to recover any excessive fees awarded to Boeing Service Company; establish a timeline for completing life-cycle cost models for EDS, which TSA recently began; and revise policies and procedures to require documentation of the monitoring of EDS and ETD maintenance contracts to provide reasonable assurance that contractor maintenance cost data and performance data are recorded and reported in accordance with TSA contractual requirements and self-reported contractor mean downtime data are valid, reliable, and justify the full payment of the contract amount. We provided a draft of this report to DHS for its review and comment. On July 24, 2006, we received written comments on the draft report. DHS, in its written comments, concurred with our findings and recommendations, and agreed that efforts to implement these recommendations are essential to a successful explosive detection systems program. DHS stated that it has initiated efforts to improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance. 
Regarding our recommendation that TSA establish a timeline to close out the Boeing contract and report to congressional committees on its actions to recover any excessive fees, DHS stated that TSA has conducted a contract reconciliation process to ensure that no fees would be paid on costs that exceeded the target due to poor contractor performance and that Boeing and TSA have reached an agreement in principle on this matter and the documentation is in the approval process with closure anticipated in July 2006. Regarding our recommendation to establish a timeline for completing the EDS life-cycle cost model, DHS stated that TSA expects to complete its prototype evaluation in September 2006 and that the EDS life-cycle cost model will be completed 12 months after the prototype has been approved. Regarding our recommendation to revise TSA policies and procedures to require documentation of its monitoring of EDS and ETD maintenance contracts, DHS stated that a TSA contractor is developing automated tools to perform multiple analyses of contractor-submitted data that DHS said would allow TSA to accurately and efficiently certify the contractors’ performance against their contractual requirements and would allow TSA to independently validate and verify maintenance and cost data. The department’s comments are reprinted in appendix II. We will send copies of this report to the Secretary of Homeland Security and the Assistant Secretary, Transportation Security Administration, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions or need additional information, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are acknowledged in appendix III. H.R. Conf. Rep. No. 109-241, at 52 (2005). TSA interprets the term explosive detection system to include both explosive detection systems (EDS) and explosive trace detection (ETD) machines.

Our review addressed the following questions: What were TSA’s costs for the maintenance of explosive detection systems (EDS) and explosive trace detection (ETD) machines? What factors played a role in EDS and ETD maintenance costs and what factors could affect future costs? What has TSA done to control EDS and ETD maintenance costs? To what extent does TSA oversee the performance of EDS and ETD maintenance contractors?

To determine TSA costs to maintain EDS and ETD machines, we reviewed TSA contract files and budget documents for fiscal years 2003 through 2007, and interviewed TSA headquarters officials, Department of Homeland Security Office of the Inspector General (DHS OIG) officials, and EDS and ETD contractor representatives. For purposes of our review, we focused on the amounts obligated under contracts to maintain the machines. We did not review TSA’s negotiations for maintenance services or the process for awarding contracts, nor did we assess other direct or indirect costs related to TSA or DHS employees engaged in contract administration or other related items. To determine what factors played a role in maintenance costs and what TSA has done to control costs, we reviewed TSA contract files, acquisition and strategic plans, budget documents, TSA processes for reviewing contract cost and performance data, and a DHS OIG report; and interviewed TSA headquarters officials, DHS OIG officials, and EDS and ETD contractor representatives. 
To determine the extent of TSA contract oversight, we reviewed TSA contract files and processes for reviewing contract performance data, interviewed TSA headquarters officials and EDS and ETD contractor representatives, and reviewed GAO standards for internal controls. We performed our work from January 2006 through June 2006 in accordance with generally accepted government auditing standards. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1999). According to TSA budget documents, TSA has obligated almost $470 million from fiscal year 2002 through fiscal year 2005 for EDS and ETD maintenance. In fiscal year 2006, TSA estimates it will spend $199 million and has projected it will spend $234 million in fiscal year 2007. TSA was unable to provide us data on maintenance cost per machine prior to fiscal year 2005 because, according to TSA officials, its previous contract with Boeing Service Company (Boeing) to maintain EDS and ETD machines was not structured to capture these data. TSA did not provide us with projections of EDS and ETD maintenance costs beyond fiscal year 2007, although TSA has negotiated maintenance prices per machine through fiscal year 2009. TSA was mandated to screen all checked baggage using explosive detection systems at airports by December 31, 2003. Explosive Detection Systems (EDS) use computer-aided tomography X-rays to recognize the characteristics of explosives. In general, EDS are used for checked baggage screening. Explosive Trace Detection (ETD) machines use chemical analysis to detect traces of explosive material vapors or residues. ETD machines are used for both passenger carry-on baggage and checked baggage screening. According to TSA budget documents, TSA will have deployed over 1,400 EDS and 6,600 ETD machines at baggage screening locations in over 400 airports nationwide by the end of fiscal year 2006. The Aviation and Transportation Security Act, Pub. L. No. 
107-71 § 110(b), 115 Stat. 597, 615 (2001) mandated, among other things, that all checked baggage at U.S. airports be screened using explosive detection systems by December 31, 2002. Section 425 of the subsequently enacted Homeland Security Act of 2002, Pub. L. No. 107-296, 116 Stat. 2135, 2185-86, in effect, extended this mandate to December 31, 2003. See 49 U.S.C. § 44901(d). TSA is responsible for EDS and ETD maintenance costs after machine warranties expire. EDS and ETD maintenance includes preventative maintenance—scheduled activities to increase machine reliability that are performed monthly, quarterly, and yearly based on the contractors’ maintenance schedules—and corrective maintenance—actions performed to restore machines to operating condition after failure. A TSA official told us that typical EDS warranties are for one year and that ETD warranties are for 2 years. From June 2002 through March 2005, Boeing was the prime contractor for the installation and maintenance of EDS and ETD machines at over 400 U.S. airports. TSA officials stated that the Boeing contract was awarded at a time when TSA was a new agency with many demands and extremely tight schedules for meeting numerous congressional mandates related to passenger and checked baggage screening. Boeing had a cost reimbursement contract with TSA, which was competitively bid and contained renewable options to 2007. Firm fixed price contracts provide for a price that is not subject to any adjustment on the basis of the contractor’s cost experience in performing the contract. This contract type places upon the contractor maximum risk and full responsibility for all costs and resulting profit and loss. It provides maximum incentive for the contractor to control costs and perform effectively and imposes a minimum administrative burden upon the contracting parties. In March 2005, TSA signed firm fixed price contracts for EDS and ETD maintenance. TSA awarded a competitively bid contract to Siemens to provide maintenance for ETD machines. 
TSA negotiated sole source contracts with L-3 and GE InVision because they are the original equipment manufacturers and owners of the intellectual property of their respective EDS. TSA can exercise 4 1-year options on all three contracts through March 2009. In September 2005, TSA awarded a competitively bid firm fixed price contract to Reveal Imaging Technologies, Inc. (Reveal), for both the procurement and maintenance of a reduced-size EDS. According to TSA budget documents, TSA has obligated almost $470 million for EDS and ETD maintenance from fiscal years 2002 through 2005. Funding for EDS and ETD machine maintenance grew from $14 million in fiscal year 2002 to an estimated $199 million in fiscal year 2006. In fiscal year 2007, TSA projects it will spend $234 million.

EDS and ETD Machine Maintenance Budget Amounts, Fiscal Years 2002 through 2007 (in millions)
Fiscal year:                 2003 | 2004 | 2005 | 2006 | 2007
Appropriated (as revised):    $75 | $100 | $205 | $200 | $234

TSA was unable to provide the maintenance cost per machine prior to fiscal year 2005 because, according to TSA officials, its previous contract with Boeing to maintain EDS and ETD machines was not structured to capture these data. According to TSA officials, in fiscal year 2004, TSA requested and received approval to reprogram about $32 million due to higher levels of maintenance costs than expected. In fiscal year 2005, TSA requested and received approval to reprogram $25 million to fund the L-3 contract ($16.6 million) and to close out the Boeing contract ($8.4 million), which has yet to be closed. TSA officials did not provide us with projections of costs beyond 2007. However, current contracts have negotiated maintenance prices per machine through March 2009, if TSA decides to exercise option years in the contracts. Future EDS and ETD maintenance costs depend on decisions made as outlined in a February 2006 TSA strategic planning framework for screening checked baggage using EDS and ETD. Among other things, the plan discusses options for the deployment of new technologies and refurbishment of existing equipment. 
Different factors have played a role in costs to date and will influence future maintenance costs for EDS and ETD machines. According to a September 2004 DHS OIG report, TSA did not follow sound contracting practices in administering the Boeing contract, which was primarily for the installation and maintenance of EDS and ETD machines. Among other things, the DHS OIG found that TSA had paid provisional award fees totaling $44 million through December 2003 without any evaluation of Boeing’s performance. GAO has identified similar instances of agencies’ failure to properly use incentives in making award fees. See GAO, Defense Acquisitions: DOD Has Paid Billions in Award and Incentive Fees Regardless of Acquisition Outcomes, GAO-06-66 (Washington, D.C.: December 2005). For EDS contracts, future labor and material costs could not be determined, so TSA negotiated an escalation factor to be used to determine pricing for the contract option years. For the ETD contracts, TSA determined, after a review of cost data, that it would apply a 4 percent escalation factor to prices in the contract option years. The employment cost index is a measure of the change in the cost of labor, free from the influence of employment shifts among occupations and industries. The consumer price index is a measure of the average change in prices over time of goods and services purchased by households. Future maintenance costs may be affected by a range of factors, including the number of machines deployed and out of warranty, conditions under which machines operate, mean downtime requirements, the emergence of new technologies or improved equipment, and alternative screening strategies. TSA’s February 2006 strategic plan framework for screening checked baggage over the next 20 years discusses factors that may impact future maintenance costs. For example, the framework discusses the refurbishment of existing machines and the deployment of new technologies, but does not outline the number of machines or specific time frames for implementation. 
Additionally, the impact of these strategies on future maintenance costs is unknown. If no new equipment or maintenance providers emerge, TSA may pay a premium in future sole source contracts where intellectual property rights are involved. For example, because L-3 and GE InVision had intellectual property rights on their machines, their maintenance contracts were not bid competitively and therefore prices were not subject to the benefits of market forces. TSA issued its strategic plan framework for screening checked baggage using EDS and ETD machines in response to various congressional mandates, congressional committee directives, and GAO recommendations. Life-cycle cost estimates were not developed for the Boeing, Siemens, L-3, and GE contracts before the maintenance contracts were executed and, as a result, TSA did not have a complete picture of all maintenance costs. In August 2005, TSA hired a contractor to define parameters for a lifecycle cost model. A TSA official told us that the contractor began work on a lifecycle cost model for EDS in February 2006 and did not know when the model would be completed. TSA’s firm fixed price contracts guarantee price certainty for up to five years if TSA exercises options to 2009. TSA did not provide per-unit maintenance costs prior to March 2005 because the Boeing contract was not structured to capture these data.

ETD machine maintenance quantities and costs, fiscal years 2005 and 2006:
ETD machine            | FY2005 quantity | FY2005 cost | FY2006 quantity | FY2006 cost
Smiths Ionscan 400A    |             241 |     $10,525 |             336 |     $10,974
Smiths Ionscan 400AE   |               5 |     $10,525 |               6 |     $10,974
Smiths Ionscan 400B    |           3,038 |      $8,580 |           3,035 |      $8,946
Thermo EGIS 3000       |               2 |     $12,899 |               2 |     $13,526
Thermo EGIS II         |             425 |     $13,134 |             545 |     $13,695
GE Iontrack Itemiser-W |           2,302 |      $7,727 |           2,322 |      $8,057
NOTE: Maintenance costs represent the negotiated prices in the maintenance contracts for EDS and ETD machines.

TSA included several contractor performance requirements in the Siemens, L-3, GE InVision, and Reveal contracts. Metrics related to Reliability, Maintainability, and Availability (RMA) of the machines must be reported to TSA. Specific cost data related to maintenance and repair must be reported to TSA. 
Contractors are required to meet monthly with TSA to review all pertinent technical, schedule, and cost aspects of the contract, including an estimate of the work to be accomplished in the next month; performance measurement information; and any current and anticipated problems. RMA metrics include mean time between failures (generally the total time a machine is available to perform its required mission divided by the number of failures over a given period of time) and operational availability (generally the percentage of time, during operational hours, that a machine is available to perform its required mission). Such reliability, maintainability, and availability data are standard and appropriate performance requirements for maintenance contracts. Provisions in the L-3 and GE InVision contracts specify that the agreed price for maintaining EDS will be paid only if the contractor performs within specified mean downtime (MDT) requirements. MDT is calculated as the number of hours a machine is out of service in a month divided by the number of times that machine is out of service per month. Contractors submit monthly invoices for 95 percent of the negotiated contract price for the month and then submit an MDT report to justify the additional 5 percent. Consequently, if the contractor fails to fulfill the MDT requirements, it is penalized 5 percent of the negotiated monthly maintenance price. As of February 2006, neither GE InVision nor L-3 had been penalized for missing their MDT. The allowable MDT is lowered from 2005 to subsequent renewable years in the contract, as shown in the table below. TSA’s acquisition policies and GAO’s standards for internal controls call for documenting transactions and other significant events, such as monitoring contractor activities. However, TSA officials provided no evidence that they are reviewing maintenance cost data provided by the contractor because they are not required to document such activities. 
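The reliability metrics these contracts require can be illustrated with a short computation. The figures below are hypothetical; only the two formulas, mean time between failures and operational availability, follow the general definitions given in this report.

```python
# Illustrative computation of two RMA metrics (hypothetical figures):
# mean time between failures is available time divided by the number of
# failures; operational availability is the share of operational hours
# during which the machine was available.

def mtbf(available_hours: float, failures: int) -> float:
    """Mean time between failures; infinite if no failures occurred."""
    return available_hours / failures if failures else float("inf")

def operational_availability(operational_hours: float, downtime_hours: float) -> float:
    """Fraction of operational hours the machine was available."""
    return (operational_hours - downtime_hours) / operational_hours

# Hypothetical month: 720 operational hours, 3 failures, 36 hours down.
print(mtbf(720, 3))                       # hours between failures
print(operational_availability(720, 36))  # fraction of hours available
```

Independently recomputing metrics like these from raw outage logs, rather than accepting contractor-submitted summaries, is the kind of validation the report finds TSA could not document.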
For example, even though TSA officials told us they are reviewing required contractor data, including actual maintenance costs related to labor hours, costs associated with replacement parts, and the costs of shipping machine parts, they did not have any documentation to support this. TSA officials told us that they have begun to capture these data to assist them in any future contract negotiations. TSA officials provided no evidence that performance data for corrective and preventive maintenance required under the contract are being reviewed. TSA officials told us that they perform such reviews, but do not document their activities since there are no TSA policies or procedures requiring them to do so. Therefore, TSA could not provide assurance that contractors are complying with contract performance requirements. For example, although TSA documents monthly meetings with contractors to discuss performance data, TSA did not provide evidence that it independently determines the reliability and validity of data required by the contracts, such as mean time between failures and mean time to repair, which are important to making informed decisions about future purchases of EDS and ETD equipment and their associated maintenance costs. (GAO's standards for internal control are set out in GAO/AIMD-00-21.3.1.) For EDS contracts with possible financial penalties, TSA officials told us that they review contractor-submitted mean downtime data on a monthly basis to determine the reliability and validity of the data and to determine whether contractors are meeting contract provisions or should be penalized. However, TSA officials said they do not document these activities because there are no TSA policies or procedures to do so. As a result, without adequate documentation, there is no assurance that contractors are meeting contract provisions or that TSA is making appropriate payments for services provided.
TSA’s move to firm-fixed-price maintenance contracts was advantageous to the government because it helps control present and future maintenance costs. Firm-fixed-price contracts also ensure price certainty and therefore make costs more predictable. Unresolved issues remain with the past contractor; specifically, fees awarded to former contractor Boeing may have been excessive because of a lack of timely evaluation of the contractor’s performance. Although TSA has begun to develop a lifecycle cost model to control costs and inform future contract negotiations, TSA has not set a time frame to complete this model. Without such a time frame, TSA may not be identifying cost efficiencies and making informed procurement decisions. Further, TSA needs to document its reviews and analyses of contractor-submitted data to determine the reliability and validity of the data and to provide assurance of contractor compliance with contract performance requirements and internal control standards. Without stronger oversight, TSA will not have reasonable assurance that contractors are performing as required and that full payment is justified based on meeting mean downtime requirements.
To strengthen oversight of contract performance, we recommend that the Secretary of Homeland Security instruct the Assistant Secretary, Transportation Security Administration, to take the following three actions: report to the congressional appropriations committees on its actions, including any necessary analysis, to address the DHS OIG recommendation to recover any excessive fees awarded to Boeing; establish a time line for completing the lifecycle cost model for EDS, which TSA has begun to develop; and revise its policies and procedures to require documentation of its monitoring of EDS and ETD maintenance contracts to provide reasonable assurance that contractor maintenance cost data and performance data are recorded and reported in accordance with TSA contractual requirements and that self-reported contractor mean downtime data are valid, reliable, and justify the full payment of the contract amount. TSA reviewed these slides in their entirety and provided several technical comments, which we incorporated as appropriate. TSA officials told us that they are not making formal comments on our recommendations. In addition to the contact named above, Charles Bausell, R. Rochelle Burns, Glenn Davis, Katherine Davis, Michele Fejfar, Richard Hung, Nancy Kawahara, Dawn Locke, Thomas Lombardi, Robert Martin, and William Woods made key contributions to this report.
Mandated to screen all checked baggage by using explosive detection systems at airports by December 31, 2003, the Transportation Security Administration (TSA) has deployed two types of screening equipment: explosive detection systems (EDS), which use computer-aided tomography X-rays to recognize explosives, and explosive trace detection (ETD) systems, which use chemical analysis to detect explosive residues. This report discusses (1) EDS and ETD maintenance costs, (2) factors that played a role in these costs, and (3) the extent to which TSA conducts oversight of maintenance contracts. GAO reviewed TSA's contract files and processes for reviewing contractor cost and performance data. TSA obligated almost $470 million from fiscal years 2002 through 2005 for EDS and ETD maintenance, according to TSA budget documents. In fiscal year 2006, TSA estimates it will spend $199 million and has projected it will spend $234 million in fiscal year 2007. TSA was not able to provide GAO with data on the maintenance cost per machine before fiscal year 2005 because, according to TSA officials, its previous contract with Boeing to install and maintain EDS and ETD machines was not structured to capture these data. Several factors have played a role in EDS and ETD maintenance costs. According to a September 2004 Department of Homeland Security's Office of Inspector General report, TSA did not follow sound contracting practices in administering the contract with Boeing, and TSA paid provisional award fees totaling $44 million through December 2003 without any evaluation of Boeing's performance. TSA agreed to recover any excessive award fees paid to Boeing if TSA determined that such fees were not warranted. In responding to our draft report, DHS told us that TSA and Boeing had reached an agreement in principle on this matter and that documentation was in the approval process with closure anticipated in July 2006. 
Moreover, TSA did not develop life-cycle cost models before any of the maintenance contracts were executed and, as a result, TSA does not have a sound estimate of maintenance costs for all the years the machines are expected to be in operation. DHS also stated in its comments on our draft report that a TSA contractor expected to complete a prototype life-cycle cost model by September 2006 and that TSA anticipated that the EDS model would be completed 12 months after the prototype was approved. Without such an analysis, TSA may not be identifying cost efficiencies and making informed procurement decisions on future purchases of EDS and ETD machines and maintenance contracts. TSA has taken actions to control costs, such as entering into firm-fixed-price contracts for maintenance starting in March 2005, which have advantages to the government because price certainty is guaranteed. Further, TSA incorporated standard performance requirements in the contracts including metrics related to machine reliability and monthly performance reviews. For EDS contractors, TSA has specified that the full agreed price would be paid only if mean downtime (i.e., the number of hours a machine is out of service in a month divided by the number of times that machine is out of service per month) requirements are met. Although TSA has policies for monitoring contracts, TSA officials provided no evidence that they are reviewing required contractor-submitted performance data, such as mean downtime data. TSA officials told GAO that they perform such reviews, but do not document their activities because there are no TSA policies and procedures requiring them to do so. As a result, without adequate documentation, TSA does not have reasonable assurance that contractors are performing as required and that full payment is justified based on meeting mean downtime requirements.
The difference in price between what farmers receive for their raw milk and what consumers pay for fluid milk products has increased in recent years. This growing spread between farm and retail prices may be attributable to a number of factors at each level of the milk marketing chain, including supply and demand forces, changes in input costs to processing and retailing, and the continued concentration of cooperatives, wholesale milk processors, and retailers. A variety of federal policies exist to influence the prices that farmers receive. However, the effects of these policies may not be uniform; they can affect different sizes of farms or regions of the country in different ways. Moreover, policies that benefit farm income may adversely affect other policy considerations such as economic efficiency and federal costs. Given the complexity of federal dairy policy, the decision to change existing policies or introduce new policies requires consideration of these potential effects. Examining the effects of policy alternatives on a range of policy considerations will help the Congress formulate federal dairy policy based on comprehensive analyses that consider these alternatives in relation to their effects on different considerations, farm sizes, and regions of the country. In addition, although recent USDA studies have examined some policy options, there are other potential policy options to consider, as discussed in this report. To continue facilitating informed decision making by USDA and the Congress, we recommend that the Secretary of Agriculture build on GAO’s analysis of the potential effects of various dairy policy options as USDA proposes future changes to current dairy laws or regulations or provides information to the Congress in response to congressional proposals. We provided a draft of this report to USDA and DOD for their review and comment.
We received written comments from USDA’s Under Secretary for Farm and Foreign Agricultural Services and Under Secretary for Marketing and Regulatory Programs, which are presented in appendix VIII. USDA also provided suggested technical corrections, which we have incorporated into this report, as appropriate. These technical corrections were offered by several USDA agencies, including the Agricultural Marketing Service, Economic Research Service, Farm Service Agency, and Office of the Chief Economist. DOD had no comments on the draft report. In its written comments, USDA said the information provided in the report on milk prices at the farm, cooperative, and retail levels is valid. However, USDA said it has reservations regarding our use of prices paid for fluid milk at commissaries as an indicator of the wholesale price of fluid milk and that we should make clear the weaknesses of using commissary price data. USDA acknowledged, however, that there seems to be no viable alternative. During the course of our work, we were unable to obtain wholesale price data because these data are considered proprietary by industry officials. After consulting with USDA officials and other dairy experts, we determined that commissary price data were the best surrogate because commissaries generally sell milk at a standard 5 percent markup from cost. Based on USDA’s comments, we expanded the discussion in the report of the potential limitations of using commissary data. USDA said it largely agrees with the report’s discussion of the factors that influence the price of milk as it moves from the farm to the consumer and the report’s characterization of economic studies of price transmission in the U.S. fluid milk market. However, USDA expressed some concerns regarding the report’s discussion of recent federal dairy program changes and alternative policy options. 
First, USDA said that this discussion appears to be a compilation of policy recommendations that are examined independently and qualitatively within the existing program structure. Our discussion of dairy policy options does not constitute policy recommendations. As stated in the report, to identify these policy options and their potential impacts we relied heavily on a synthesis of the views of leading dairy experts and the results of an extensive literature search, including our review of more than 50 studies and other publications. Time and resource constraints for completing our work precluded us from developing or contracting for the use of an economic model that would have provided quantitative estimates of these potential impacts. In addition, some of the policy options would have been difficult to model and quantify, such as the potential impacts of accelerating USDA’s hearing and rulemaking process for amending FMMOs. The report also notes that we compared the policy options identified against a baseline scenario of policies in place as of August 2004. This baseline scenario existed at the start of our work and was needed to provide a consistent context for our analysis. Second, USDA suggested that we make clear the caveats of this type of analysis. As noted in the report, we examined the impact of federal dairy program changes and policy options on six policy considerations: farm income, milk production, federal costs, price volatility, economic efficiency, and consumer prices. We acknowledge that other stakeholders may have different views on the importance of these policy considerations, or other considerations that we did not include in our analysis. The report also states that the potential effects of policy options on these considerations could vary depending upon economic conditions and other policy decisions. In this regard, we did not assess the options’ overall economic or budgetary impacts, or their consistency with U.S.
international trade commitments or positions in ongoing negotiations. In addition, the report does not identify a preferred option or combination of options. As indicated in the report, each option has varying potential impacts on the policy considerations used in our analysis. Despite these caveats, we believe this analysis is informative and helpful to congressional decision makers who must weigh competing interests in determining dairy policy. USDA also said that in some cases the report mischaracterizes the operation of current programs and the effects that changes to current programs or the introduction of new programs would have on program outlays, producers, and consumers. For example, USDA noted that the report offers several options for improving the operation of the Dairy Export Incentive Program (DEIP), including expanding the use of this program. However, USDA indicated that expanding the use of DEIP is not a legitimate option because, under World Trade Organization (WTO) rules, DEIP is bound by quantitative and monetary caps and product-specific restrictions that limit its use to the current range of eligible dairy commodities. We do not agree that we mischaracterized the operation of this program. The report clearly states that USDA has announced and awarded subsidies under DEIP to the limits allowed by WTO rules for nonfat dry milk and various cheeses. Regarding expansion, the report discusses options suggested by dairy experts for the additional use of this program as an effective marketing tool, and does not call for expanding its use to exceed relevant WTO caps or restrictions. However, we have adjusted the language in the report to make this distinction clearer. USDA also offered several comments regarding the FMMO program. 
Among these, USDA said that the report is incorrect in stating that the objective of this program is “to ensure an adequate level of milk production.” According to USDA, this objective is associated with the Dairy Price Support Program. We have revised the report to reflect this clarification and added language suggested by USDA to better describe the FMMO program’s objectives. In addition, USDA raised concerns about the practicality of implementing some of the options discussed in the report, particularly (1) adopting a competitive pay price to establish class prices under the FMMO program and (2) combining Class III and Class IV into a single manufacturing class. Regarding the first, USDA said that it and a committee of academicians spent considerable time several years ago trying to devise a competitive price series that could be used to establish minimum class prices. However, this effort was unsuccessful. USDA said that our report does not identify or indicate how to create such a price series. Similarly, regarding combining Class III and Class IV, USDA notes that no specifics are offered in the report as to how milk in such a class would be priced. We acknowledge that the report does not explain how a competitive price series could be created or how milk would be priced if the classes were merged. However, these options were identified by stakeholders during the course of our work. Other options discussed in the report also may present challenging implementation issues and in many cases the report discusses those issues. Finally, USDA said that it does not believe the hearing and rulemaking process it uses to modify FMMOs inhibits its ability to respond to changing market conditions or the marketing of new dairy products. However, as discussed in the report, some stakeholders cited the slowness of this process as a concern. 
In addition, the report discusses USDA’s efforts to improve this process to more quickly respond to problems or needed changes while ensuring the promulgation of economically sound regulation. USDA did not comment on the report’s recommendation. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies of the report to the Senate Committee on Agriculture, Nutrition, and Forestry; the House Committee on Agriculture; other appropriate congressional committees; interested Members of Congress; the Secretary of Agriculture; the Secretary of Defense; the Director of the Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IX. In March 2003, Senator Snowe, Chair, Committee on Small Business and Entrepreneurship, joined by Senators Clinton, Collins, Dodd, Jeffords, Kennedy, Kerry, Leahy, Mikulski, Rockefeller IV, Sarbanes, Schumer, and Specter requested that GAO examine a number of issues concerning the pricing and marketing of milk in the United States. Specifically, they asked us to update the information contained in our 2001 report, entitled Dairy Industry: Information on Milk Prices and Changing Market Structure (GAO-01-561, June 15, 2001), and to address other issues. 
This report examines (1) what portion of the retail price of fluid milk is received by dairy farmers, dairy cooperatives, wholesale milk processors, and retailers in selected markets throughout the United States, how this distribution has changed over the period of our review, and the relationships among price changes at these levels; (2) how various factors, such as costs, influence the price of milk as it moves from the farm to the consumer, as well as how these and other factors affect the extent to which changes in price are transmitted among levels as milk moves from the farm to the consumer; and (3) how changes in dairy policies and alternative policy options have affected or might affect farm income, federal costs, economic efficiency, and consumer prices, among other policy considerations. It also updates other information on milk prices included in our June 2001 report. To update our information on the price distribution among the various levels of the milk marketing chain, we analyzed milk prices in 15 selected markets nationwide: Atlanta, Georgia; Boston, Massachusetts; Charlotte, North Carolina; Cincinnati, Ohio; Dallas, Texas; Denver, Colorado; Miami, Florida; Milwaukee, Wisconsin; Minneapolis, Minnesota; New Orleans, Louisiana; Phoenix, Arizona; Salt Lake City, Utah; San Diego, California; Seattle, Washington; and Washington, D.C. In selecting these markets, we ensured that (1) they provided national geographic coverage; (2) at least one market was located in each of the federal milk marketing orders (FMMOs) as they existed during most of the period from October 2000 through May 2004; (3) the selected markets included both state and federally regulated markets; and (4) these areas represented similar marketing areas for which we reported information in our June 2001 report. For the 15 markets, we collected data on the prices received by farmers, cooperatives, wholesale milk processors, and retailers for October 2000 through May 2004. 
We limited our data collection efforts to the prices of whole, 2 percent, 1 percent, and skim milk because sales of these milk types constitute over 93 percent of fluid milk sales annually. We also confined our analysis to the prices of these milk types sold in gallon containers because milk sold in gallon containers accounts for about 65 percent of fluid milk products sold under FMMOs. There is no precise method for calculating the price that farmers receive for raw milk that is ultimately processed and sold in fluid milk products because dairy farmers receive a blend price for their milk, which is the average price for milk used for fluid and manufactured products. Therefore, any calculation of the value received by farmers for raw milk that is to be used for fluid milk products is necessarily only approximate. To estimate a farm price for raw milk used in fluid milk products, we used data provided by the U.S. Department of Agriculture’s (USDA) Agricultural Marketing Service (AMS). AMS developed an adjustment, which accounts for various charges such as hauling and marketing fees, that we subtracted from the announced cooperative Class I price to obtain the estimated farm price for raw milk used in fluid milk products for each of the selected markets in our review except San Diego, which is not part of the FMMO system. AMS’s adjustment accounts for farm-to-plant hauling costs, cooperative dues and capital assessment, mandatory advertising and promotion costs, competitive and receiving credits, and a representative estimate of the value of reimbursements to cooperatives for the services performed for handlers and for transportation costs not covered by the order minimum price. Most of the items that make up the adjustment are not available for the specific fluid milk market that we selected, but rather are based on information collected for milk used over wider geographical areas. 
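The farm-price derivation above is, in effect, a subtraction of summed per-hundredweight adjustment items from the announced cooperative Class I price. A minimal sketch follows; the dollar values are hypothetical stand-ins, since the actual AMS adjustment values are not given here.

```python
# Illustrative sketch of the AMS-style farm price estimate:
#   estimated farm price = announced cooperative Class I price
#                          minus the sum of adjustment items.
# All dollar-per-hundredweight figures below are hypothetical.

ADJUSTMENT_ITEMS = {
    "farm_to_plant_hauling": 0.35,
    "cooperative_dues_and_capital_assessment": 0.10,
    "mandatory_advertising_and_promotion": 0.15,
    "competitive_and_receiving_credits": -0.05,  # credits offset charges
    "cooperative_service_reimbursements": 0.08,  # indirectly estimated items
}

def estimated_farm_price(announced_class_i_price, items=ADJUSTMENT_ITEMS):
    """Subtract the total adjustment from the announced Class I price."""
    adjustment = sum(items.values())
    return announced_class_i_price - adjustment

price = estimated_farm_price(14.50)  # $/cwt, hypothetical Class I price
print(round(price, 2))
```

Because most adjustment items are order-wide rather than market-specific, the result is, as the report notes, an estimate rather than an exact farm price for any one market.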
Therefore, an order-wide value used for any of these items provides an estimate rather than the actual value for this item. Also, the values for two of the adjustment items—reimbursements to cooperatives for services performed for handlers and for transportation costs not covered by the order minimum price—were not readily available so they were estimated indirectly based on other reported data and, in some cases, on anecdotal information provided by industry members. However, despite these limitations, AMS believes that the estimated farm price is a good representation of the price that dairy farmers receive for raw milk used in fluid milk products. For the farm price for San Diego—a state-regulated market—we used mailbox price data collected by the California Department of Food and Agriculture. The mailbox price is the weighted average of the prices received by dairy farmers in the market for all of their raw milk sold and therefore is computed as the total net dollars received for milk divided by the total pounds of milk marketed. This price is likely to be lower than the price received for milk used for fluid purposes because the prices for milk used for manufacturing purposes are generally lower. However, it is the best measure we could obtain. To determine cooperative prices, we used AMS data on announced cooperative prices to represent prices that wholesale milk processors paid to cooperatives. Wholesale milk processors in federally regulated markets generally purchase milk from cooperatives and pay the federal minimum price for milk plus premiums that are negotiated between cooperatives and wholesale milk processors. The announced cooperative price is the Class I milk price announced by the major cooperative in each of the markets. 
This price does not apply to all Class I sales in federally regulated markets and is not necessarily the price actually received for all of the milk sold by the major cooperative; the announced cooperative prices have not been verified by USDA as actually having been paid by processors. For San Diego, we used the minimum fluid prices established by the state of California. Data on the premiums paid in excess of these minimums were not available for this market. (See app. V for a detailed discussion of over-order premiums.) To determine wholesale prices, we used the prices paid at Department of Defense Commissary Agency locations. The Defense Commissary Agency purchases milk under competitive and noncompetitive contracts with wholesalers. We used commissary prices as surrogates for privately established wholesale prices because (1) defense commissaries sell groceries at a standard 5 percent markup from cost to active and retired military personnel and (2) wholesale price data are considered proprietary by industry officials and were not available to us. The commissary network of stores ranks twelfth in the United States in sales volume for supermarket chains. We selected 39 different commissary locations near the 15 markets we reviewed, and the Defense Commissary Agency provided us with weekly prices paid by consumers at these locations for gallons of whole, 2 percent, 1 percent, and skim milk. We averaged these weekly prices to obtain monthly prices. We then adjusted these monthly prices to account for the 5 percent markup. Where we had multiple commissary locations for a market, we averaged the adjusted monthly prices to obtain a wholesale price for the market. We recognize that these locations may not provide an ideal match with other price data analyzed for a given location; for example, in some markets the available commissary locations were not in close proximity to the selected marketing areas.
Also, wholesale processors may provide these commissary locations with different levels of service than they do for retailers in these markets. In such cases, the prices paid by these commissaries for fluid milk may have been different than the prices that retailers in the selected markets paid to their wholesale suppliers. However, these were the best wholesale data that we could obtain. In those locations where commissaries sold more than one brand of milk, we used the price for the brand that had the highest sales volume for a particular period. For retail prices, we contracted with Information Resources, Inc., a private data collection and analysis company, to obtain average weekly retail prices for whole, 2 percent, 1 percent, and skim milk sold in gallon containers. These data represented a weighted average of prices at supermarkets with yearly sales exceeding $2 million for the markets included in our analysis. We then averaged these weekly prices to obtain monthly prices. We were unable to obtain data from some types of nonsupermarket retailers such as mass merchandisers, thus the retail pricing data that we present may not be representative of fluid milk prices at those locations. Figure 1 shows the locations of the 15 selected markets, the corresponding commissaries, and the federal milk marketing order areas. To determine (1) the portion of the retail price of a gallon of milk received by farmers, cooperatives, wholesale milk processors, and retailers; (2) how changes in retail and farm prices affect the farm-to-retail price spread; and (3) how price changes at any level of the marketing chain correlate to changes in prices at other levels, we limited our analysis to 2 percent milk, which currently represents the largest volume of reduced-fat milk sold nationwide. Therefore, our analysis of 2 percent prices may not necessarily reflect pricing patterns and trends for the other three kinds of milk. 
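The commissary-based wholesale price described above reduces to a three-step calculation: average weekly shelf prices to a monthly price, strip the standard 5 percent markup, and average across a market's commissary locations. The sketch below uses hypothetical prices.

```python
# Sketch of deriving a monthly wholesale price from commissary shelf prices.
# Prices below are hypothetical gallon prices for 2 percent milk.

def monthly_wholesale_price(weekly_prices_by_location, markup=0.05):
    """Weekly -> monthly average, remove markup, average across locations."""
    adjusted = []
    for weekly_prices in weekly_prices_by_location:
        monthly = sum(weekly_prices) / len(weekly_prices)  # weekly -> monthly
        adjusted.append(monthly / (1 + markup))            # strip 5% markup
    return sum(adjusted) / len(adjusted)                   # across locations

# Two commissary locations, four weekly shelf prices each.
loc_a = [2.94, 3.00, 2.97, 3.09]
loc_b = [3.15, 3.15, 3.06, 3.12]
print(round(monthly_wholesale_price([loc_a, loc_b]), 2))
```

Dividing by 1.05, rather than subtracting 5 percent of the shelf price, reverses a markup applied to cost, which is the stated commissary pricing rule.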
Appendix II includes graphs that show the relationships among the farm, cooperative, wholesale, and retail prices for a gallon of 2 percent milk for each of the 15 markets. Because farm and cooperative prices reflect a higher milkfat content than that in 2 percent milk, we adjusted these prices to reflect the value of removing milkfat and replacing it with skim milk. This adjustment allowed us to use farm and cooperative prices that were comparable to the wholesale and retail prices for our analysis. To determine the degree that farm and retail prices had changed and the effect these changes had on the farm-to-retail price spread from October 2000 through May 2004 for each of the 15 markets, we used a statistical procedure to estimate farm-level and retail prices at the beginning and end of the period. We relied on estimated rather than actual prices to reduce the influence of the starting and ending months and years selected for our analysis in markets in which milk prices varied from month to month. We used the differences between the estimated initial and final prices to represent the changes during the period. When our statistical procedure did not find a consistent association between prices and time, we treated the difference in the estimated initial and final prices as zero. We calculated the change in the farm-to-retail price spread as the estimated retail price difference minus the estimated farm price difference. To describe the relationship between price changes at any given level in the milk marketing chain and price changes at the other levels, we tested for correlations between price changes at the various levels for each of the 15 markets included in our analysis. 
Specifically, we calculated coefficients describing the degree of correlation between changes in farm prices and price changes at the cooperative, wholesale, and retail levels; price changes at the cooperative level and price changes at the wholesale and retail levels; and price changes at the wholesale and retail levels. In appendix II, we report those correlation coefficients and indicate which are statistically different from zero at the 95 percent confidence level. To update information provided in our June 2001 report on the retail prices for four kinds of milk, we analyzed the retail price data that we obtained from Information Resources, Inc. We array these data in appendix III for each of the selected 15 markets for October 2000 through May 2004. To update information provided in our June 2001 report on average monthly and annual farm and cooperative prices, and wholesale and retail prices for different kinds of milk, we analyzed data obtained from USDA, the California Department of Food and Agriculture, the Department of Defense Commissary Agency, and Information Resources, Inc. We report these data in appendix IV for each of the selected 15 markets for October 2000 through May 2004. To update our information on the major factors influencing milk prices and explore price transmission within the milk marketing chain, we conducted more than 50 interviews with national dairy experts working with the federal and state governments, cooperatives, processors, retailers, or industry groups, or in academia. We also reviewed a number of relevant studies and publications from USDA and other sources. Where possible, we obtained data on production costs, services provided by cooperatives, as well as inputs to processing and retailing. We also obtained information on concentration and market power at each level of the milk marketing chain. We present information on the factors influencing the price of milk in appendix V. 
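The two statistical steps described above, fitting a linear trend to estimate beginning- and end-of-period prices (so the spread change is the estimated retail change minus the estimated farm change) and correlating month-to-month price changes between marketing levels, can be sketched as follows. The monthly prices are hypothetical, and the exact estimation procedure used in the report may differ.

```python
# Sketch of (1) trend-fitted price changes and the farm-to-retail spread
# change, and (2) correlation of month-to-month price changes between two
# marketing levels. Monthly prices below are hypothetical.

def fitted_change(prices):
    """Fit an OLS trend over the period; return the fitted final value
    minus the fitted initial value (the estimated change)."""
    n = len(prices)
    t_bar = (n - 1) / 2
    p_bar = sum(prices) / n
    num = sum((t - t_bar) * (p - p_bar) for t, p in enumerate(prices))
    den = sum((t - t_bar) ** 2 for t in range(n))
    return (num / den) * (n - 1)  # slope times elapsed months

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    cov = sum((a - x_bar) * (b - y_bar) for a, b in zip(x, y))
    var_x = sum((a - x_bar) ** 2 for a in x)
    var_y = sum((b - y_bar) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

farm = [1.20, 1.22, 1.21, 1.25, 1.24, 1.28]    # $/gallon equivalent
retail = [2.80, 2.86, 2.84, 2.93, 2.90, 3.01]  # $/gallon

# Spread change: estimated retail change minus estimated farm change.
spread_change = fitted_change(retail) - fitted_change(farm)

# Month-to-month changes at each level, then their correlation.
changes_farm = [b - a for a, b in zip(farm, farm[1:])]
changes_retail = [b - a for a, b in zip(retail, retail[1:])]
print(round(spread_change, 3))
print(round(pearson_r(changes_farm, changes_retail), 2))
```

Using fitted endpoint values rather than raw first and last observations reduces the influence of the particular starting and ending months, which is the rationale the report gives for estimating rather than using actual prices. A significance test at the 95 percent level would additionally compare the correlation's t statistic against a critical value, omitted here for brevity.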
To compare the results and methodologies of various studies examining price transmission in fluid milk marketing from the farm to the retail level, we performed a technical review of 14 academic studies conducted over the past 10 years, focusing on model descriptions, assumptions, and results. We also spoke with the economists involved in these studies concerning their model results and the causes of differences in fluid milk price transmission across markets. The scope of these studies encompassed national, regional, and city-level models of fluid milk price transmission. Appendix VI provides a summary of our review of price transmission and the various price transmission studies. To identify and examine the effects of federal dairy program changes and alternative policy options, we contacted many of the same dairy experts previously mentioned. We also conducted an extensive literature search and reviewed more than 50 relevant studies and other publications we identified. We qualitatively analyzed the effects of federal dairy program changes and policy options we identified on six main policy considerations: farm income, milk production, federal costs, price volatility, economic efficiency, and consumer prices. We evaluated impacts on these policy considerations under both high- and low-price scenarios, over the short and long terms. We identified these policy considerations by reviewing previous GAO reports, relevant studies, and legislation, as well as through our conversations with dairy policy experts. Different stakeholders in the dairy policy arena may have alternative views on the relative importance of these policy considerations, as well as other considerations that we did not include, which could lead to differing perspectives on these options. In addition, the potential effects of policy options on these considerations could vary depending upon economic conditions and other policy decisions. 
We compared the dairy policy options we identified against a baseline scenario of the policies in place as of August 2004: FMMO regulations, a Milk Income Loss Contract (MILC) program that is scheduled to expire at the end of fiscal year 2005, a price support program at $9.90 per hundredweight, a Dairy Export Incentive Program (DEIP), trade restrictions, and milk regulatory policies in some states. We include a discussion of the effects of recent federal dairy program changes and alternative policy options in appendix VII. We conducted our review from September 2003 through October 2004 in accordance with generally accepted government auditing standards. We did not independently verify the data we received from various sources. However, we discussed with these sources the measures they take to ensure the accuracy of the data, and these measures seemed reasonable. Additionally, we consulted with the following dairy experts concerning the results of our analysis of price transmission within the milk marketing chain and the effects of changes in federal dairy programs and alternative policy options: Ed Jesse, Ph.D., Professor, Department of Agricultural and Applied Economics, University of Wisconsin–Madison; Daniel Lass, Ph.D., Professor, College of Natural Resources and the Environment, University of Massachusetts, Amherst; Richard Sexton, Ph.D., Professor, Department of Agricultural and Resource Economics, University of California–Davis; and Mark Stephenson, Ph.D., Senior Extension Associate, Department of Applied Economics and Management, Cornell University. This appendix reports on our analysis of prices at four marketing levels for a gallon of 2 percent milk in 15 selected markets for October 2000 through May 2004. 
Our analysis includes information on (1) the portion of the retail price of a gallon of milk received by farmers, cooperatives, wholesale milk processors, and retailers; (2) how changes in farm and retail milk prices affect the farm-to-retail milk price spread; and (3) how price changes at any level of the marketing chain correlate with changes in prices at other levels. We limited our analysis to gallons of 2 percent milk because sales of milk with reduced fat content account for nearly 52 percent of all sales of fluid milk and sales of 2 percent milk account for about 62 percent of these reduced-fat sales. The farm and cooperative prices used in our analysis and presented in this appendix have been adjusted to reflect 2 percent milkfat. This analysis may not reflect pricing patterns and trends for other kinds of milk. We present complete data for prices for all four types of milk—whole, 2 percent, 1 percent, and skim—in appendix III. Between October 2000 and May 2004, on average, our data suggest that farmers received 45.9 percent, cooperatives 6.1 percent, wholesale processors 35.6 percent, and retailers 12.5 percent of the retail price of a gallon of 2 percent milk in the 15 markets we reviewed. However, these percentages varied depending on the specific market. For example, the farmers’ portion ranged from 36.0 percent to 58.6 percent, while retailers in 12 markets received anywhere from 3.5 percent to 44.1 percent. In comparison, the average percentages we reported in 2001 for the period March 1998 through September 2000 were 43 percent, 5 percent, 33 percent, and 19 percent, respectively, for farmers, cooperatives, wholesale processors, and retailers. Table 1 summarizes the price breakdown for each market. From October 2000 through May 2004, the spread between farm and retail milk prices increased in 12 of the 15 markets. However, in some of the 12 markets, the spread between farm and retail milk prices increased dramatically and then moderated. 
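The portions reported above follow from simple margin arithmetic. The sketch below, with hypothetical prices, assumes each level's share is its selling price minus the price it paid the level below, expressed as a fraction of the retail price.

```python
def price_shares(farm, coop, wholesale, retail):
    """Portion of the retail price retained at each marketing level.
    Each level's margin is its selling price minus the previous level's
    price; the farm price itself is the farmers' margin."""
    margins = {
        "farmers": farm,
        "cooperatives": coop - farm,
        "processors": wholesale - coop,
        "retailers": retail - wholesale,
    }
    return {level: m / retail for level, m in margins.items()}

# Hypothetical prices for a gallon of 2 percent milk, in dollars.
shares = price_shares(farm=1.38, coop=1.56, wholesale=2.63, retail=3.00)
```

By construction the four shares sum to 100 percent of the retail price, which is why the market-level breakdowns in table 1 partition each retail price completely.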
In 9 of the 15 markets, retail prices showed a statistically significant increase. In 4 of the remaining markets, retail prices decreased over time; in the other 2 markets, retail prices showed no statistically significant change. At the same time, farm prices decreased in 12 of the 15 markets and increased in the remaining 3 markets over the 44-month period. However, these declining farm prices began to moderate, or, in most cases, began to rise during the latter months of our period of analysis. Table 2 provides these data for selected markets. We found that price changes generally correlated across levels in the marketing chain, with the strongest correlations occurring between adjacent levels. The values of correlation coefficients presented are estimates of the degree that price changes at one level in the milk marketing chain are associated with price changes at other levels. The higher the coefficient, the closer the association between changes in prices at different levels. Changes in cooperative prices, in general, were strongly correlated with changes in wholesale prices. However, changes in cooperative prices correlated less strongly with changes in retail prices. As discussed in appendix V, many factors other than farm or wholesale prices influence the retail price of fluid milk. Correlation coefficients between prices at different marketing levels varied across markets. For example, correlations between cooperative and wholesale prices in individual markets range from a high of 0.982 to a low of –0.031. We ranked the 15 markets by the extent of correlation between cooperative and wholesale prices. The correlation coefficient for the market that fell in the middle of this ranking was 0.788. In comparison, the market in the middle of a similar ranking for the time period analyzed in our 2001 report had a lower correlation coefficient between these prices, 0.716. 
Similarly, correlations between cooperative and retail prices in individual markets range from a high of 0.879 to a low of 0.214. We did a comparable ranking of the 15 markets by the extent of correlation between cooperative and retail prices. The correlation coefficient for the market that fell in the middle of this ranking was 0.588. In comparison, the market in the middle of a similar ranking for the time period presented in our 2001 report again had a lower correlation coefficient between these prices, 0.483. Tables 3 through 5 present data from our correlation analysis of price changes across marketing levels. Tables 6 through 10 show the average annual price for a gallon of 2 percent milk in the 15 markets for each of the four marketing levels during part of 2000, all of 2001, 2002, and 2003, and part of 2004. Figures 2 through 16 present average monthly data for the period October 2000 through May 2004 on farm, cooperative, wholesale, and retail prices for gallons of 2 percent milk in each of the 15 markets. Gaps in any of the lines shown in the figures indicate that data were unavailable for those months. This appendix updates information provided in our June 2001 report on the average retail prices for whole, 2 percent, 1 percent, and skim milk in 15 selected markets for October 2000 through May 2004. We found that retail pricing patterns varied significantly across markets. For example, in the Boston market from October 2000 through May 2004, the average price for 2 percent milk was generally the same as the average price for 1 percent milk; however, whole and skim milk prices were generally lower. On the other hand, for this period in the San Diego market, the average price of 2 percent milk was generally lower than the prices of whole and 1 percent milk, but higher than skim milk prices. Figures 17 through 31 provide information on the average retail price for the four kinds of milk in the 15 selected markets for October 2000 through May 2004. 
This appendix updates information provided in our June 2001 report on average monthly and annual farm and cooperative prices of raw milk and on the average monthly and annual wholesale and retail prices for a gallon of whole, 2 percent, 1 percent, and skim milk. Tables 11 through 25 provide these data for 15 selected markets over the period October 2000 through May 2004. The prices that farmers, cooperatives, wholesale processors, and retailers receive are determined by the interaction of many factors, such as forces affecting the supply of raw milk and manufactured and fluid milk products, consumer demand for manufactured and fluid milk products, federal and state dairy programs, the level of services provided by dairy cooperatives, market structure at various levels of the marketing chain, and other input costs of processing and retailing. Dairy farmers receive a price for raw milk, and each entity involved in the processing and marketing of fluid milk adds value to the product and retains a portion of the difference between the farm and retail prices. (This difference is known as the price spread.) This appendix examines the key factors that influence milk prices at the different levels of the marketing chain. Supply and demand forces, which in turn are influenced by federal and state dairy programs, determine farm prices for the raw milk that is sold for use in fluid milk and other dairy products. For example, in recent months, a variety of supply and demand forces have come together to significantly increase farm milk prices. On the supply side, the available supply of raw milk has been reduced by farmers cutting back production due to a previous period of low prices and by the closing of the Canadian border to replacement cows as a result of concerns about mad cow disease. 
On the demand side, consumer demand for nonfluid dairy products has increased as consumers resumed eating out following the attacks of September 11, 2001, and as dietary trends, such as the rising popularity of low-carbohydrate diets, have changed. While these forces have been driving recent price trends, federal and state dairy programs continue to influence milk prices. For example, major domestic programs such as federal milk marketing orders (FMMOs) and price supports help individual farmers who lack market power compared to other entities such as wholesale processors and retailers and help to ensure that farm prices do not fall below a minimum level. At the same time, U.S. import restrictions maintain domestic dairy prices at levels higher than average international market prices by limiting the quantities of milk products that are imported into the country. The quantity of raw milk that dairy farmers supply (production) is determined by the operating costs of producing that milk, such as feed and fuel, ownership costs for dairying equipment, land costs, and labor costs, as well as the price that farmers expect to receive for milk (as based on demand). Of these costs, the 2002 annual report on the costs of milk production in California, the largest milk producing state, showed that the highest cost is feed, at 44 percent of milk production costs. Other major costs include replacement cows (14 percent), other operating expenses (13 percent), and labor (11 percent). Milk production can also vary seasonally, according to weather, and is affected by farmers’ management practices. In February 2004, the U.S. Department of Agriculture (USDA) published a report on the characteristics and costs of milk production in the United States, which found that dairy farmers in the West had a cost advantage over farmers in other regions because western operations were appreciably larger. 
Farms with 500 or more milk cows had substantially lower total operating and ownership costs, averaging $11.60 per hundredweight of milk sold. This cost advantage arises because as herd size increases, associated increases in fixed costs, such as capital investments, are spread proportionally over a larger amount of production, thereby lowering the fixed costs per hundredweight of milk produced. USDA found that the average herd size of low-cost operations was more than three times the size of high-cost operations. Table 26 shows the average ownership and operating costs by region and herd size in 2000. The USDA study also reported that while milk is produced in all 50 states, the top 5 milk-producing states in 2000—California, Wisconsin, New York, Pennsylvania, and Minnesota—accounted for 53 percent of total milk produced. Growth in the importance of western regions as major sources of milk over the past 25 years is a significant feature of the United States dairy industry. For example, in 1975, midwestern states such as Iowa, Ohio, and Missouri were prominent among the top 10 dairy producing states. By 2000, production in Idaho, New Mexico, and Washington State surpassed production in these traditional dairy states. Milk production is consolidating so that farms with larger numbers of cows account for a growing share of production. In 1993, farms with 100 or more cows accounted for 14 percent of all U.S. dairy farms, over half of all cows, and over 55 percent of the milk produced. As of 2000, these numbers had grown to 20 percent of all dairy farms, 66 percent of all cows, and more than 70 percent of milk produced. Significantly, farms with 500 or more cows accounted for 3 percent of all dairy farms, but 35 percent of cows and 31 percent of milk produced in 2000. 
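The scale economy USDA describes (fixed costs spread over a larger amount of production) can be illustrated with hypothetical numbers; the cost figures below are assumptions for illustration, not USDA's data.

```python
def cost_per_cwt(fixed_costs, operating_cost_per_cwt, cows, cwt_per_cow):
    """Total cost per hundredweight of milk: the per-unit operating cost
    plus fixed costs spread over the herd's total production."""
    total_cwt = cows * cwt_per_cow
    return operating_cost_per_cwt + fixed_costs / total_cwt

# Same fixed costs and per-cwt operating cost; only herd size differs.
small_herd = cost_per_cwt(100_000, 8.00, cows=100, cwt_per_cow=180)
large_herd = cost_per_cwt(100_000, 8.00, cows=600, cwt_per_cow=180)
```

Spreading the same $100,000 of fixed costs over six times the production cuts the fixed component per hundredweight from about $5.56 to about $0.93, which is the mechanism behind the lower average costs of the 500-cow-plus operations.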
The production of milk has unique characteristics that distinguish it from other agricultural products and that cause relatively small changes in supply or demand to result in relatively large changes in price, particularly at the farm level. Farmers employ specialized assets or equipment to produce milk, and they have limited ability to use their farms, cows, and equipment for other purposes during periods of low prices. This limited flexibility creates a relatively inelastic supply of milk with respect to price. Demand for raw milk is mainly derived from consumer demand for fluid milk and manufactured milk products. Consumer demand for different fluid or manufactured milk products affects the price of raw milk used for other products because increased consumer demand for one particular product, causing more of that product to be produced, reduces the supply of raw milk available for other products, thus increasing the price that manufacturers of other products must pay to acquire raw milk. Over the long term, per capita demand for fluid milk products has been steadily declining, in large part because consumers have substituted carbonated soft drinks and other beverages for fluid milk. On the other hand, consumer demand for milk products has varied based on dietary considerations, such as the rising popularity of low-carbohydrate diets and changes in food consumption patterns, such as an increase in the amount of food consumed away from the home. Despite some evidence that consumer demand for fluid milk has become more price elastic, it remains relatively price inelastic compared to the demand for many other products. Given the relative inelasticities of milk supply and demand with respect to price, a number of sources indicated that recent changes in the amount of raw milk produced, when combined with changes in demand, have affected farm prices. After late 1999, farm prices began falling in response to the production surplus that existed at that time. 
In addition, a number of dairy experts indicated that following the terrorist attacks of September 11, 2001, prices began to fall even further as people stopped eating out as much, thereby reducing the demand for manufactured milk products such as cheese. This reduced consumption compounded the long-term decline in demand for fluid milk products. USDA reported that the combination of these supply and demand factors was responsible for the low farm prices that occurred during 2002 and 2003. More recently, however, supply and demand conditions have changed to produce record high farm prices in 2004. For example, in response to low prices during 2002 and 2003, some farmers began to cut back on production by reducing the sizes of their herds. However, with the relative price inelasticity of milk supplies, one academic expert noted that it can take 12 to 18 months to achieve a supply response to low prices. During this time, the identification of a cow infected with mad cow disease in Alberta, Canada, in May 2003, led to a temporary U.S. ban on imports of Canadian beef and cattle. This ban included live animals, some of which would have been used as replacement cows in U.S. dairy herds. While some beef imports have resumed, USDA has not lifted restrictions on imports of live cattle. Consequently, in June 2004, a report by USDA’s Economic Research Service noted that with relatively few expansions in late 2003 and the tight supplies of replacement cows, few dairy farmers could increase production in response to rising milk prices, a response that usually limits price increases. Another factor in reducing milk production has been the lower amount of bST—a hormone used in milk production—available to U.S. farmers. USDA reported that about 2 percent of the U.S. milk supply can be attributed to the use of bST. However, in January 2004, Monsanto, the maker of the hormone, announced that its customers would receive only half their normal supply. 
This reduced availability began March 1 and is expected to continue through the end of 2004. Additionally, drought conditions in recent years have led to higher feed costs and have negatively affected the quality of the feed. Finally, some sources identified the National Milk Producers Federation’s Cooperatives Working Together program as another factor leading to reduced raw milk supplies. Since the program began in July 2003, cooperatives have tried to reduce raw milk supplies by eliminating some dairy herds, decreasing production, and increasing exports. USDA estimates indicate that from January through June 2004, milk production in the top 20 dairy producing states averaged about 1 percent below production levels during the same period in 2003. While these factors combined to reduce the available supply of raw milk, dairy experts indicated that demand for manufactured milk products has recovered during 2004. In part, they cited a general economic recovery as contributing to this increased demand. They also indicated that people have returned to consuming more food away from home, as they did prior to September 11, 2001. This recovery in demand, coming at a time of reduced milk supplies, pushed farm prices to record high levels in April and May of 2004. For example, USDA’s market-based announced minimum price for milk to be used in manufactured products, such as cheese, was $19.66 and $20.58 per hundredweight in April and May of 2004, respectively. These prices compare with $9.41 and $9.71 per hundredweight for April and May of 2003. More recently, these high prices have started to moderate; the comparable announced minimum price for June 2004 was $17.68. However, USDA has estimated that average 2004 farm-level prices will be more than $3 per hundredweight higher than they were in 2003. A complex system of programs and policies influences the price of raw milk used to produce fluid milk and manufactured products. 
USDA’s milk marketing orders, as well as some states’ dairy programs, attempt to stabilize milk marketing conditions by establishing minimum raw milk prices and other marketing rules; thus, these programs assist individual farmers and dairy cooperatives, which lack the market power of other entities such as wholesale milk processors. USDA’s price support program attempts to ensure that farm prices do not fall below a minimum level, and, together with the Milk Income Loss Contract (MILC) program, provides a safety net for individual farmers during periods of low prices. These programs and other federal dairy policies operate in a broader context of trade restrictions, which can limit competition from imported dairy products and maintain U.S. prices above average international market prices. In 2003, the price of about 67 percent of the fluid grade milk marketed by dairy farmers in the United States was regulated under the FMMO program, created in 1933 and administered by USDA. Under this program, USDA uses national dairy market price information to set the minimum prices that must be paid by processors for raw fluid grade milk in specified marketing areas, or orders. Figure 32 shows a map of the current 10 FMMOs. Under the FMMO program, USDA has a classified pricing system for setting minimum prices for milk on a monthly basis, based upon the intended use of the milk, as shown in table 27. While there is some variation among the methods used for setting prices in different orders, in general, FMMO class prices are determined by formulas with milk component values derived from wholesale dairy product prices. For example, Class III formulas use weekly average butter, cheese, and dry whey prices to determine values on a monthly basis for butterfat, protein, and nonfat solids. The Class IV formulas use weekly average butter and nonfat dry milk prices to determine values on a monthly basis for butterfat and nonfat solids, respectively. 
The Class II price is determined by adding a fixed amount—a Class II differential of $0.70 per hundredweight—to the advanced Class IV skim milk value, while the Class I price is determined by adding a Class I differential to the higher of the advanced Class III or IV skim milk values. The Class I differentials vary by order. These differentials were, and to some extent remain, designed to represent the cost of transporting milk from areas with a surplus—traditionally the Upper Midwest region—to areas with a deficit, when necessary to meet the demands for fluid milk products. Because these differentials vary among orders, Class I prices differ from one marketing order to another. Dairy farmers selling raw milk to manufactured or fluid milk processors regulated by an FMMO receive an average, or “blend,” price that is the weighted average of the prices of Class I through IV milk. The weights are determined by the amount of milk sold in each class in the marketing order. The average price farmers receive, therefore, depends in part on the extent to which the total raw milk supply in a specific order is used to make fluid milk, as opposed to the three classes of manufactured products. Dairy farmers located in a milk marketing order sometimes ship their milk to another order to obtain a higher price. If the farmer meets the receiving milk marketing order’s shipping requirements, all of that farmer’s milk, not only the shipped milk, can qualify for that order’s blend price. However, farmers must consider whether the benefit of receiving a higher blend price outweighs the cost of transporting a sufficient amount of milk to qualify for the receiving order’s blend price. To generate the money paid to farmers, processors pay into, or draw from, a federal order “pool” based on the value of the use for which they are buying the raw milk. Fluid milk processors are required to participate in the federal order pool if they are covered by one of the federal milk marketing orders. 
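The classified pricing relationships described above can be sketched as follows. This simplifies the actual FMMO formulas, which price skim and butterfat components separately; the prices, differential, and utilization weights in the example are hypothetical.

```python
def class_ii_price(advanced_class_iv_skim):
    """Class II: the advanced Class IV skim milk value plus the fixed
    $0.70-per-hundredweight Class II differential."""
    return advanced_class_iv_skim + 0.70

def class_i_price(advanced_class_iii, advanced_class_iv, differential):
    """Class I: the higher of the advanced Class III or IV skim milk
    values plus the order's Class I differential."""
    return max(advanced_class_iii, advanced_class_iv) + differential

def blend_price(class_prices, pounds_by_class):
    """Uniform (blend) price: the average of the class prices weighted
    by the quantity of milk pooled in each class."""
    total = sum(pounds_by_class)
    return sum(p * q for p, q in zip(class_prices, pounds_by_class)) / total
```

Because the blend price is utilization-weighted, an order in which a large share of pooled milk goes to Class I fluid use pays farmers a higher blend price than an order dominated by manufacturing uses, all else equal.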
Processors of manufactured products are not required to participate in the pool. Under the classified pricing system, raw milk used in fluid products is valued more highly. Therefore, the fluid milk processors typically pay money to the pool, while those producing other products typically draw money from the pool. This draw represents a benefit to processors of manufactured milk products for serving as a reserve supply plant for that order’s Class I market. In part, a processor’s payment or draw depends on the producer price differential, a measure of the difference between the value of that processor’s use of raw milk as determined by the market and the value if all of that processor’s raw milk were used in Class III products. In times of significant price volatility, it is possible for the producer price differential to be negative, so that some processors of manufactured products would have to pay into the pool. In such cases, some of these processors choose not to participate in the pool, or de-pool their milk, because they would be required to pay into the pool instead of receiving a draw. Some states, such as California, Maine, Nevada, New York, Pennsylvania, and Virginia, have established their own minimum farm-level milk pricing programs that cover all or portions of their states. These states have established commissions or boards to perform functions similar to those of USDA. For example, Virginia’s milk commission, created in 1934, establishes monthly farm prices to ensure dairy farmers an adequate return on their investment and to preserve market stability. Similarly, Nevada’s dairy commission, established in 1955, sets minimum prices for raw milk sold to processing facilities located within that state. The dairy price support program, established in 1949, supports farm prices by providing a standing offer from USDA’s Commodity Credit Corporation (CCC) to purchase butter, cheese, and nonfat dry milk at specified prices. 
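The de-pooling incentive can be reduced to a deliberately simplified sketch. The real producer price differential nets several pool accounting terms, so treat the formula below as illustrative only.

```python
def producer_price_differential(blend_price, class_iii_price):
    """Simplified per-hundredweight PPD: the pool's uniform (blend)
    value of milk minus its value priced entirely at Class III."""
    return blend_price - class_iii_price

def depool_incentive(blend_price, class_iii_price):
    """A manufacturing plant's draw per hundredweight is roughly the
    PPD; when the PPD is negative the plant would have to pay into the
    pool, creating the incentive to de-pool."""
    return producer_price_differential(blend_price, class_iii_price) < 0
```

In a volatile month where the Class III price spikes above the blend price, the PPD turns negative and manufacturing processors that can legally leave the pool have a financial reason to do so.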
The prices offered for these dairy products are intended to provide sufficient revenue so that processors of these products can pay farmers, on average, a legislatively set support price for raw milk. Since 1999, the support price has been set at $9.90 per hundredweight. By offering to purchase as much butter, cheese, and nonfat dry milk as processors offer to sell at specified prices, the price support program sets a floor on the price of these commodities and, thus, indirectly on the raw milk used to produce them. Because processors are not required to sell to the CCC and milk processing costs vary, farmers may receive prices that are either above or below the support price. However, manufactured product prices generally will not fall below the floor for very long. Also, because the price for raw milk used for fluid purposes under the FMMO program is based in part on the price of raw milk used for manufacturing purposes, the price support program indirectly influences the price that farmers receive for raw milk used for fluid purposes as well. The Secretary of Agriculture can adjust—or tilt—the related CCC purchase prices for butter and nonfat dry milk and still achieve the target support price for raw milk used in manufactured products. These products are considered joint products manufactured from the same 100 pounds of milk. Therefore, by increasing the support price of butter while lowering the support price of nonfat dry milk, or vice versa, USDA is able to adjust the CCC purchase prices, while maintaining the overall support price. The ability to adjust the relative purchase prices of these products is important for correcting imbalances in the CCC’s purchases of milkfat (butter) and nonfat solids (nonfat dry milk). Failure to correct for such imbalances can create an incentive for farmers to expand production and may alter the flow of milk to alternative uses. 
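The tilt can be sketched as solving one equation: the revenue from the butter and nonfat dry milk made from the same 100 pounds of milk, net of processing costs, should equal the support price. The yield factors and make allowance below are illustrative assumptions, not USDA's actual conversion factors.

```python
def implied_support_price(butter_price, nfdm_price,
                          butter_yield=4.5, nfdm_yield=8.1,
                          make_allowance=1.40):
    """Revenue per hundredweight of milk implied by CCC purchase prices
    for butter and nonfat dry milk (joint products of the same 100
    pounds of milk), net of an assumed processing make allowance."""
    return (butter_yield * butter_price
            + nfdm_yield * nfdm_price
            - make_allowance)

def tilt_nfdm_price(target, butter_price, butter_yield=4.5,
                    nfdm_yield=8.1, make_allowance=1.40):
    """Given a new CCC butter purchase price, the nonfat dry milk
    purchase price that keeps the implied support price at `target`."""
    return (target + make_allowance
            - butter_yield * butter_price) / nfdm_yield
```

Raising the butter purchase price while lowering the nonfat dry milk price by the offsetting amount leaves the $9.90 support price unchanged, which is exactly the adjustment the tilt authority permits.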
The 1990 Farm Bill authorized the Secretary of Agriculture to adjust the tilt twice annually to limit the accumulation of significant government stocks of certain commodities. As market prices rise, the support program allows the CCC to release its commodity stocks if the market price for a particular commodity exceeds that commodity’s purchase price. In this respect, the program helps to decrease volatility in milk prices with regard to high-price periods as well as low-price periods. In 2002, the MILC program began to provide countercyclical payments directly to farmers during periods of low prices. The MILC program provides support to farmers when the price of Class I milk in Boston falls below $16.94. MILC payments are equal to 45 percent of the difference between $16.94 and the lower Boston Class I price. Farmers in all regions of the country have access to payments under this program, but only 2.4 million pounds of milk per farm are eligible for payments during each federal fiscal year. Farmers may choose the month that they begin accepting their payments. This discretion may enable farmers producing more than 2.4 million pounds of milk per year to target their MILC payments during the lowest-price periods of the year to maximize the MILC payments they receive before reaching the cap on eligible production. According to some government and academic experts, trade restrictions have the greatest effect of any federal policy on farm milk prices. Trade restrictions maintain U.S. prices above average international market prices by restricting the amount of imports, particularly of manufactured dairy products, that enter the country. In other countries, costs of production may be lower, or exports may be more heavily subsidized, possibly allowing these countries to export products to the United States at competitive prices. Thus, without trade restrictions, manufactured products from these other countries might enter the United States in greater quantities. 
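The MILC payment rule stated above can be expressed directly. The per-month accounting and the helper name are illustrative, but the trigger price, payment rate, and production cap come from the program description.

```python
MILC_TRIGGER = 16.94       # Boston Class I trigger price, $/cwt
MILC_RATE = 0.45           # share of the shortfall paid out
MILC_CAP_LBS = 2_400_000   # eligible pounds per farm per fiscal year

def milc_payment(boston_class_i, pounds, pounds_already_paid=0):
    """Payment on one month's production: 45 percent of the shortfall
    below $16.94 per hundredweight, on pounds up to the annual cap."""
    shortfall = MILC_TRIGGER - boston_class_i
    if shortfall <= 0:
        return 0.0
    eligible = max(0, min(pounds, MILC_CAP_LBS - pounds_already_paid))
    return MILC_RATE * shortfall * eligible / 100  # prices are per cwt
```

Because the cap binds before year's end for larger farms, a farm producing more than 2.4 million pounds annually maximizes its payments by electing to start in the months with the deepest price shortfalls.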
This increased supply of manufactured products would be expected to decrease the demand for domestic raw milk and lead to lower farm prices. Without these trade restrictions, other dairy programs, such as the price support program, might not be feasible because lower manufactured product prices resulting from international competition could trigger an increase in purchases by the CCC, which could render the program prohibitively expensive. The primary U.S. trade restriction is the tariff-rate quota, the main form of import restriction allowed under current international agreements. USDA’s Foreign Agricultural Service uses licensing to administer a tariff-rate quota system for most dairy products. Under tariff-rate quotas, a low tariff rate applies to imports up to a specified quantity, and a higher tariff rate applies to any imports exceeding that amount. These higher over-quota tariff rates generally limit trade to within quota levels. Quota rates and quantities vary by product. Another aspect of U.S. trade policy that affects farm prices is the Dairy Export Incentive Program (DEIP), an initiative that aims to help exporters of certain U.S. dairy products (specifically, nonfat dry milk, butterfat, and various cheeses) meet prevailing world prices for targeted dairy products and destinations. A major objective of the program is to develop export markets for dairy products where U.S. products are not currently competitive. Under the program, the Foreign Agricultural Service pays cash to exporters as bonuses, allowing them to buy dairy products at U.S. prices and then sell them abroad at lower international prices. DEIP could affect farm prices primarily by increasing demand for dairy products through export subsidies. 
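A tariff-rate quota reduces to two-bracket arithmetic. The rates and quota quantity in the example are hypothetical, since actual rates and quantities vary by product.

```python
def trq_duty(quantity, quota, in_quota_rate, over_quota_rate):
    """Total duty on imports under a tariff-rate quota: the low rate
    applies up to the quota quantity, the higher rate beyond it."""
    in_quota = min(quantity, quota)
    over_quota = max(0, quantity - quota)
    return in_quota * in_quota_rate + over_quota * over_quota_rate
```

Because the over-quota rate is typically set far above the in-quota rate, importing beyond the quota is rarely economical, which is why over-quota tariffs generally limit trade to within quota levels.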
According to a 2002 report by the Congressional Research Service, past studies have indicated that DEIP subsidies have at times enhanced farm prices; for example, these studies indicated that DEIP subsidies enhanced farm prices by $0.30 to $0.50 per hundredweight in 1992. Additionally, in May 2003, the National Milk Producers Federation testified that the subsidies for 5,000 metric tons of butterfat provided by DEIP in March 2003 increased wholesale butter prices by an estimated $0.06 per pound. This price increase boosted farm income by between $20 million and $30 million. DEIP can also help lower government costs by reducing the amount of product purchased under the price support program, to the extent that savings in the price support program exceed the costs of subsidies. Given recent market conditions, DEIP has primarily been used to encourage exports of nonfat dry milk and cheese, and for the most part, from 1998 through 2002 the program supported exports of these products to the maximum extent allowable under international trade commitment limits. Milk reaches the consumer through a variety of pathways; however, most milk produced by dairy farmers in the United States is marketed through dairy cooperatives. Dairy cooperatives can sell, or arrange the sale of, raw milk purchased from farmers to wholesale milk processors, or they can process it into fluid and manufactured milk products and distribute them to retail outlets. As part of sales to wholesale milk processors, cooperatives negotiate with the processors for over-order premiums, which, in areas with federal or state marketing orders or regulations, represent the difference between the prices charged to the wholesalers and the regulated minimum prices. 
The difference between the price at which cooperatives sell raw milk to wholesale fluid milk processors and the farm price for fluid milk is influenced by the costs of services that cooperatives provide to their members and to their buyers, the relative market power of cooperatives and fluid milk processors, and the effects of collective action taken by dairy cooperatives in marketing their members’ milk. Over-order premiums, in part, compensate cooperatives for the services they provide to their members and on behalf of their members to wholesalers. Some distinctive features of cooperatives include member ownership and control, at-cost services for members, and distribution of income to members on the basis of patronage. Farmers join dairy cooperatives to guarantee a market outlet for their milk; to gain bargaining power to obtain the best price in the market; to have their milk marketed efficiently, with the assurance that it will be accurately weighed and tested; and to be effectively represented in legislative, regulatory, and public relations matters. Most dairy cooperatives require farmers to sign a 1-year membership agreement that commits them to market all their milk through the cooperative. Cooperatives operate like corporate businesses to perform services for their members. For example, Dairy Farmers of America, the largest dairy cooperative in the country, serves almost 23,000 members, who produce about 21 percent of the milk in the United States, while the cooperative markets about 33 percent. 
According to the cooperative’s Web site, the cooperative provides a variety of services to its members, including the following:
- insurance: medical programs, dental/vision plans, and life insurance available to members via a milk check deduction;
- direct deposit: direct deposit of members’ milk checks, ensuring that farmers’ pay checks will be available within 24 hours of the pay date;
- forward contracting: a marketing service that allows members to protect themselves against price volatility by locking in the future sale price of their milk several months before it is produced; and
- financing services: loan packages for cattle, equipment, and operating expenses.
In some cases, dairy farmers pay on a per-use basis for the services they receive. However, cooperatives may also try to offset the costs of their services through negotiations with wholesale milk processors for over-order premiums. Over-order premiums also compensate dairy cooperatives for a number of services that they provide to fluid milk processors on behalf of their members. Generally, these services include (1) transporting milk from different milk-producing areas, (2) scheduling—or balancing—milk deliveries to coincide with demand, and (3) standardizing the component content of milk deliveries. Different cooperatives also provide additional services for fluid milk processors. For example, one cooperative we contacted noted the rigorous quality control procedures it performs on its members’ milk. According to the cooperative official, these efforts allow the cooperative to market its members’ milk as better quality, potentially helping the cooperative negotiate higher over-order premiums. Officials from another cooperative said that a major component of the costs of services provided by cooperatives is balancing the delivery of raw milk supplies to processors’ plants. At certain times processors’ plants have surging demand for raw milk, while at other times the plants are empty. 
In addition, supply disruptions, such as labor strikes, create significant balancing problems. In this environment, few, if any, fluid milk processing firms have the capital (plants to make cheese and other products during periods of low fluid demand) to assume the risks inherent in balancing, and so in most cases this responsibility is met by the cooperatives. Historically, farmers produced and distributed fluid milk as well as some manufactured products. Milk is a highly perishable product that is bulky to transport, which traditionally left farmers dependent on local markets for the sale of their milk. The role of dairy cooperatives developed as farmers faced greater demand for fluid milk and dairy products and the number of farmers who processed and distributed their own milk products declined. Instead, specialized firms began taking on the role of processing fluid and manufactured milk products and marketing them for sale to consumers. However, in this environment, there were many more farmers than processors, so processors had the opportunity to bargain with different farmers to obtain a lower price for their raw milk supplies. In this situation, farmers were at a disadvantage. Consequently, cooperatives took on the role of collecting raw milk from farmers and distributing it to processors. By doing so, cooperatives helped to balance the bargaining power between farmers and processors. The 1922 Capper-Volstead Act provides limited antitrust immunity to cooperatives that meet certain requirements and gives farmers an opportunity to work together in setting raw milk prices, including bargaining for market premiums. Thus, over-order premiums, in part, reflect market power acquired by cooperatives relative to processors. Since our June 2001 report, the concentration of dairy cooperatives has increased, with the potential effect of enhancing their market power in negotiations with processors. 
In 2001, we reported that 83 percent of the milk produced in the United States was marketed by cooperatives. However, USDA recently reported that in 2002, the share of milk sold to processors and other distributors by cooperatives reached 86 percent of all the milk produced in the United States. Cooperatives attained this market share despite a 13 percent decrease in the number of dairy cooperatives between 1997 and 2002. During this time the amount of member-produced milk marketed by the eight largest dairy cooperatives grew from 52 to 63 percent of the total volume of milk marketed by cooperatives. This translated into an increase from 42 to 52 percent of the total volume of milk produced in the United States. A number of dairy experts cited the need to offset gains in market power made by increasingly concentrated firms at the wholesale processor and retail levels of the milk marketing chain as a key factor in the continued concentration of cooperatives. The greater the percentage of the milk supply that a cooperative markets, the greater its ability might be to obtain higher over-order premiums in negotiations with wholesale processors. On the other hand, one academic source questioned the extent to which increased concentration is enhancing the market power of dairy cooperatives, particularly over the long term. He noted that although Dairy Farmers of America has been consolidating its control over milk supplies in some regions, farmers and cooperatives have been able to command larger over-order premiums in the East and Upper Midwest regions—where the cooperative’s presence is not as strong—than in the West, where milk supplies have been increasing. Other sources noted that competition still exists among cooperatives and independent dairy farmers and that this competition prevents even larger cooperatives from obtaining excessively high over-order premiums. 
Another factor in determining the over-order premiums received by cooperatives for raw milk is collective action taken by cooperatives. Cooperatives work together to set prices through coordination permitted under the protection afforded by the Capper-Volstead Act. For example, officials with Dairy Farmers of America said that a major factor in the price of milk at the cooperative level is the action of marketing agencies composed of cooperatives. Marketing agencies behave like cartels and announce prices for their cooperative members. In most cases these agencies set prices for raw milk used in fluid milk and other products. Most of the prices announced by the marketing agencies represent the minimum federal order prices, plus additional charges representing the costs of services provided by the cooperatives to the processors. Representatives of Dairy Farmers of America said that there are marketing agency agreements in most major markets except the Pacific Northwest and that, for the most part, cooperatives participate in marketing agencies. They further stated that the use of marketing agencies has become more common in recent years. The marketing agencies may also market milk for independent farmers. The officials noted that while cooperatives and independent farmers can choose not to participate in the marketing agencies, experience has shown that as more producers choose to market milk outside the system, the marketing agencies face significant competition and prices fall. Eventually, if the prices get low enough, the producers have an incentive to work together again. In an alternative type of collective action, three cooperatives—the Dairylea Cooperative, Dairy Farmers of America, and St. Albans Cooperative Creamery—established a milk marketing organization called Dairy Marketing Services. 
According to a Dairy Marketing Services official, the organization was formed because the cooperatives realized that they needed more market power to compete with increasingly concentrated processors and retailers. Cooperatives such as Dairylea, or individual farmers, establish contracts with Dairy Marketing Services to market their milk. Dairy Marketing Services markets about 16 billion pounds of milk annually for farmers in the Northeast area that extends from Maine to Maryland, and includes a small area in Ohio. The official estimated that this quantity represents about 45 percent of the milk marketed in the Northeast and is produced by some 10,000 to 11,000 farmers. The Dairy Marketing Services official stated that the organization has been able to carve a niche for itself in the milk marketing chain by convincing processors that it is more efficient for them to have Dairy Marketing Services arrange to have raw milk transported from the farm to the plant and allow the processors to focus on processing milk. As a result, Dairy Marketing Services has been able to obtain contracts from a number of major processors in the Northeast, including Dean Foods, Crowley Foods, and Kraft, to ensure an adequate supply of milk for their plants. Additionally, Dairy Marketing Services provides specialized services for farmers such as health insurance and workmen’s compensation, a livestock purchasing service, and risk management operations for farmers engaged in forward contracting. Although we were unable to confirm the effects that Dairy Marketing Services’ efforts have had, the official stated that the organization has provided higher over-order premiums and lower transportation charges for its participating cooperatives and farmers than would have otherwise been the case. 
The difference between the price at which wholesale fluid milk processors sell fluid milk products to retail firms and the price they pay for raw milk is influenced by changes in input costs, such as fuel, labor, packaging, transportation, and capital expenses. These costs, in turn, are affected by recent innovations that have increased efficiency and lowered costs of fluid milk processing, as well as by the level of service that fluid milk processors provide to retailers. For example, in addition to shipping the products to retailers, some wholesalers provide in-store services, including unloading the milk on the store dock, restocking the dairy case, and removing outdated or leaking containers. The difference between what fluid milk processors pay for raw milk and the wholesale price they charge their retail customers is also influenced by continued structural change in the fluid milk processing industry, including a steady increase in firm consolidation and the market share of some firms. While there have been many reasons for these trends, the effects on the market and fluid milk prices at this level are unclear. Several fluid milk processors stated that the cost of raw milk, and, in particular, the federal order minimum price, was the single most important influence on wholesale milk prices. We estimate that the price of raw milk accounts for about 60 to 70 percent of the wholesale price of 2 percent milk. As such, the wholesale price that processors charge would be directly linked to the Class I federal order price on a year-to-year basis, with adjustments for over-order premiums and other inputs. However, a variety of other input costs can also affect the price at which fluid milk processors sell fluid milk products to retailers. Some sources indicated that costs of inputs other than raw milk have been increasing in recent years. As one executive of a milk-processing firm explained, the primary input costs, apart from raw milk, include labor and energy. 
A 2002 study examining changes in fluid milk processing plants located in the state of Maine found that total processing costs rose at an annual rate of about 2.4 percent (adjusted for inflation) from 1993 through 2000. The study indicated that economywide wage inflation plus a dramatic increase in health care premiums paid by employers drove labor costs above the costs of other inputs, such as land and building expenses and plant supplies. Equipment costs increased 10.9 percent per year with investments in plant automation and greater reliance on information technologies. Also, fuel costs increased by 4.6 percent per year, reflecting economywide trends in energy costs. Moreover, while the cost of operating capital constituted only 1.0 percent of processing costs, it increased substantially during this period due to an increase in the short-term lending rate. Table 28 displays the percentage change in fluid milk processing costs in Maine reported in this study for each cost category from 1993 through 2000. Changes in the level of service that some fluid milk processors provide their retail customers have increased the efficiency of the dairy supply chain, thus potentially influencing wholesale milk prices. For example, some fluid milk processors have begun to undertake supply-chain management for their retail customers. According to a number of retailers and processors, supply-chain management commonly involves shared computer systems, which, in the vertical marketing chain, allow processors to more efficiently manage the processing and transporting of fluid milk products. One processor indicated that it uses an electronic data transfer system to manage supplies for certain retailers. In particular, this system allows the processor to contract for a certain number of loads of milk per day. 
Further, according to a recent presentation given by company officials, Dean Foods’ national, refrigerated, direct-store-delivery system allows it to deliver fluid milk to its customers with increased route network efficiency and without customer disruption. Dean Foods operates 129 fluid processing plants in 39 states, servicing more than 150,000 customers coast to coast via its direct-store-delivery system of more than 6,000 routes. By allowing fluid milk products to move more efficiently from the processor to the retailer, these kinds of services help to ensure quality and reduce waste and costs along the supply chain. To the extent that processors benefit from the reduced costs of supplying retailers with fluid milk products, the provision of these services could have a downward effect on wholesale prices. On the other hand, these services could provide value to retailers for which they might be willing to pay a higher price when acquiring fluid milk products. Therefore, the net effect on wholesale prices of the level of service that processors provide to retailers is uncertain. Additionally, innovations in technology can affect prices at the wholesale milk processing level. For example, changes in processing technology, such as more automated equipment, can improve the efficiency of processing operations and, to the extent that processing firms are successful at reducing their costs through innovative practices, they may be able to reduce their prices. A representative of one fluid milk processor explained that improvements in processing and packaging technology have doubled and tripled output. Also, a representative of one firm that processes milk for sale in its own retail stores stated that the firm has dedicated a large contingent of people toward the goal of reducing milk losses at its processing plants and has been successful at cutting these losses in half. 
He noted that a driving force behind these efforts is to try to alleviate increases in other input costs, such as labor. With innovations in technology, the fluid milk processing industry has also invested in innovative new products. By developing products with extended shelf lives, processors can potentially save shipping costs, leading to lower wholesale prices. For instance, the dairy processing industry’s collective investment in extended shelf life, ultra high temperature, and aseptic packaging technology allows fluid milk products to reach the end user more efficiently while maintaining quality. The benefits to processors and their retailers include the ability to ship these products longer distances because they are able to endure more stress than traditionally processed milk. Since the 1960s, there has been long-term structural change in the wholesale fluid milk processing industry as a continuously declining number of firms have processed an increasing average volume of milk. Structural change in the processing industry has been driven by economies of size, technological changes, high concentration at other levels of the milk marketing chain, and rapid consolidation into fewer and fewer firms. While structural change can lead to lower prices due to cost reduction from greater efficiency in production, it can also lead to higher market concentration, particularly in individual markets. In general, high and increasing market concentration can result in greater market power, potentially allowing firms to increase prices above competitive levels. Accordingly, the net impact of increased market concentration on wholesale prices can be either positive or negative. In recent years, through aggressive acquisitions of independent dairy processing plants, a handful of fluid milk processing firms have changed the market structure of the dairy industry at the wholesale level. 
These companies have generally pursued the business strategy of acquiring strong regional dairy processing plants so that they can strengthen their presence in existing markets, while expanding their geographic coverage to a national level. The acquisition and consolidation trend at the wholesale level has affected market structure by leading to higher market concentration for fluid milk processors in some markets. One common measure of market concentration is the four-firm concentration ratio–the percentage of sales by the top four firms in a market. According to the 1997 Census of Manufacturers, the market share for the top four fluid milk processors in the nation was about 21 percent. However, the market share for top fluid milk processors at the local level was significantly higher. For example, in our June 2001 report, we found that in Boston, Massachusetts, the market share of the top four fluid milk processors increased from 66 percent in December 1997 to 88 percent in December 1999. Since our last report in 2001 on fluid milk prices, this trend has continued, and there have been several significant mergers, acquisitions, and joint ventures that have further consolidated the industry. For example, in late 2001, Dean Foods merged with Suiza, Inc., bringing together the number one and two firms in terms of market share in the processing industry. Then, in July 2002, the Land O’ Lakes dairy cooperative sold its fluid milk operations to Dean Foods. We estimate that these acquisitions and mergers gave Dean Foods about a 27 percent market share nationally in fluid milk products in 2002. Others have estimated that Dean Foods’ market share is about 35 percent nationally and approximately 70 percent in New England. As of 2002, we estimate that the market share of the top four fluid milk processors has increased to approximately 47 percent. 
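The four-firm concentration ratio used throughout this section is straightforward to compute from firm-level market shares. The shares in the sketch below are hypothetical; the national and Boston figures cited in the text were derived the same way from actual sales data:

```python
# Illustrative four-firm concentration ratio (CR4) calculation.
# The market shares below are hypothetical.

def cr4(market_shares):
    """CR4 = combined market share of the four largest firms, in percent."""
    return sum(sorted(market_shares, reverse=True)[:4])

# Hypothetical local market: six processors' shares of fluid milk sales (%)
shares = [35, 25, 18, 10, 7, 5]
print(cr4(shares))  # prints 88, the combined share of the top four firms
```

Note that the same national CR4 can mask much higher concentration in individual markets, which is why the text reports both national and metropolitan-area figures.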
As seen in figure 33, with increased concentration, the number of fluid milk processing plants has gone from 1,066 plants producing an average of 50.1 million pounds of milk per year in 1980 to 385 plants producing an average of 154.2 million pounds per year in 2002. Such increased concentration of fluid milk processing firms, particularly in individual markets, can increase the price at which fluid milk is sold to retailers because market concentration can provide these firms greater market power. Thus, some analysts viewed the trend toward greater concentration in the wholesale market as a means toward greater dominance and market power in selling fluid milk to retailers. Further, they noted that increased market power can also benefit processors in their negotiations for raw milk supplies from cooperatives and independent farmers. For example, the exercise of market power could allow processors to negotiate more favorable supply contracts, which could drive down input prices and increase the spread between wholesale and retail prices. Other economists who study the causes of market concentration described a phenomenon called the “replication hypothesis”—as concentration grows at one marketing level, it is likely to be replicated at other marketing levels. For instance, high market concentration at the retail level can lead to greater concentration at the fluid milk processor level, and higher concentration in fluid milk processing can, in turn, lead to higher concentration at the cooperative level. One fluid milk processor that we spoke with stated that retail concentration has resulted in retailers preferring only one supplier, requiring a processor to have multiple plants in order to supply a retailer who serves many markets. On the other hand, increasing concentration can lead to cost savings through efficiency gains, which may be passed on to retailers in the form of lower wholesale prices. 
For example, some economists viewed consolidation of processing firms as a result of increasing economies of scale and excess plant capacity. That is, processors’ per-gallon costs for items like packaging decline as they increase the amount of milk they process. One dairy analyst reported that as plant volume increased from 90,000 pounds per month to 30 million pounds per month, processing costs decreased from about $1 per gallon to about $0.50 per gallon. In the end, the impact of market concentration on wholesale prices, either positive or negative, depends on whether market power or efficiency dominates. Three key factors that influence fluid milk prices at the retail level are retailing costs, consumer demand, and market structure. Recent increases in input costs such as labor and energy have been substantial. In an effort to hold down their retailing costs and remain competitive, some retailers are implementing supply-chain management and other innovations that increase efficiency. At the same time, consumers are purchasing a declining amount of traditional fluid milk and are increasing consumption of other beverages, such as soft drinks and bottled water. Market structure changes include continued consolidation in recent years through mergers and acquisitions among large food retailers at the national level and in many local markets, along with an increasing number of outlets that are competing with traditional supermarkets to sell fluid milk. Representatives of the Food Marketing Institute stated that after the wholesale costs of the milk, the primary costs that influence the retail price of fluid milk are related to labor and energy. They added that all of these costs have been rising recently. According to the Bureau of Labor Statistics, the average hourly earnings for nonsupervisory food store employees went from $7.56 per hour in 1992 to $10.20 per hour in 2002. 
These payroll costs are the largest percentage of retail operating costs, followed by the second-largest single category, employee benefits such as health insurance. Table 29 shows the breakdown of supermarket operating costs in 2003 as a percentage of total sales and gross margin. Table 30 displays the sales and expense growth as a percentage of sales for the supermarket industry during the last decade, from 1993 through 2003. During this time, total employment costs increased by 12.0 percent, including a 10.7 percent increase in payroll expenses; the cost of supplies also increased by 10.0 percent. A 2003 study that was more specific to retailer costs related to fluid milk sales noted that these costs include both direct and indirect costs. Direct costs are those for electricity, labor, store equipment, and fluid milk. Indirect costs include corporate, division, and store overhead. While the study found variation in the indirect costs, such as store overhead, there was less variation across retail stores in direct costs. Increasing per unit costs have led some retailers to try to improve efficiency and reduce total costs. As mentioned in the discussion of factors influencing fluid milk prices at the wholesale level, some retailers are reducing costs by working with their wholesale suppliers to achieve supply-chain management. For example, officials with Wal-Mart noted that the firm has tried to reduce its costs and maintain its everyday-low-pricing strategy for consumers through
- a computerized system called Collaborative Planning Forecast Replenishment that allows processors to track stock levels at Wal-Mart locations and schedule deliveries to specific locations;
- direct-store-delivery of the majority of its fluid milk products to increase the efficiency of its supply chain; and
- changes to its shipping practices, such as not putting labels on its cases, that have allowed Wal-Mart to save time and money. 
Another retailer indicated that it is trying to improve the way it stocks its shelves to cut costs. A representative said that the retailer has invested in retrofitting its stores to use a device called a “bossy cart,” which allows store employees to move 80 gallons of milk into the milk case in one shelf-stocking. Consumer demand, driven by factors such as taste, convenience, and health, influences the retail price of milk. Moreover, since fluid milk represents approximately 3 percent of total supermarket sales, it is an important category for store performance, and retailers have an incentive to price their products competitively. However, over time, fluid milk consumption has gradually declined, with per capita demand for milk trending downward at a rate of 2 to 3 percent per year. This downward trend stems from several key factors, including increasing consumption of substitute drinks such as carbonated soft drinks, juice drinks, coffee, teas, soy products, and bottled water. Also, there has been an increasing trend toward more eating outside the home, reducing the demand for fluid milk sold in food stores. Within the fluid milk category, whole milk has gone from 92 percent of fluid milk consumed in 1960 to about 35 percent in 2001. Private labels represent the largest portion of the market, about 60 percent. More recently, however, there has been growth in the development of innovative value-added dairy products. These innovations include dairy products for medicine/health (such as low-carbohydrate products), multipack drinks such as single-serve and vending drinks, and organic dairy products. In response to trends in consumer demand for fluid milk products, retailers from high-end supermarkets to mass merchandisers use diverse pricing strategies, and no single approach predominates. However, according to some retail executives, one method that retailers are currently using is category management. 
Using this strategy, a retailer would not focus on how much 1 percent, 2 percent, or whole milk it sells, but rather on how much is sold from the entire dairy case. Accordingly, category managers would view product assortment strategically, evaluating the performance of entire groups of related dairy products. The goal is to maximize the sales for the entire category, which requires continual adjustment to match consumer demand. To accomplish this goal, managers may feed scanner data and other market information into computer models that make product assortment decisions. A related issue influencing the retail price of fluid milk is the price elasticity of demand, that is, the sensitivity of fluid milk consumption to changes in price. For many years, empirical studies indicated that the demand for milk was very price inelastic, meaning that there was little change in demand in response to a change in price. Most studies suggest that overall, the demand for milk is still price inelastic. However, some recent studies suggest that the demand for milk is not as price inelastic as it was previously. Moreover, some researchers have found that for many fluid milk products, demand is elastic, or that there is a greater change in demand relative to a change in price for certain types of milk. One study reported that price elasticities varied considerably by container size, type (such as white or flavored), and fat content of the milk. For instance, the study found that the demand for whole milk, skim milk, and low fat milk in half-gallon containers was price elastic. This research also suggested that carbonated soft drinks are the chief substitute or competitor for fluid milk products, while water is a complement in consumption. In another study, researchers found that the elasticities of demand for skim/low fat and whole milk brands are different. 
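Price elasticity of demand, as used in the studies discussed above, is the percentage change in quantity demanded divided by the percentage change in price. A minimal sketch using the midpoint (arc) formula follows; the half-gallon prices and sales volumes are hypothetical:

```python
# Illustrative arc (midpoint) price-elasticity calculation.
# The prices and quantities below are hypothetical.

def arc_elasticity(q1, q2, p1, p2):
    """Percent change in quantity divided by percent change in price,
    each measured against the midpoint of the two observations."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Price rises from $2.50 to $2.75 per half gallon; weekly sales fall
# from 1,000 to 880 units
e = arc_elasticity(1000, 880, 2.50, 2.75)
print(round(e, 2))  # prints -1.34
# |e| > 1 indicates elastic demand; |e| < 1 indicates inelastic demand
```

In this hypothetical case |e| exceeds 1, so demand would be classified as price elastic, the pattern the studies report for some half-gallon products, whereas overall milk demand remains inelastic.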
Demand for skim/low fat milk was found to be more price elastic than demand for whole milk, suggesting that retailers could increase overall fluid milk sales by lowering the prices of skim/low fat milk relative to prices for whole milk. As with the fluid milk processing industry, there have been trends of increasing consolidation and concentration at the retail level during recent years, especially among retail firms in some individual markets. Structural change and increased consolidation at the retail level of the milk marketing chain could lead to lower retail prices as individual retailers experience increased efficiencies in their operations. On the other hand, high levels of concentration can result in greater market power, potentially allowing firms to increase market prices above competitive levels. Also, greater market concentration at this level could increase a retailer’s buying power with fluid milk processors, potentially lowering costs. Depending upon whether these lower costs are passed on to consumers, this can either lower retail milk prices or increase the spread between wholesale and retail prices. According to USDA, since 1996, almost 4,700 supermarkets, representing $75.5 billion in sales, were acquired by other firms. Major mergers and acquisitions that have occurred in the retail food market in recent years include the following: In 1998, Kroger, the nation’s largest supermarket chain, acquired Fred Meyer, and Albertsons acquired American Stores, the second-largest at that time. In 2000, Delhaize America, operator of the Food Lion chain of stores, purchased Hannaford Brothers’ Shop ’n Save supermarkets in New England to become the eighth-largest food retailer at that time. In 2001, Kroger purchased supermarkets in Oklahoma and Texas from Winn-Dixie. 
In 2001, Safeway made several acquisitions including Genuardi’s Family Market stores (Pennsylvania, New Jersey, and Delaware), Randall’s food markets (Houston, Texas), and Dominick’s supermarkets (Chicago metropolitan area). In 2004, Albertsons, the third-largest U.S. food retailer, purchased Shaw’s, the eleventh-largest at the time. In June 2001, we reported that, for the 100 largest U.S. cities, the combined average market share of the top four firms increased from 69 percent in 1992 to 72 percent in 1998, with some variation depending upon the particular market area. An official of one large supermarket chain noted that because of Wal-Mart’s large presence in the market, other companies’ slices of the “demand pie” got thinner, providing an incentive to expand and buy out other companies. According to USDA data, the top four firms among all food retailers in 2003 were Kroger, Wal-Mart, Albertsons, and Safeway. Consolidation in food retail chains has led to high levels of concentration in individual metropolitan markets. Table 31 displays market concentration, as measured by the four-firm concentration ratio, in the 15 markets that we used in this report to analyze the spread between retail and farm milk prices. Although the precise threshold varies, some economists have characterized a market with a four-firm concentration ratio of 60 percent or greater as a “tight oligopoly,” or highly concentrated. In 2003, the levels of concentration varied by metropolitan market, with the percentage of the market held by the four largest firms ranging from 62.8 percent in the Minneapolis/St. Paul area to 84.9 percent in Denver, with an overall unweighted average of 73.9 percent. Moreover, for the 15 markets that we analyzed, the overall average four-firm concentration ratio for 1998 that we reported in 2001—74 percent—is comparable to the 2003 average.
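The four-firm concentration ratio used in table 31 is simply the combined market share of a market's four largest firms. A minimal sketch, with hypothetical market shares:

```python
def four_firm_concentration(shares):
    """CR4: sum of the four largest firms' market shares (in percent)."""
    return sum(sorted(shares, reverse=True)[:4])

# Hypothetical shares (percent of grocery sales) in one metropolitan market.
shares = [30.0, 22.0, 15.0, 8.0, 6.0, 5.0, 4.0, 10.0]
cr4 = four_firm_concentration(shares)
print(f"CR4 = {cr4:.1f}%")  # 30 + 22 + 15 + 10 = 77.0
if cr4 >= 60:
    print("Meets the 'tight oligopoly' threshold some economists use (CR4 >= 60%).")
```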
At the same time, the traditional dominance of supermarkets in food sales has been challenged by competition from new mass merchandisers and super centers such as Wal-Mart, K-Mart, and Target. These retailers tend to offer lower prices and often purchase their inventories in large quantities to pass on low prices to consumers. According to a recent USDA report, even the larger conventional food stores do not have the same buying power as these large general merchandisers. They also tend to grow by new investment in stores rather than through mergers and acquisitions, in contrast to traditional supermarkets. Figure 34 displays the change in food sales by market segment of food retailers between 1993 and 2003. Sales from supermarkets decreased from 63 percent to 58.3 percent during this time period. However, sales from warehouse clubs and super centers increased from 3.6 percent to 9 percent, while those of convenience stores and drug stores also increased—from 11.9 percent to 13.6 percent. Overall, sales from nontraditional food retailers—warehouse clubs and super centers, mass merchandisers, and convenience stores and drug stores—went from 17.2 percent in 1993 to 24.4 percent in 2003. Within the fastest-growing segment, warehouse clubs and super centers, the largest food retailer is Wal-Mart, followed by Target and Meijer, while the second fastest growing segment includes the major drug chains, such as Walgreens and CVS. As of 2003, Wal-Mart super center sales reached $103.2 billion, with estimated grocery sales of $41.3 billion. According to a recent ACNielsen study, while all U.S. households still shop in traditional grocery stores, the annual number of trips to such stores continues to decline. In contrast, super centers have shown strong gains in household penetration as well as gains in the number of trips per year. In dairy, however, conventional food stores still offer a larger selection of milk products.
A recent study by researchers at Cornell and Oklahoma State Universities on dairy case management found that the number of milk products offered was highest in supermarkets (74) and lowest in drug stores (16). While the volume of milk products was highest for mass merchandisers, the number of products (24) was similar to convenience stores (22). The authors explained that historically, mass merchandisers concentrated on moving a large volume of product with a limited variety. This appendix summarizes the findings of 14 economic studies of price transmission in U.S. fluid milk markets. These studies estimated the extent to which price changes at one level, such as the farm level, are transmitted to other levels, such as the retail level, and the time in which these price changes are transmitted. Many of the studies found a difference, or asymmetry, in either the extent or speed of price transmission, depending on whether the initial price change was an increase or a decrease (see table 32). Some of the studies analyzed possible causes for price asymmetry and often identified the presence of noncompetitive markets as a contributing factor. Although most studies estimated how prices are transmitted from the farm to the retail level, a few also estimated how price changes are transmitted from the retail level back to the farm level. How prices are transmitted within the milk marketing chain is important to policy makers because it affects both farmers and consumers. Farmers may be concerned with price transmission because they may believe increases in retail prices are not fully passed back to the farm level, while decreases are passed on. Consumers and farmers may believe that decreases in farm prices are not fully passed along to the retail level, while increases are passed on. Figure 35 illustrates price transmission through the vertical milk marketing chain; the arrows show how price signals are transmitted in both directions between marketing levels. 
The first section of this appendix is a detailed table summarizing the models, data, and key assumptions used in each study, and each study’s results. The second and third sections discuss the farm-to-retail results, including evidence of price asymmetry, with respect to the extent of price transmission and the speed of price transmission. The fourth section discusses the retail-to-farm results on price transmission. The last section discusses the studies’ findings regarding factors that might cause price asymmetry. While most of the studies summarized in table 33 use, as a basis for their models, the standard Houck (1977) and Kinnucan and Forker (1987) models to identify price asymmetry, others use newer methods such as the error correction model, which some researchers believe provides a more appropriate specification for examining asymmetric price transmission. Most of the studies estimated only “forward” price transmission, or price transmission from the farm to the retail level, but we also report on one study that estimated “backward” price transmission, from the retail level to the farm. The studies also differed in whether they estimated short-run or long-run price transmission asymmetry or both. We take all of these differences into account in interpreting the studies’ overall conclusions and discussing their results. On both national and regional or citywide levels, the majority of the fluid milk studies that we identified found evidence of farm-to-retail price transmission asymmetry in price levels. While the studies estimated a wide range of price transmission levels, in general, the estimates of price transmission for initial farm price increases were greater than for farm price decreases. 
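The intuition behind the Houck-type specification used by many of these studies can be sketched as follows: farm price changes are split into their positive and negative components, and a separate pass-through coefficient is estimated for each. This is a simplified, single-period version with simulated data; the published studies add lags, controls, and other refinements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly changes in the farm price (arbitrary units).
d_farm = rng.normal(0.0, 0.05, size=240)

# "True" asymmetric pass-through in the simulation: 90 percent of increases,
# 40 percent of decreases, plus retail-level noise.
d_retail = (0.9 * np.maximum(d_farm, 0.0)
            + 0.4 * np.minimum(d_farm, 0.0)
            + rng.normal(0.0, 0.01, size=240))

# Houck-style regression: retail price changes on the positive and negative
# components of the farm price change, estimated jointly by least squares.
X = np.column_stack([np.maximum(d_farm, 0.0), np.minimum(d_farm, 0.0)])
beta_up, beta_down = np.linalg.lstsq(X, d_retail, rcond=None)[0]

print(f"estimated pass-through of increases: {beta_up:.2f}")
print(f"estimated pass-through of decreases: {beta_down:.2f}")
# An estimate of beta_up well above beta_down is the signature of
# asymmetric price transmission.
```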
The fluid milk price transmission studies that we identified using national aggregate data estimated a wide range of price transmission levels and generally found evidence indicating asymmetric price transmission—farm price increases were more fully transmitted to retail prices than farm price decreases. Using national average farm and retail prices for whole milk, Emerick (1994) and Wang (2003) developed two similar studies that identified the degree of price transmission nationally. While both studies used similar models, the Wang study used somewhat more recent data. Taken together, both studies’ short-run results suggest that about 45 percent to 94 percent of farm price increases were passed along to the retail level, while only 31 percent to 41 percent of farm price decreases were similarly passed along. These studies estimated that in the long run, transmission levels for price increases ranged from 83 percent to 107 percent, while transmission levels for price decreases ranged from 55 percent to 64 percent. Both researchers found price asymmetry in the long run, but Wang also found price asymmetry in the short run. In his study of seven metropolitan markets across the country using data from 1994 to 2003, Capps Jr. (2004) also identified price asymmetry in a majority of selected fluid milk markets for whole and 2 percent milk. Measuring the level of price transmission by using the elasticity of price transmission, he estimated elasticities for farm price increases ranging from 0.23 to 0.58 for whole milk and 0.11 to 0.46 for 2 percent milk. For farm price decreases, these elasticities were much lower, ranging from –0.02 to 0.12 for whole and –0.06 to 0.25 for 2 percent milk, respectively. Similarly, in Linkow et al. (2004), using an asymmetric friction model of 10 metropolitan markets across the country, the authors found evidence of price asymmetry—retail prices were more responsive to cooperative farm price increases than decreases.
Many studies that used regional or city-level data also estimated a wide range of price transmission levels as well as asymmetric price transmission. In an econometric model for whole, skim, 1 percent, and 2 percent milk, Carman and Sexton (forthcoming 2005) estimated farm-to-retail price transmission for nine metropolitan markets in the Western United States. Estimated levels of price transmission for all types of fluid milk combined for the California markets ranged from 56 percent in San Diego to 122 percent in Sacramento for price increases, and from 49 percent in San Francisco to 110 percent in San Diego for price decreases. For the non-California markets, for both increases and decreases, the estimated levels of price transmission were much lower, ranging from 3 percent in Portland to 72 percent in Seattle for price increases and 6 percent in Phoenix to 83 percent in Salt Lake City for price decreases. The authors also noted that none of their price transmission parameters for all types of fluid milk for Portland was statistically different from zero, indicating no evidence that retail prices responded to farm price changes in this market. (The elasticity of price transmission from the farm to the retail level is the percentage change in the retail price of a product due to a 1 percent change in the corresponding farm price. While the elasticity estimates reported above were obtained using the Houck method, similar, although smaller, results were found using a time-series cointegration approach, the error correction model (see table 33).) One study of the Boston market estimated price transmission of 68 percent for farm price increases. Within the same study, the authors also note that Cotterill (2003), in other research for Boston, estimated a pass-through rate of between 20 and 26 percent for price decreases, suggesting price asymmetry in this market.
In another study of the Boston market, using a two-stage market channel model, Dhar and Cotterill (2002) found that the firm-specific price pass-through rate was 32 to 47 percent, while the industrywide pass-through rate was 88 to 100 percent for price increases. In yet another study of the Northeast market, Romain et al. (2002) looked at price asymmetry before and after the imposition of the New York price-gouging law in 1991, testing for asymmetry using the elasticity of price transmission. Before the price-gouging law, they found that a 1 percent increase in the farm price translated into a 0.70 percent and 0.62 percent increase in retail prices in New York City and Upstate New York, respectively, while a 1 percent decrease translated into a 0.30 percent and 0.49 percent decrease in these markets. After the law went into effect, they found that a 1 percent increase in the farm price translated into a 0.52 percent increase in retail prices in both New York City and Upstate New York, while a 1 percent decrease translated into a 0.43 percent and 0.51 percent decrease in these markets, respectively. Therefore, long-run price asymmetry was significant in both regions prior to the price-gouging law, but remained statistically significant only in New York City afterwards, though at a much lower level. We identified fewer studies that examined the speed of adjustment and related price asymmetry, or differences in the time required for farm and wholesale level price increases and decreases to be passed through to the retail level. Hansen et al. (1994), using national aggregate price data for whole milk, estimated that it took 3 months after the wholesale price increased for the retail milk price to increase, but that it took 30 months for the retail price to adjust to wholesale price decreases.
In another study using national data, Wang (2003), using a structural model, found that, for farm price increases, retail prices adjusted more quickly in the first month after the increase and more slowly in subsequent periods. Conversely, for farm price decreases, the speed of price adjustment was slower in the initial month and increased in the following months, implying speed of adjustment asymmetry. Nearly all of the studies of regional or metropolitan price transmission found asymmetry in timing: the time required for farm price decreases to be passed through to the retail level generally exceeded the time required for farm price increases. Carman (1998) found a 1-month lag for price decreases and no lag for price increases in the California markets. In a later study, Carman and Sexton (forthcoming 2005) found that for the majority of cities they analyzed, the time lags estimated for price decreases generally exceeded those for price increases. For the four types of fluid milk, Carman and Sexton found that farm price decreases generally took from 1 to 3 months to be transmitted to the retail level, while price increases took no more than 1 month. In the California markets, the authors found that, in general, retail prices responded more quickly to farm price increases than to decreases. Lass et al. (2001) found that for the Boston and Hartford markets, retail price adjustments to rising farm prices were much more rapid than similar adjustments to falling farm prices. Lass (2004) also found evidence of slower price transmission to the retail level when farm prices were falling. For markets in the Northeast, Frigon et al. (1999) reported short-run asymmetry in price adjustment for several markets: price adjustment was complete in Upstate New York after 2 months and in New York City and the Northeast United States after 3 months. The authors concluded that short-run asymmetry seemed to be milder in Upstate New York, because it lasted for only 2 months.
In the long run, after 4 months, the Northeast United States, Upstate New York, and New York City (after enactment of the gouging law) markets had fully adjusted, with only the New York City (before enactment of the gouging law) market not fully adjusting. Using data from 1971 through 1991, Emerick (1994) tested for causality in fluid milk pricing between the farm and retail levels and found that it was bidirectional. That is, the author found that for these data, in addition to farm price changes affecting retail prices, retail price changes also affect prices at the farm level. However, this specification resulted in some parameter values for retail-to-farm price transmission that were inconsistent with economic expectations. In a later study, Wang (2003) estimated results for retail-to-farm price transmission. In the short run, which was estimated to be 1 month, the author found that price increases were immediately passed on to the farm level with a level of transmission of 94 percent, while price decreases were passed on at a level of 2 percent. However, in the longer run, which was estimated as 3 months, price increases were passed through at a level of 40 percent, while price decreases were passed through at a level of 34 percent. In the long-run specification, neither increases nor decreases in price were fully passed through. Thus, while the author found price asymmetry in the short run, she found price symmetry in the long run for retail-to-farm level price changes. As in Wang’s farm-to-retail analysis, increases in the retail price were passed through nearly fully in the initial month, and then decreased substantially. However, although decreases in the retail price were not passed on initially, they were passed on at an increasing rate in the following months. Even fewer economic studies have provided evidence on what causes price transmission and price transmission asymmetry. For the U.S. 
fluid milk market in particular, we found few studies that examined factors affecting the extent of price transmission and price transmission asymmetry. While two major explanations are cited in the economic literature as central to explaining price transmission and transmission asymmetry— noncompetitive markets and adjustment costs—there are several others cited, including the role of government policies, spatial market competition, substitution in processing technology, asymmetric information, economies of scale, and differentiated products. However, only the presence of noncompetitive markets and the effects of government policies were examined in the studies of price transmission that we identified. In general, economic research has found that a higher degree of market power can reduce the degree of price transmission. Particularly relevant to the fluid milk market, researchers have also shown that the number of vertical stages and the extent to which a market varies from the competitive norm both influence the degree of price pass through. Of the 14 studies that we examined, only 5 explored the role of competition in combination with the degree of price transmission and price transmission asymmetry in the milk marketing chain. In particular, 2 of these studies examined market power stemming from product differentiation of different milk types. The evidence from these studies is somewhat mixed. While 1 study did not find a linkage between market concentration and price transmission, other studies using a variety of methods did find evidence of either a lack of price transmission or price transmission asymmetry in markets that also possessed a degree of market power. Carman and Sexton (forthcoming 2005), using multiple analytical techniques, found that fluid milk markets in the Western United States that displayed noncompetitive pricing also tended to lack price transmission and show price asymmetry. 
Using monthly data from 1999 through 2003, the authors (1) analyzed the effects of horizontal differentiation among fluid milk types by ranking milk with different fat contents for different markets based on the costs that would be predicted under perfect competition, (2) performed correlation analysis between changes in the monthly farm and retail prices of milk with different fat contents, with a lack of correlation indicating the exercise of market power, and (3) analyzed price transmission, using the estimated price transmission coefficients to gauge competition in the market. In the first analysis, the rank values for the milk types did not conform to the price rankings expected under competition, with the exception of whole milk in Seattle, which conformed in all months. Moreover, except in Portland, they found that the rankings of retail milk prices for whole, skim, 1 percent, and 2 percent milk provided evidence that prices were not based on costs, as would be expected in perfect competition. For the price correlations, the results indicated that only a few product pairs in the nine markets had a high degree of interdependence, as one would expect for close substitutes. Low correlations, ranging from nearly complete independence to moderate independence, were evident for at least one pair of products in each market. For instance, retail price changes for skim milk appeared independent of other milk prices in Sacramento, Seattle, Portland, Salt Lake City, and Denver. The authors explained that these correlations all indicated pricing inconsistent with competition. For the price transmission analysis, the estimated results differed among the California markets, depending on the city and type of milk. For instance, in certain California markets, such as Los Angeles and San Diego, the transmission of farm price decreases lagged that of farm price increases by 2 to 3 months, depending on the product, indicating price asymmetry in timing.
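The correlation analysis described above can be sketched with simulated data. The series and parameters below are hypothetical; the point is only to show why low correlations of price changes among close substitutes are read as a departure from cost-based pricing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly retail prices in one market. Under competitive,
# cost-based pricing, close substitutes track the same farm-cost path,
# so their month-to-month changes should be highly correlated.
months = 60
cost = np.cumsum(rng.normal(0.0, 0.04, months))           # common farm-cost path
whole = 2.50 + cost + rng.normal(0.0, 0.005, months)      # tracks cost
two_pct = 2.40 + cost + rng.normal(0.0, 0.005, months)    # tracks cost
skim = 2.20 + 0.2 * cost + rng.normal(0.0, 0.05, months)  # priced largely independently

def change_corr(p1, p2):
    """Correlation of first-differenced (month-to-month) price changes."""
    return np.corrcoef(np.diff(p1), np.diff(p2))[0, 1]

print(f"whole vs. 2 percent: {change_corr(whole, two_pct):.2f}")  # high correlation
print(f"whole vs. skim:      {change_corr(whole, skim):.2f}")     # low correlation
# Low correlations between close substitutes are the pattern the authors
# interpreted as inconsistent with competitive, cost-based pricing.
```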
Some of these price coefficients were consistent with competitive pricing and others were not. However, price transmission estimates for other metropolitan regions in the West (Seattle, Portland, Phoenix, Salt Lake City, and Denver) provided stronger evidence of noncompetitive pricing, and some also indicated price adjustment asymmetry, such as Salt Lake City and Phoenix. For these markets, only 3 of the 40 estimated price transmission coefficients were consistent with perfect competition. Using another model of horizontal product differentiation, a subsequent study by Sexton, Xia, and Carman (2004) econometrically estimated the timing of fluid milk price transmission and tested for market power for four California and five non-California cities from 1999 to 2003. While the results were somewhat mixed, hypothesis tests for the cities indicating oligopoly or monopoly scenarios also displayed more gradual price transmission results than those indicating more competitive scenarios, suggesting a link between noncompetitive market structures and a lack of price transmission. Chidmi et al. (2004) estimated price transmission and market power for the Boston fluid milk market using the New Empirical Industrial Organization approach and data from 1996 through 2000. The empirical results of the model, in particular the conjectural variation elasticity, suggest that participants in this market may possess market power and that supermarkets do not ignore each other’s actions. Although the model did not account for speed of adjustment, the authors estimated a price transmission level of 68 percent, suggesting that market power is associated with incomplete price transmission. A study by Frigon et al. (1999) includes a measure of market power, the four-firm concentration ratio, in its model of price transmission for Upstate New York and New York City. However, this variable did not prove to be significant. 
Later, in a similar study (Romain et al., 2002), the authors explain that their results of price asymmetry prior to 1991 in New York City were evidence that middlemen in the fluid milk market were exercising market power prior to the price-gouging law. The authors found that price asymmetry decreased after the law went into effect. They acknowledged, however, that to rigorously address the issue of a noncompetitive market, an alternative market power model would have to be developed. Two studies looked at the effects of national government intervention on price transmission. Emerick (1994) and Wang (2003) both examined the question of whether changes in dairy policy, especially the reduction in the dairy price support level that began in the mid-1980s, had changed the nature of price transmission for dairy products. Both authors basically came to the same conclusions. Emerick noted that asymmetry is more likely to have occurred since 1988, adding that the greater price volatility may have caused some difficulties for retailers and wholesalers in determining the “appropriate” price. Wang, using additional data through 1997, found that reductions in the price support level tend to have a large impact on the fluid milk and nonfat dry milk price transmission relationships. In the fluid milk market, the farm-to-retail price transmission process became asymmetric, with the greater price volatility in the post-1988 period. While the degree of price transmission increased for both increases and decreases in price, it increased proportionately more for increases than for decreases. Six of the studies examined price transmission in conjunction with other state and federal policies and programs, such as the New York price-gouging law and the Northeast Interstate Dairy Compact. Four studies, Lass et al. (2001), Lass (2004), Chidmi et al. (2004), and Dhar and Cotterill (2002), estimated price transmission while the Northeast Interstate Dairy Compact was in effect. Lass et al.
found that processors and/or retailers did not fully pass through their price increases and, in fact, may have absorbed part of the cost of the Compact’s over-order premium. In his 2004 study, Lass explained that the greater variation in farm prices that occurs without the Compact would actually lead to higher retail prices because of the larger estimated impacts on retail prices of increasing farm prices than decreasing farm prices. Dhar and Cotterill disagreed and contended in their study that the risk reduction benefit from the Compact was completely overpowered by a shift toward tacit collusion in the post-Compact period. In the studies of New York markets (Frigon et al., 1999, and Romain et al., 2002) that looked at the effect of the price-gouging law, researchers found that after the law took effect, price asymmetry was not present or was present at much lower levels. Recent changes in federal dairy programs vary in their effects on policy considerations that we identified, such as farm income, milk production, federal costs, price volatility, economic efficiency, and consumer prices. A number of options have been proposed or discussed to further modify existing programs or introduce alternative policies, all of which could affect these policy considerations in different ways. The likely effects of these program modifications or alternative policies are influenced by prevailing conditions, such as high and low dairy prices, and may be different in the short and long terms. Since 2000, three major changes have taken place in federal dairy programs. First, in response to legislative requirements, the U.S. Department of Agriculture (USDA) reformed the federal milk marketing order (FMMO) system. Second, USDA adjusted the relative purchase prices of butter and nonfat dry milk under the price support program. Finally, Congress authorized and USDA established the Milk Income Loss Contract (MILC) program. 
These changes had mixed effects on the policy considerations included in our analysis. Reforms to the FMMO system had mixed effects on farm income, depending on the geographic location of the farmer, while the overall effects on all farmers are less clear. Because of their effect on fluid milk prices, changes in the price support program tended to reduce the level of support for farm income and reduce federal costs, but increase economic efficiency. Introduction of the MILC program typically had the opposite effects, while maintaining production. In carrying out requirements in the Federal Agriculture Improvement and Reform Act of 1996 to reform FMMOs, USDA conducted extensive research and held public hearings. Agricultural Marketing Service (AMS) officials indicated that as a result of this process USDA implemented reforms to the FMMO system in January 2000 that were consistent with the findings of its research. Its major reforms included consolidating the number of marketing orders from more than 30 to 11; changing the classified pricing structure by creating a new class for manufactured milk products, Class IV, with the “higher of” the advanced Class III or Class IV skim milk values as the basis—or mover—for Class I prices; reducing the lag between the Class I and Class III and IV price announcements; establishing a fixed differential of $0.70 per hundredweight to be added to the advanced Class IV skim milk value in determining the price to be paid for milk used in Class II products; introducing a new product formula pricing system; and relaxing restrictions on pooling milk in some marketing orders. USDA implemented additional reforms to the classified pricing system in April 2003 that modified aspects of the Class III and IV pricing formulas. In response to the legislative requirement, USDA reduced the number of FMMOs to 11, which were typically combinations of pre-existing orders.
For example, the Central Order is a combination of several smaller marketing orders in the central part of the United States. According to USDA’s final regulatory impact analysis for the order reforms, these consolidation decisions were based on structural factors such as milk movement, the number of market participants, and natural boundaries. USDA officials and other dairy experts told us that nationally, the prices received by farmers for their raw milk did not change much as a result of FMMO consolidation. One academic study reported that order consolidation probably increased the economic efficiency of the FMMO system by more closely aligning areas where raw milk is marketed by dairy farmers with areas where it is distributed as fluid milk products. Additionally, the study noted that consolidation helped to reduce the amount of market distortion created by order regulation. However, the magnitude of the effects on farm income varied among orders because, in some cases, the consolidation combined orders that had substantially different raw milk utilization rates for the manufactured and fluid products in the various milk classes, particularly Class I (fluid milk). As a result, some dairy farmers experienced higher or lower utilization of their raw milk in Class I products than they had in the past. Changes in utilization rates are significant because farmers receive a blend price for their milk based on the utilization rates for the different milk classes within an order; thus, farmers in orders where Class I utilization rates increased generally saw their incomes increase, while farmers in orders where the Class I utilization rates decreased generally had their incomes reduced. The changes in utilization rates associated with FMMO consolidation were particularly evident in the Western Order. When USDA created the Western Order, it combined the Great Basin and the Southwestern Idaho–Eastern Oregon Orders. 
These orders had substantially different Class I utilization rates. In 1999, the Great Basin Order had a Class I utilization rate of 51 percent, while the Southwestern Idaho–Eastern Oregon Order had a Class I utilization rate of 8 percent. When these orders were combined into the Western Order, the resulting Class I utilization rate was estimated to be about 23 percent, lowering income for the farmers in the Great Basin Order who had previously received much higher blend prices. To address this and other concerns, Dairy Farmers of America, a cooperative representing a number of farmers in the Western Order, requested that USDA hold a hearing to reform the order’s provisions. USDA made some changes based on the concerns presented at the hearing; however, the revised order provisions did not receive the two-thirds approval necessary to be adopted, and USDA terminated the order as of April 1, 2004, stating that the continuation of the existing Western Order would not be in conformance with declared policy. Elimination of the Western Order has raised concerns that increased amounts of Idaho milk, which had been pooled on the Western Order, would be pooled on the Upper Midwest Order. Based on past experience, this would reduce the Class I utilization rate and lower the blend price for Upper Midwest farmers. However, dairy experts had mixed views on whether additional orders would be terminated. In particular, one industry expert noted that it remains unclear whether farmers in the former Western Order will be able to receive higher prices for their milk without their order. Some of these farmers, particularly those that had been in the Great Basin Order, could benefit by not having to pool their Class I milk. On the other hand, one source stated that these farmers could face increased shipping requirements to pool their milk on a remaining order. 
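The arithmetic linking utilization rates to blend prices can be sketched as follows. The class prices and the non-Class I utilization shares below are hypothetical; only the 51 percent and roughly 23 percent Class I rates come from the Western Order discussion above.

```python
# Hypothetical illustration of how an order's blend price weights class
# prices by utilization. Prices are illustrative, not actual FMMO values.
def blend_price(class_prices, utilization):
    """Utilization-weighted average of class prices ($/cwt)."""
    assert abs(sum(utilization.values()) - 1.0) < 1e-9
    return sum(class_prices[c] * utilization[c] for c in class_prices)

prices = {"I": 14.00, "II": 12.50, "III": 11.00, "IV": 10.50}  # $/cwt, hypothetical

high_fluid = {"I": 0.51, "II": 0.09, "III": 0.25, "IV": 0.15}  # e.g., former Great Basin Order
low_fluid = {"I": 0.23, "II": 0.09, "III": 0.45, "IV": 0.23}   # consolidated Western Order

print(f"blend at 51% Class I: ${blend_price(prices, high_fluid):.2f}/cwt")
print(f"blend at 23% Class I: ${blend_price(prices, low_fluid):.2f}/cwt")
# Lower Class I utilization pulls the blend price toward the lower
# manufacturing-class prices, reducing the prices farmers receive.
```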
Additionally, some experts stated that without FMMOs, farmers and cooperatives do not have the market power to obtain high prices for their raw milk in negotiations with processors. FMMO reform changed the structure of the classified pricing system by creating a new Class IV, representing the minimum price that processors pay for raw milk used in butter, nonfat dry milk, and other dry milk powders. Additionally, the new mover of Class I prices became the “higher of” the advanced Class III or Class IV skim milk values. Use of the “higher of” mover was intended to enable fluid milk processors to attract milk from butter, nonfat dry milk, and cheese processors by helping to ensure that the blend price would exceed both the Class III and IV prices. USDA also reduced the lag period—the time between when the Class I price is announced and the Class III and IV prices are announced—from approximately 8 weeks to 6 weeks. Class I prices are announced in the month preceding the month to which they apply, based on the “higher of” the advanced Class III and IV skim milk values. However, the Class III and IV prices that determine the price of raw milk used to manufacture these products are not announced until the Friday on or before the 5th of the month following the month to which they apply. Consequently, there is a 6-week lag between these two price announcements. Further, USDA established that the minimum prices paid for skim milk used in Class II products would be the advanced Class IV skim milk and butterfat values, plus a fixed differential of $0.70 per hundredweight. In its March 1999 regulatory impact analysis, USDA concluded that these changes would help to eliminate situations in which prices of milk used in manufactured products rise above the price of milk used in fluid milk products and thus make the Class I mover more representative of current market conditions.
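As a rough sketch of the pricing rules just described: the Class I skim mover is the “higher of” the advanced Class III and IV skim milk values, and the Class II skim price adds a fixed $0.70-per-hundredweight differential to the advanced Class IV skim value. The advanced values used below are assumed inputs:

```python
# Sketch of the Class I mover and Class II skim price rules described in
# the text. The advanced skim milk values are assumed inputs; the
# $0.70/cwt Class II differential is the figure cited in the text.

def class_i_skim_mover(adv_iii_skim, adv_iv_skim):
    # Class I mover: the "higher of" the two advanced skim values.
    return max(adv_iii_skim, adv_iv_skim)

def class_ii_skim_price(adv_iv_skim, differential=0.70):
    # Class II skim price: advanced Class IV skim value plus the differential.
    return adv_iv_skim + differential

adv_iii, adv_iv = 9.80, 10.40  # assumed advanced skim values, $/cwt
print(f"Class I skim mover:  {class_i_skim_mover(adv_iii, adv_iv):.2f}")
print(f"Class II skim price: {class_ii_skim_price(adv_iv):.2f}")
```

In this example the higher Class IV value serves as the mover, so the Class I price tracks Class IV rather than Class III.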
Academic researchers and an industry official indicated that the creation of Class IV perpetuated disincentives, present before the 2000 reforms, to shift milk to its highest-valued use. Previously, separate minimum prices were established for raw milk used in manufactured products that are now included in the new Class IV. Raw milk used in the production of butter was priced under Class III, which also included cheese and other products. However, nonfat dry milk was priced in a separate Class III-A. According to AMS officials, the development of Class III-A was necessary because manufacturers were unable to sell nonfat dry milk at market prices that would allow them to pay the Class III minimum price for their raw milk. They noted that for a classified pricing system to work, the minimum class prices must be below the market clearing prices for products produced with that raw milk (taking into account the cost of other inputs to these products). However, one study found that the creation of Class IV institutionalized separate pricing for nonfat dry milk. By separating out the price for nonfat dry milk (the lowest-valued use), the classified pricing system might maintain production of nonfat dry milk even when market signals indicate that raw milk should be used to manufacture cheese as the higher-valued use. Additionally, a 2004 study sponsored by the American Farm Bureau Foundation for Agriculture (American Farm Bureau) reported that creating a separate Class IV, and then basing Class I prices on the “higher of” the advanced Class III or IV skim milk values, has reduced the influence that cheese prices traditionally had over other prices in the FMMO system, and thus partially isolated Class I prices from market forces. For example, in every month from January 2000 through July 2001, advanced Class IV skim milk values were higher than advanced Class III skim milk values.
However, as of 2000, utilization of milk for Class IV products across all federal orders averaged 7 to 8 percent, while Class III products accounted for about 45 percent of milk utilization. According to the American Farm Bureau study, without the advanced Class III skim milk value as the mover for Class I prices, when the Class IV price exceeds the Class III price, similar price signals are no longer received by farmers in relatively high Class I utilization markets and in high Class III utilization markets. This difference occurs because during these times farmers in high Class I utilization markets are receiving their price signals based on the high Class IV prices, which are heavily influenced by the price support program during periods of excess production and low manufacturing product prices. Therefore, farmers in high Class I utilization areas receive higher farm prices than would otherwise be the case, and higher prices encourage increased production by these farmers. However, the higher production levels of these farmers put downward pressure on the Class III prices and cause regional inequities in farm income. Furthermore, because Class I prices are now more closely related to the level at which the price support program sustains nonfat dry milk prices, proposed changes to the price support program have become much more controversial. Prior to the 2000 FMMO reforms, Class I prices were based on the Class III price, which, as noted previously, did not include nonfat dry milk. However, with the 2000 reforms, the level of support provided by the price support program for nonfat dry milk prices directly influences the Class IV price. During periods when the Class IV price is higher than the Class III price, changing the price support program in such a way that Class IV prices are reduced will cause the Class I price to similarly fall, thus having a greater impact on the overall blend prices received by farmers.
As part of FMMO reform, USDA introduced a new product formula pricing system that established minimum prices for raw milk based on milk component values for butterfat, protein, nonfat solids, and other solids. These values are derived from the wholesale prices of cheddar cheese, butter, nonfat dry milk, and dry whey as announced in weekly surveys conducted by the National Agricultural Statistics Service. The minimum prices also factor in allowances based on estimates of manufacturing costs for these products and product yield factors representing the amount of a particular product that can be manufactured from specified quantities of the underlying components. Seven of the 11 orders (primarily the Northern orders) adopted the new product formula pricing system, while the other four orders (primarily the Southern orders) use a pricing system that bases milk prices on skim milk and butterfat. During much of the time that classified pricing has been part of the federal order system, the formulas used to set minimum prices paid to farmers were based on competitive pay prices. The pay price was known as the Minnesota–Wisconsin price, and it represented the results of state surveys of competitive market prices for Grade B milk paid by manufacturing plants in Minnesota and Wisconsin. However, with a reduction in Grade B milk production, this milk was very thinly traded and the pricing series became less representative of the value of Grade A milk used for manufacturing. According to a number of dairy experts, the change from a competitive pay to a product formula pricing system that incorporates fixed manufacturing allowances has enhanced the effects of price volatility on dairy farmers. As noted in appendix V, there are a variety of input costs to the manufacturing process, including labor, energy, and capital. With product formula pricing, manufacturing allowances, which are supposed to compensate for these other input costs, and product yield factors are fixed. 
To the extent that changes in these other input prices are reflected in the prices at which manufacturers sell their products, fixed manufacturing allowances will allow changes in other input costs to more readily affect the minimum raw milk prices paid to farmers. Dairy experts also indicated that the fixed manufacturing allowances in the product pricing formulas reduced economic efficiency by reflecting raw milk supply and demand conditions less clearly. Moreover, one large processor stated that the manufacturing allowances in the pricing formulas are too low and do not adequately represent the costs of manufacturing. Regardless of the market price of cheese, butter, or nonfat dry milk, the fixed manufacturing allowances provide manufacturing plants with the same net returns from 100 pounds of raw milk. Therefore, when market conditions reflect higher prices for one of these products, relative to the others, manufacturers have less of an incentive to shift production to the higher-valued use because any gains they might have realized from selling a higher-priced product would be negated by the fact that their manufacturing allowance is fixed. The 2004 American Farm Bureau study noted that prior to the introduction of the new product formula pricing system, manufacturers that produced butter, cheese, and nonfat dry milk competed more aggressively for raw milk. The study found that if the prices of nonfat dry milk and butter, for example, were depressed relative to cheese prices, cheese manufacturers would attract milk away from the manufacturers of these other products. Therefore, raw milk would more readily move to its highest-valued use. 
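The product formula structure described above, with its fixed manufacturing allowance and yield factor, can be sketched as follows. The cheddar price, make allowance, and yield used here are illustrative assumptions, not the actual regulatory values:

```python
# Illustrative component-price formula of the general form used in the
# product formula pricing system:
#   (survey product price - manufacturing allowance) x yield factor.
# The cheddar price, make allowance, and yield below are assumed.

def component_price(product_price, make_allowance, yield_factor):
    """Component value ($/lb) implied by a wholesale product price."""
    return (product_price - make_allowance) * yield_factor

# Assumed: cheddar at $1.30/lb, a $0.17/lb make allowance, and 1.38 lb of
# cheese yielded per lb of protein.
print(f"implied protein value: ${component_price(1.30, 0.17, 1.38):.4f}/lb")

# Because the allowance and yield are fixed, any change in the survey
# product price passes straight through to the price paid to farmers:
print(f"after a $0.05 product-price rise: ${component_price(1.35, 0.17, 1.38):.4f}/lb")
```

The fixed allowance is the point at issue in the text: if manufacturers' actual input costs rise and are recovered through higher product prices, the full change flows through to the minimum raw milk price rather than being absorbed in the allowance.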
Further, some dairy experts noted that the additional volatility introduced by the fixed manufacturing allowances in the new product formula pricing system, when combined with the disincentives these allowances and separate manufacturing classes create against shifting milk to its highest-valued use, might have contributed to negative producer price differentials and de-pooling. Negative producer price differentials can occur because with the 6-week lag between the Class I and Class III and IV price announcements, rapid increases in the manufactured product prices from which Class III and IV prices are derived can raise these prices above the Class I price. USDA officials noted that the change to a “higher of” mover for Class I prices and the reduction of the lag period were designed to reduce the frequency of negative producer price differentials. However, to the extent that the fixed manufacturing allowances have introduced additional volatility into the pricing system, and the disincentives created by these fixed manufacturing allowances and separate manufacturing classes have prevented raw milk supplies from moving to their highest-valued use, negative producer price differentials and de-pooling have continued. During times when the producer price differential is negative, some processors of manufactured products who normally receive a draw from their federal order pool to pay farmers instead have to pay into the pool. In such circumstances, many of these processors choose to de-pool because by doing so, they gain a competitive advantage over those that remain and have to pay into the pool. One study on FMMO pooling issues reported that since June 2003, negative producer price differentials and de-pooling have become more common. The study noted that the producer price differential in the Upper Midwest Order was negative from July through November 2003 and reached a record low of negative $4.11 per hundredweight in April 2004.
For example, cheese prices in the Upper Midwest Order began to rise sharply in mid-July 2003; thus the advanced Class III skim milk value that served as the Class I mover for August did not include these higher prices. However, the Class III price that was announced in August did include these higher prices, creating a negative producer price differential. As a result, a number of cheese processors de-pooled in August, reducing the order’s Class III utilization, which is usually around 75 to 77 percent, to just 8.4 percent. Nationally, negative producer price differentials were reported for this month in the 7 FMMOs that used the new product formula pricing system. For the 11 FMMOs existing at the time, de-pooling resulted in 33 percent less milk being pooled compared to the same month in the prior year. De-pooling reduces the overall value of the federal order pool and increases differences in the abilities of processors to pay for raw milk. As a result, dairy farmers do not receive uniform prices. Farmers marketing milk with processors who are able to de-pool may receive higher prices and thus an increase in farm income, while farmers marketing milk with processors who do not de-pool may receive lower prices. According to the June 2004 University of Wisconsin study, this situation represents an inequity and is contrary to one of the stated purposes of the FMMO system: orderly marketing conditions. In some cases, the 2000 FMMO reforms resulted in more relaxed pooling provisions. AMS officials noted that when USDA consolidated the marketing orders, each of the pre-existing orders had its own pooling provisions, such as minimum amounts of raw milk required to be shipped to processing plants participating in that order’s pool to qualify for its blend price or restrictions on rejoining the pool after de-pooling.
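The negative-differential mechanics in the cheese-price example above can be illustrated with assumed class prices and utilization rates. The producer price differential is approximated here as the pool's weighted-average class value minus the announced Class III price:

```python
# Rough sketch of how a rapid Class III run-up can turn the producer price
# differential (PPD) negative. All prices and utilization rates here are
# assumed; the PPD is approximated as the pool's weighted-average class
# value minus the announced Class III price.

def ppd(class_prices, utilization, class_iii_price):
    pool_value = sum(class_prices[c] * utilization[c] for c in utilization)
    return pool_value - class_iii_price

util = {"I": 0.15, "II": 0.05, "III": 0.77, "IV": 0.03}  # high Class III order

# Normal month: the advance-set Class I price sits above Class III,
# so the differential is positive.
normal = ppd({"I": 13.50, "II": 12.00, "III": 11.50, "IV": 11.00}, util, 11.50)

# Spike month: cheese prices jump after the Class I price was announced,
# so the Class III price overshoots the advance Class I price.
spike = ppd({"I": 13.50, "II": 12.00, "III": 15.00, "IV": 11.00}, util, 15.00)

print(f"normal-month PPD: {normal:+.2f}  spike-month PPD: {spike:+.2f}")
```

A plant facing a negative differential must pay into the pool rather than draw from it, which is the de-pooling incentive described above.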
AMS officials indicated that where two orders were combined that had different pooling provisions, USDA applied the more liberal pooling standard to the combined order to prevent farmers from being shut out of the consolidated order pool. According to AMS reports from 2000 and 2001, relaxed pooling provisions contributed to the pooling of more distant raw milk to receive other orders’ attractive blend prices. Pooling was easier because most of the milk that was pooled from outside individual federal orders was not required to actually be shipped to those orders. Therefore, distant farmers were able to share in an order’s blend price without incurring substantial transportation costs for shipping milk. For example, under the Upper Midwest Order’s pooling provisions, an Idaho dairy cooperative could choose to ship raw milk from some of its Idaho farmers to a processing plant that participates in the Upper Midwest Order. All subsequent milk deliveries of those designated farmers would be priced under the Upper Midwest Order, even if only one day’s production was actually shipped to the participating processing plant. Other deliveries from these farmers would stay in Idaho for processing. In 2001, USDA reported that approximately 4 billion pounds of raw milk from California was pooled on the Central, Upper Midwest, and Western Orders. However, most of this milk was actually processed in California plants that are not regulated by the federal order system. Also during 2001, large volumes of raw milk from Minnesota and Wisconsin were pooled on the Central, Mideast, and Northeast Orders, while increasing amounts of raw milk from Idaho were pooled on the Upper Midwest Order. According to some dairy experts, the increased pooling of milk across orders has had mixed effects on farm income. Those farmers who were able to have their milk pooled on distant orders that had higher blend prices received an increase in their farm income after accounting for the costs of transporting their milk.
However, farmers in the receiving orders had their farm income reduced as milk from outside the orders reduced the value of the pool that could be shared among farmers in the receiving orders. The combination of more milk pooled on these orders and constant sales of higher-valued Class I and II products decreased the weighted average value of the orders’ pools by decreasing the utilization rates of the higher-valued classes. In response to this loss in value, participants in some orders petitioned USDA to hold hearings to address relaxed pooling provisions. For example, through the hearing process, the Central and Mideast Orders tightened their pooling provisions to control the large quantities of milk from Minnesota and Wisconsin that were being pooled on these two orders. With these tightened provisions, those seeking to pool milk on the Central or Mideast Orders have to ship more milk per year or meet other requirements to become eligible to share in the receiving orders’ blend prices. A number of dairy experts indicated that these changes have significantly reduced the incentive for distant pooling on these orders. USDA made additional reforms to the classified pricing system that went into effect in April 2003, modifying aspects of the Class III and IV pricing formulas. The principal changes in 2003 were increasing the manufacturing allowance in the formula that established a price for the other solids component of milk used in Class III products; eliminating the lower bound of zero on the Class III other solids component price; reducing the product yield for the nonfat solids components of milk used in Class IV products; and altering the Class III protein formula to prevent Class III prices from being lowered by rising butter prices. The 2003 changes were partly the result of a court-ordered injunction against the implementation of other changes that USDA had proposed based on the 2000 requirement that USDA reconsider the Class III and IV pricing formulas. 
An analysis by researchers at the University of Wisconsin before implementation of the changes indicated that the effects of many of these changes would not be dramatic. However, the researchers estimated that the changes would increase Class III prices by as much as $0.57 per hundredweight. More specifically, the study found that the 2003 changes would eliminate the negative effect that rising butter prices were having on the Class III price. Under the prior protein price formula, a $0.10 per pound increase in the butter price would lower the Class III price by $0.04 per hundredweight. The researchers found that the revised formula instead yields about a $0.04 per hundredweight increase in the Class III price for a $0.10 per pound increase in the butter price. The study also reported that the new protein price formula would make it somewhat less likely that the advanced Class IV skim milk value rather than the advanced Class III skim milk value would consistently serve as the mover for Class I prices. Increased Class III prices would most likely benefit farmers in areas where cheese is an important commodity and where processors do not typically pay premiums in excess of federal order minimum prices. Since 2000, USDA twice adjusted—or tilted—the purchase prices of butter and nonfat dry milk as part of its efforts to administer the dairy price support program. The first tilt occurred in May 2001, when USDA reduced the nonfat dry milk purchase price by approximately $0.10 per pound (to $0.90 per pound) and increased the butter purchase price by about $0.20 per pound (to approximately $0.85 per pound). USDA adjusted the tilt again in November 2002 by reducing the nonfat dry milk purchase price another $0.10 per pound (to $0.80 per pound) and increasing the butter purchase price approximately $0.20 per pound (to $1.05 per pound).
USDA took these actions because the Commodity Credit Corporation (CCC) was accumulating large stocks of nonfat dry milk, leading to high purchase and storage costs for USDA, as well as significant market distortions. As a result of the 2000 FMMO reforms, the federal order class prices and the level of support provided by the price support program for nonfat dry milk were tied more closely. Many dairy experts noted that, subsequently, tilts became more politically controversial because they can have a greater negative effect on the FMMO class prices. Lowering the purchase price of nonfat dry milk while raising the purchase price of butter decreases the overall Class IV price when market prices for nonfat dry milk are at the level of the purchase price and market prices for butter are above the level of the purchase price. When the advanced Class IV skim milk value is serving as the mover for the Class I price, this reduction in Class IV prices also reduces Class I prices. Further, because the advanced Class IV skim milk value serves as the basis for Class II prices, Class II prices are similarly reduced. This scenario occurred during both the May 2001 and November 2002 tilts. A representative of one dairy cooperative stated that these impacts were particularly pronounced in areas with high Class I utilization rates, such as the Northeast and Southeast. According to a report published by the International Trade Commission in May 2004, estimates of the actual impacts of these tilts on farm prices varied. The study presented a USDA estimate that the November 2002 tilt reduced fiscal year 2003 average milk prices from $12.10 to $11.90 per hundredweight. While USDA reported that this decrease lowered the amount of raw milk produced and thus was partially offset by an increase in butter prices from reduced production, it still led to a loss in net farm income of $192 million. 
Alternatively, the study reported that the National Milk Producers Federation estimated that the two tilts ultimately lowered farm prices by $0.19 per hundredweight in 2001, $0.48 per hundredweight in 2002, and $0.76 per hundredweight in 2003. With these price reductions, the organization projected that farm income would fall by $156 million in 2001, $816 million in 2002, and about $1.3 billion in 2003. Another study cited by the International Trade Commission’s report estimated that the 2002 tilt could have decreased average milk prices by $0.16 per hundredweight, reducing production by 814 million pounds and farm income by $371 million. However, that study also found that these impacts varied substantially, depending upon the assumption of high or low prices and the effects of other government programs. While the tilts reduced farm income and raw milk production, a number of dairy experts indicated that USDA’s tilts have increased economic efficiency and reduced federal costs associated with the dairy price support program. Additionally, some experts noted that by maintaining nonfat dry milk prices at artificially high levels, the price support program was inducing surplus production of nonfat dry milk. In some cases, nonfat dry milk was produced specifically for sale to the government at the CCC purchase price. From the beginning of October 2000 through the end of May 2001, the CCC purchased approximately 330 million pounds of nonfat dry milk (more than 40 percent of national production), and government purchase costs exceeded $340 million. Furthermore, in 2002, the CCC stocks of nonfat dry milk were equivalent to two-thirds of domestic production and exceeded annual domestic consumption by more than 30 percent. By reducing the purchase price of nonfat dry milk, USDA reduced the incentive to produce surplus nonfat dry milk. 
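A back-of-the-envelope sketch shows why a tilt flows through to the Class IV skim value when nonfat dry milk trades at the support purchase price. The make allowance and the 9-pound solids yield per hundredweight of skim milk used here are assumed illustrative factors, not the regulatory values:

```python
# Illustration of how a tilt in the nonfat dry milk (NFDM) purchase price
# lowers the Class IV skim value when NFDM trades at the support price.
# The $0.14/lb make allowance and 9-lb yield are assumed, not regulatory.

NFDM_YIELD = 9.0  # assumed lb of nonfat solids per cwt of skim milk

def class_iv_skim_value(nfdm_price, make_allowance=0.14):
    # Skim value ($/cwt) implied by the NFDM market price.
    return (nfdm_price - make_allowance) * NFDM_YIELD

before = class_iv_skim_value(0.90)  # support price before the 2002 tilt
after = class_iv_skim_value(0.80)   # support price after the 2002 tilt
print(f"skim value falls by about ${before - after:.2f}/cwt")
```

Under these assumptions, the $0.10-per-pound cut in the NFDM purchase price translates into roughly a $0.90-per-hundredweight drop in the Class IV skim value, which then flows into Class I and II prices whenever the Class IV value serves as the mover.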
The International Trade Commission study reported that while production of nonfat dry milk continued to rise between 2001 and 2002, after the second tilt production declined by about 5 percent between 2002 and 2003. The tilts also helped to reduce federal costs associated with purchasing and storing nonfat dry milk. Some sources also indicated that the tilts affected the balance of trade in dairy products between the United States and its trade partners. The International Trade Commission study reported that during the majority of the period from January 1998 to November 2002, U.S. prices for nonfat dry milk exceeded international market prices by more than $500 per metric ton. Consequently, domestic manufacturers had an incentive to import alternative dairy protein products such as milk protein concentrates. By lowering the purchase price of nonfat dry milk through the tilts, USDA decreased this incentive because domestic manufacturers could obtain nonfat dry milk more cheaply. With the introduction of the MILC program in 2002, dairy farmers began receiving payments on milk production up to 2.4 million pounds annually when the Class I price in Boston dropped below $16.94 per hundredweight. MILC payments are equal to 45 percent of the difference between $16.94 and the lower Boston Class I price. From the program’s inception, MILC payments were made every month from the retroactive start date of December 2001 through August 2003 because there was an extended period of depressed farm prices, which reached a 25-year low in early 2003. Prices temporarily recovered from September through December 2003, so no MILC payments were made in those months; however, payments resumed during January and continued through April 2004. During the spring of 2004, farm milk prices reached record highs and remained strong through the fall of 2004, so no MILC payments were required for the remainder of fiscal year 2004. 
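The MILC payment rule described above can be expressed directly. The trigger price, payment share, and production cap are the figures cited in this report; the example prices and farm sizes are assumed:

```python
# MILC payment rule as described in the text: 45 percent of the shortfall
# of the Boston Class I price below $16.94/cwt, paid on up to 2.4 million
# pounds of annual production. Example prices and farm sizes are assumed.

TARGET = 16.94       # $/cwt Boston Class I trigger price
RATE = 0.45          # payment share of the shortfall
CAP_LBS = 2_400_000  # annual production cap

def milc_payment(boston_class_i, annual_production_lbs):
    shortfall = max(0.0, TARGET - boston_class_i)
    eligible_cwt = min(annual_production_lbs, CAP_LBS) / 100.0
    return RATE * shortfall * eligible_cwt

# A 2-million-pound farm in a year when Boston Class I averages $13.00:
print(f"${milc_payment(13.00, 2_000_000):,.0f}")
# No payment when the Boston price is at or above the trigger:
print(f"${milc_payment(17.50, 2_000_000):,.0f}")
# A larger farm receives no additional payment beyond the cap:
print(f"${milc_payment(13.00, 10_000_000):,.0f}")
```

The cap is what drives the size-dependent effects discussed below: a farm at or under roughly 2.4 million pounds has all of its production covered, while larger farms receive the same capped payment on a much larger output.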
As a result of depressed farm milk prices during 2002 and 2003, federal costs associated with MILC payments exceeded original estimates. Based on market conditions in March 2002, the Congressional Budget Office estimated total federal costs of the MILC program at $963 million over the life of the program (i.e., about 4 years). However, 1 year later, the Congressional Budget Office revised its total cost estimate for the MILC program to $4.2 billion. USDA distributed approximately $1.8 billion in MILC payments to dairy farmers in fiscal year 2003. Thus, the cost of MILC through fiscal year 2003 alone exceeded the previously estimated total costs for the entire 4-year period through 2005 by about $800 million. The Congressional Budget Office’s March 2004 estimate for total MILC program costs was $3.8 billion, somewhat lower than the 2003 estimate because of higher farm milk prices in 2004. Many dairy experts indicated that by providing income support during low- price periods, the MILC program has helped keep some farmers, particularly smaller farmers, in business. For example, some academic experts noted that some farmers received MILC checks of about $20,000 to $25,000, and others said that despite low prices, fewer farmers exited the market in 2003 than in previous years. USDA officials indicated that these payments delayed the supply response to low prices and maintained depressed milk prices over a longer period of time. By providing direct payments when prices were low, MILC obscured market signals that would normally cause farmers to decrease production, and continued high levels of production retained downward pressure on milk prices. To the extent that these lower farm prices were passed on through the retail level, consumers may have experienced lower prices for dairy products. 
Despite this effect, dairy experts stated that smaller farmers receive a net benefit from MILC because those with about 100 to 130 cows can have all of their production covered under the 2.4 million-pound annual cap. In contrast, larger farmers do not receive net benefits from MILC because the negative farm income effects of reduced milk prices are greater than the payments they receive under the production cap. A couple of dairy experts noted that the break-even size, at which MILC payment benefits just offset the negative farm income effects of prolonged low prices, is about 400 cows. Because the effects of MILC vary by producer size, they also vary regionally. States with many small dairy farmers, such as Pennsylvania, Wisconsin, and Vermont, have received greater proportional benefits from MILC. However, the MILC program has disadvantaged states with larger producers, such as western states. A number of options have been proposed or discussed to further modify existing programs and policies or introduce alternative ones, all of which could affect the policy considerations we identified in different ways. These options span a range of existing and potential federal dairy programs and policies, including FMMOs, price supports, MILC, target price deficiency payments, the proposed National Dairy Equity Act, trade restrictions and export incentives, risk management, and supply management. Current international trade agreements and ongoing negotiations can have implications for certain of these policy options, such as price supports, export incentives, and trade restrictions. The purpose of this analysis is not to take a position for or against any of these options or to analyze them in terms of their overall economic impacts, but simply to discuss their likely effects on the policy considerations we identified. 
The likely effects of these alternatives sometimes differ under various scenarios, such as high or low prices, and may be different in the short and long terms. In general, options that increase farm income over the short term also tend to increase milk production and thus the potential for oversupply and lower average farm prices over the long term. These options also tend to be costly for the federal government during periods of low prices. In some cases options that increase the economic efficiency of federal dairy programs also increase price volatility because they allow clearer transmission of market price signals. Further, to the extent that price changes at the wholesale level are passed through to the retail level, a number of options would likely have mixed effects on consumer prices depending upon the particular product under consideration (e.g., butter, cheese, or fluid milk). The potential impacts of the options also vary according to the size of the producer and region of the country. In general, options that affect farm income without respect to farm size or cost of production could further shift production towards larger, western farms. Production shifts toward larger farms could increase the potential for oversupply in the market, because such farms have a greater capacity to increase production in response to policy incentives. In some cases, options that reduce support for farm income could have disproportionately negative impacts on smaller farmers, who often have higher costs of production. 
Dairy experts have cited a number of concerns with FMMOs, including that with the increasing ability to transport milk products longer distances, the differences in Class I differentials provide incentives for overproduction in some regions; that recent revisions to the classified pricing system enhanced the effects of price volatility on dairy farmers; that changes in pooling restrictions increased the flow of milk between different regions of the country, which disrupts the market; that consolidation of the FMMOs combined some areas of the country that were not part of the same natural “milksheds;” and that it takes too long to change the federal order system through the USDA hearing process. A number of options have been proposed or discussed to modify the FMMO program, including revising the classified pricing system, making administrative changes such as tightening pooling provisions, or eliminating FMMOs altogether. Figure 36 shows the effects of various options to change the FMMO program over the short and long terms. Academic, industry, and government sources cited a variety of concerns with recent reforms in the classified pricing system, including that these changes enhance the effects of price volatility on farmers; lessen the transmission of market price signals; reduce incentives for market participants to shift milk to its highest-valued use; discourage innovation; and contribute to de-pooling during periods of price volatility. A number of options exist to revise the classified pricing system, including basing the class pricing formulas on competitive pay prices instead of product prices, combining Class III and IV into a single manufacturing class of milk, and changing the Class I mover from being the “higher of” the advanced Class III or IV skim milk values to a weighted average of these prices.
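The difference between the current “higher of” mover and a weighted-average mover can be sketched as follows. The advanced skim values and the utilization-based weight are assumed for illustration:

```python
# Comparison of the current "higher of" Class I mover with a proposed
# weighted average of the advanced Class III and IV skim milk values.
# The advanced values and the utilization-based weight are assumed.

def higher_of(adv_iii, adv_iv):
    return max(adv_iii, adv_iv)

def weighted_average(adv_iii, adv_iv, iii_share=0.85):
    # Assumed weight reflecting the much larger Class III utilization.
    return iii_share * adv_iii + (1 - iii_share) * adv_iv

adv_iii, adv_iv = 9.50, 10.60  # assumed advanced skim values, $/cwt
print(f"higher-of mover:        {higher_of(adv_iii, adv_iv):.2f}")
print(f"weighted-average mover: {weighted_average(adv_iii, adv_iv):.2f}")
```

Because Class IV utilization is small, the weighted average damps the influence of a high Class IV price on the Class I price, whereas the “higher of” rule lets it set the mover outright.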
Use Competitive Pay Prices Instead of Product Formula Pricing

One option to reform the classified pricing system would be to return to a system of competitive pay prices as the basis for the minimum prices that manufacturers pay for raw milk. As indicated by AMS officials, the change to a product formula pricing system has enhanced the effects of price volatility on the prices that farmers receive for their raw milk. The AMS officials, as well as academic and industry sources, noted that competitive pay prices for raw milk would be a better basis for the minimum class prices. However, the officials noted that the lack of good data is a major challenge to developing a competitive pay system that is more broadly based than the old Minnesota-Wisconsin price series. In an earlier report, we compared the concept of a competitive pay system to a product formula pricing system, among other options. We found that while a product formula pricing system would be superior to other mechanisms in reflecting national prices of manufactured dairy products, the accuracy of price levels under the system would depend on a number of factors, including whether the manufacturing allowances—deductions made for the costs of manufacturing different dairy products—are accurate. Setting accurate manufacturing allowances is difficult because individual plants could have different cost structures. As noted earlier, we heard from several academic, industry, and government sources that the fixed manufacturing allowances in the current product pricing formulas have negative impacts on the economic efficiency of the classified pricing system by reflecting supply and demand conditions less clearly.
On the other hand, we found that while a competitive pay system for Grade A milk was similar to a product formula pricing system in that it would generally reflect national prices of manufactured dairy products, it would more readily reflect national supply and demand conditions for raw milk used for manufacturing. Also, it would more accurately reflect competitive pressures from the fluid milk market because Grade A milk is used to meet shortages in areas of the country with high Class I utilization. Furthermore, we reported that a competitive pay system would be better than a product formula pricing system at self-adjusting automatically because the competitive system would be based on actual reported prices. Therefore, a competitive pay system could improve the economic efficiency of the classified pricing system by providing clearer market price signals. AMS officials stated that if it were possible to obtain adequate data, returning to a competitive pay system would introduce greater stability into farm milk prices because basing the price formulas on competitive pay prices allows manufacturers' margins to be set outside the federal order system. According to the officials, when manufacturing costs increase, manufacturers tend to decrease their margins; they then increase their margins when costs go back down. The AMS officials also stated that to the extent the effects of price volatility are reduced by eliminating fixed manufacturing allowances, raw milk production would increase, holding average milk prices constant.

Combine Class III and IV

A second option discussed by some of the academic and industry sources we contacted would be to combine Class III and IV into a single manufacturing class.
An objective of USDA's FMMO reforms, including the development of separate manufacturing classes on which to base fluid milk prices, was to avoid situations in which the price of milk used in manufactured products rises above the price of milk used in fluid products; as we noted earlier, however, this change has muted market price signals and has reduced incentives to move milk to its highest-valued use. However, an academic source indicated that a potential challenge in combining Class III and IV is identifying an appropriate formula that considers the products in an expanded class. In addition, AMS officials cited this issue, noting that one barrier would be finding a way to price the lowest-valued use, nonfat dry milk, so that manufacturers of this product would be able to afford to pay the minimum class price to farmers. A number of industry and academic sources said that combining Class III and IV into a single manufacturing class would allow milk to move to its highest-valued use. With a separate Class IV, processors of butter and nonfat dry milk can pay less for their raw milk supplies under certain market conditions, which can stimulate additional allocation of raw milk into Class IV products. Under such conditions, by reducing these market distortions, combining Class III and IV might help to increase the economic efficiency of the classified pricing system. It could also help to limit the decline in market prices caused by overproduction of nonfat dry milk and thus reduce the federal costs of CCC purchases of this commodity. Over the long term, limiting overproduction incentives would help all farmers by maintaining higher average farm prices. However, these benefits could be limited by constraints on the extent to which raw milk is free to move between different uses. Some USDA officials have indicated that manufacturing capacity for different products varies by region and that manufacturing plants are often specialized.
In addition, fixed supply agreements within the dairy industry may not allow manufacturers significant freedom to shift raw milk between uses in response to price signals in the short term. Combining Class III and IV could have mixed regional impacts on farm income because with a combined manufactured product price, high prices for one particular use of milk would offset low prices for other uses in creating a weighted average price. Utilization rates for the different manufacturing uses vary among orders, and farmers whose average blend prices might be higher based on utilization of their raw milk for a higher-valued use under a separate class price scenario could experience lower average returns under a combined class price scenario if the price of their higher-valued use were weighted down by the inclusion of the lower-valued use. Conversely, farmers whose average blend prices might be lower based on utilization of their raw milk for a lower-valued use under a separate class price scenario could experience higher average returns under a combined class price scenario. For example, in regions with higher Class IV utilization, the loss of a separate Class IV could result in lower average farm income during periods when Class IV prices would otherwise be higher than Class III prices. In regions with higher Class III utilization, however, the loss of a separate Class IV could result in higher average farm income during periods when Class III prices would otherwise be lower than Class IV prices.

If Class III and IV prices are kept separate, a third option to modify classified pricing would be to use a weighted average of Class III and IV prices as the mover for Class I prices. This option would tie fluid milk prices more closely to market-related manufacturing prices, particularly when prices of Class III products are depressed relative to Class IV products.
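The weighted-average arithmetic behind these regional effects on blend prices can be made concrete with a small sketch. All class prices, utilization rates, and the equal-volume weighting of the combined manufacturing class below are hypothetical illustrations, not USDA data or formulas.

```python
# Illustrative blend-price arithmetic (all numbers hypothetical, not USDA data).
# A producer's blend price is a utilization-weighted average of class prices;
# combining Class III and IV replaces two separate prices with one average.

def blend_price(utilization, class_prices):
    """Utilization-weighted average of class prices, in $/cwt."""
    assert abs(sum(utilization.values()) - 1.0) < 1e-9
    return sum(share * class_prices[cls] for cls, share in utilization.items())

separate = {"I": 14.00, "III": 12.00, "IV": 10.00}  # assumed class prices, $/cwt
combined_mfg = (12.00 + 10.00) / 2                  # assumes equal III/IV volumes
combined = {"I": 14.00, "M": combined_mfg}

# Region with high Class III utilization: its higher-valued use is averaged down.
high_iii_sep = blend_price({"I": 0.2, "III": 0.7, "IV": 0.1}, separate)   # ~12.20
high_iii_comb = blend_price({"I": 0.2, "M": 0.8}, combined)               # ~11.60

# Region with high Class IV utilization: its lower-valued use is averaged up.
high_iv_sep = blend_price({"I": 0.2, "III": 0.1, "IV": 0.7}, separate)    # ~11.00
high_iv_comb = blend_price({"I": 0.2, "M": 0.8}, combined)                # ~11.60
```

Consistent with the discussion above, the high-Class III region's blend price falls under a combined class, while the high-Class IV region's blend price rises.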
One academic study that modeled the price volatility impacts of using a weighted average of Class III and IV prices to set Class I prices reported that using the weighted average, rather than the "higher of" the advanced Class III or IV skim milk values, slightly decreases the volatility of farm prices, largely through a more substantial decrease in the volatility of the Class I price. As a result of this reduced volatility, the study estimated that the average Class I price would decline by roughly $0.40 per hundredweight, causing an average farm price decline of $0.09 per hundredweight. The study found that there would be essentially no change in the volatility of other class prices or product prices. These effects are likely to vary by region. As the Class I price is expected to decrease more than the average farm price, the effects of this option could be more significant in regions with high Class I utilization of raw milk. The reduction in farm prices could cause a marginal downward supply adjustment over the long term, the effects of which would also be stronger in high Class I utilization areas. The effects of reduced prices on farm income could be partially offset by increased MILC payments as long as that program remains in existence. Increased MILC payments would raise federal costs, particularly during periods of low prices. However, this option could also help USDA minimize the costs of the price support program by making it less controversial to adjust the tilt between butter and nonfat dry milk purchase prices because these prices would no longer exert as great an influence on the Class I price. AMS officials cautioned, however, that this option could increase the likelihood that manufactured product prices would rise above blend prices, leading to more frequent negative producer price differentials and de-pooling.
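The contrast between the two movers can be sketched numerically. The advanced class values, the Class I differential, and the 70/30 weighting below are illustrative assumptions rather than the actual FMMO pricing formulas.

```python
# Hypothetical sketch of the two Class I "mover" options discussed above.
# Current rules build the Class I price from the "higher of" the advanced
# Class III or IV skim milk values plus a differential; the option would
# substitute a weighted average. All numbers here are assumed for illustration.
from statistics import mean, pstdev

DIFFERENTIAL = 2.20      # assumed Class I differential, $/cwt
W_III, W_IV = 0.7, 0.3   # assumed utilization-based weights

months = [  # (advanced Class III value, advanced Class IV value), $/cwt
    (11.50, 10.20), (13.80, 10.40), (9.90, 10.60), (15.20, 10.50),
]

higher_of = [max(iii, iv) + DIFFERENTIAL for iii, iv in months]
weighted = [W_III * iii + W_IV * iv + DIFFERENTIAL for iii, iv in months]

# A weighted average can never exceed the "higher of" the two values, so the
# average Class I price falls; averaging also damps month-to-month swings.
print(mean(higher_of) > mean(weighted))       # True
print(pstdev(weighted) < pstdev(higher_of))   # True
```

This mirrors the study's direction of effects: a lower average Class I price alongside reduced Class I price volatility, though the magnitudes here are arbitrary.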
The officials noted that the purpose of implementing the "higher of" the advanced Class III or IV skim milk values provision was to reduce the frequency of negative producer price differentials. They said that implementing an option that reduces blend prices makes it more likely that the value of one of the manufacturing class prices will rise above the blend price. The officials also indicated that consumers may benefit from lower prices to the extent that the Class I and blend price decreases are transmitted through the milk marketing chain. One AMS official questioned the extent to which this option would reduce the volatility of Class I prices, noting that since January 2000 the volatility of Class I prices would have been about the same under a weighted average mover.

Another set of options to modify FMMOs involves changing their administration. Pooling provisions could be tightened by increasing the amount of milk that must be shipped to an order to qualify for that order's blend price or by placing restrictions on de-pooling. Alternatively, federal order reform could be reconsidered by splitting up some of the consolidated orders. Finally, USDA may be able to shorten the time between a hearing request and implementation of a final decision.

Tighten Pooling Provisions with Increased Minimum Shipment Requirements

Federal order reforms relaxed the pooling provisions of many orders, which negatively affected some farmers by diluting their Class I utilization rates and lowering their blend prices as increased amounts of distant milk were pooled on their orders. In response, participants in some orders called for hearings to tighten their pooling provisions, often by increasing the minimum amount of milk that must be delivered to processing plants participating in their pool to qualify for the blend price.
With the end of the Western Order, the concern that some milk formerly pooled on that order could be pooled on the Upper Midwest Order led two groups of dairy cooperatives operating there to request a hearing to tighten the order's pooling provisions. According to some AMS officials, restricting the pooling of raw milk through increased shipping requirements would not have a significant national impact. However, they said there could be mixed regional effects on farm income and production. Farmers in areas seeking to pool milk to other orders would generally see a negative impact on their farm incomes because they would incur greater transportation costs trying to share in the value of another order's pool and would, therefore, pool less milk in other orders. However, farmers in the receiving order could see an increase in their farm income because with less milk pooled from outside the order, their pool would retain more of its value and they would receive higher blend prices. In each case, there could be localized production effects depending upon whether farmers experience an increase or a decrease in their farm income. The AMS officials stated that tightening pooling provisions would have minimal effects on price volatility because the tighter provisions would not change overall supply and demand conditions. However, the officials also indicated that the economic efficiency of the FMMO system would increase to the extent that reducing the amount of milk pooled on distant orders would reduce the amount of money spent on transporting milk. Because impacts on national production levels are likely to be limited, impacts on federal costs would be minimal.

Tighten Pooling Provisions with Restrictions on De-Pooling

A second option for tightening federal order pooling provisions is to place additional restrictions on those who choose to de-pool.
As noted earlier, price volatility leading to negative producer price differentials and de-pooling can negatively affect those who remain in a federal order pool because the overall value of the pool is reduced. Some orders restrict de-pooling by preventing milk handlers who choose to de-pool from re-pooling for a specific period of time. One such restriction recently proposed by cooperatives in the Upper Midwest Order would limit a processor's pooled milk in any month to a specified percentage of that processor's pooled milk in the previous month. Under that restriction, if a processor partially de-pooled in one month, it could only partially re-pool in the subsequent month. If it fully de-pooled, it would have to wait a month before it could re-pool. Restricting de-pooling would have mixed effects on farm income. Because de-pooling allows some processors to pay farmers higher prices, some farmers would be harmed if de-pooling were restricted. Conversely, those farmers who are harmed by de-pooling could benefit if more of the pool value were retained during periods of volatile prices. While nonuniform farm prices do not help to achieve orderly marketing, restricting de-pooling could make this problem worse if it encourages some processors to leave the order system permanently. In that case, the reserve supply of milk for fluid production would shrink, and orders would have to increase minimum shipping requirements for remaining pooled processors and dairy cooperatives. If restricted de-pooling actually caused fewer processors to be associated with the federal order system, volatility in fluid milk prices could increase because with less milk available in reserve to supply the Class I market, seasonal or episodic fluctuations in milk supply and demand could have greater price impacts.
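The month-to-month mechanics of such a percentage cap can be sketched as follows. The 125 percent limit is an assumption for illustration; actual proposed percentages would be set in an order's pooling provisions.

```python
# Minimal sketch of the re-pooling restriction described above: a processor's
# pooled volume in any month is limited to a set percentage of the volume it
# pooled the previous month. The 125% cap is an assumed, hypothetical value.

CAP = 1.25  # assumed: may pool at most 125% of last month's pooled volume

def allowed_pool(prev_pooled_cwt: float, requested_cwt: float) -> float:
    """Volume (cwt) a processor may pool this month under the cap."""
    return min(requested_cwt, CAP * prev_pooled_cwt)

# A processor normally pooling 1,000 cwt partially de-pools in a volatile
# month, then tries to re-pool fully; the cap forces a gradual return.
history = [1000.0]
for requested in [400.0, 1000.0, 1000.0]:
    history.append(allowed_pool(history[-1], requested))

print(history)  # [1000.0, 400.0, 500.0, 625.0]
```

The sketch shows why partial de-pooling permits only partial re-pooling the next month: each month's ceiling is anchored to the reduced prior-month volume.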
However, an AMS official indicated that in general processors benefit over the long term from being pooled, and so restrictions on de-pooling would not necessarily decrease the supply of milk available for fluid milk products. If the supply of milk available for fluid milk products did not decrease, then the volatility of fluid milk prices would not increase. The effects of restricted de-pooling on other policy considerations, such as federal costs or consumer prices, are unclear.

Split Up Consolidated Federal Orders

A third option to revise the administration of FMMOs is to split up some of the consolidated orders where the combination of orders with different utilization rates of the different classes of milk has created problems. Many industry, academic, and government sources stated that federal order consolidation has had some significant regional effects, including the demise of the Western Order. One source stated that because of order consolidation, some orders have become so large that they include milk that would not normally be in the milkshed for particular marketing areas. This increases the potential that some farmers' blend prices will decrease with lower utilization rates of raw milk in Class I products. To address this problem, USDA is currently considering a proposal to split up the Southeast Order and create a "Mississippi Valley" Order. Splitting up certain orders could have mixed effects on farm income. In cases in which experts have cited problems with federal order consolidation, the problems developed because two previously existing orders that had largely different Class I utilization rates were combined. With the additional milk pooled under the consolidated order, farmers that had been in the order with the higher Class I utilization rate saw a decrease in their blend price. However, farmers that had been in the order with the lower Class I utilization rate experienced an increase in their blend price.
Therefore, splitting up the consolidated orders in these instances would affect farmers differently depending on the Class I utilization rates of the new orders. AMS officials indicated that splitting up orders would affect the distribution of farm income but would not affect overall production and therefore would have minimal impacts on federal costs. They also noted that splitting up orders could increase the movement of milk because having smaller orders makes farm prices more closely reflect local supply and demand conditions. Therefore, to the extent that smaller orders would increase the blend prices in some areas, this option could create incentives to transport more milk to those areas.

Accelerate USDA's Hearing Process

Some experts suggested that USDA's federal order hearing system is too slow to effectively respond to problems and changing market conditions. The American Farm Bureau study reported that it can take 2 or more years from the time USDA receives a request for a hearing or direction from the Congress before USDA implements the rules of a final decision. The study noted that within this long time frame, either the industry may have struggled under faulty rules or, by the time final rules are effective, industry changes may have occurred rendering the final rule obsolete. For example, a couple of industry sources stated that the dairy industry is changing rapidly, with a number of new products coming onto the market. They indicated that USDA would have difficulty regulating these products because it takes too long to get a decision on the class under which they would be priced. USDA faces a number of challenges in shortening the time between receiving a hearing request and implementing the rules of a final decision, while still ensuring the promulgation of economically sound regulation. The USDA hearing process is set forth by law.
Before issuing or amending marketing orders, the Secretary of Agriculture must conduct a formal on-the-record rulemaking proceeding. USDA must notify the public and provide an opportunity for a public hearing and comments. Before an order regulation or amendment to a milk marketing order can become effective, it must meet certain requirements, including that it be approved by at least two-thirds of the affected dairy farmers in the order, or dairy farmers who produce at least two-thirds of the milk produced in that order. If individual parties do not agree with the USDA decision, they can seek review of that decision in federal court. For example, after the 2000 FMMO reforms, several industry groups obtained an injunction that prevented USDA from implementing new pricing rules that would have established separate Class III and IV butterfat prices. In response, USDA issued a revised decision, but these new rules were not implemented until April 2003. Making informed decisions about changes in complex federal dairy policy can be time-consuming. For example, such decisions require thorough analysis and possibly modeling. In addition, AMS officials indicated that hearing participants are not always ready on time, and keeping a stricter schedule could result in an incomplete hearing record. Furthermore, they stated that it is difficult to compile the hearing transcripts quickly and accurately. However, they noted that USDA has recently begun evaluating its transcript contracts using an approach that considers timeliness and accuracy. The politicized nature of dairy policy makes it difficult to agree on proposed changes to FMMOs. Because FMMO provisions affect cooperatives and processors in different ways, these entities may not always agree on a proposed change. Moreover, given regional differences in production and utilization, farmers in different regions may not agree on changes in federal dairy policy.
AMS officials said that delays are also caused by the lack of available judges and attorneys who deal with milk pricing issues. The officials said that increasing the speed of the decision-making process is likely to increase federal costs because more of these resources would be required. USDA officials indicated that they do not believe the hearing process inhibits the ability of FMMOs to respond to changing market conditions or the marketing of new dairy products.

Rather than trying to reform the FMMO system, some dairy experts have considered the possibility of eliminating FMMOs and thus the classified pricing system. To the extent that manufactured product prices stay above the level of the price support program, market forces would set prices for all uses of milk. In a 1988 report on FMMOs, we found that the production and marketing conditions used to justify federally guaranteed milk prices under marketing orders no longer existed because most milk being produced is now Grade A and is eligible to serve the fluid milk market during periods of supply and demand imbalance. Also, our study noted that improvements in refrigeration and the transportation system have made it less expensive to rely on milk supplies from other markets. Further, we reported that the differences in Class I differentials—which were, in part, intended to represent the costs of producing and transporting milk from areas with a surplus to areas with a deficit—actually bear little relationship to differences in either production or transportation costs for milk, thereby providing incentives for overproduction in certain regions. A 2002 University of Wisconsin study argued that this overproduction has hurt farmers in areas with low Class I utilization through an overall reduction in the price of milk used for manufacturing purposes.
Academic and USDA studies have generally concluded that without the classified prices established by FMMOs, fluid milk processors would likely pay lower average prices to farmers, which would decrease farm income in high fluid milk utilization areas, especially in the short term. Estimates of impacts on farm prices varied among studies. For example, the American Farm Bureau study estimated that average farm prices for raw milk would fall by about $0.50 per hundredweight during the first couple of years following federal order elimination. Another study, published by USDA, estimated that eliminating the federal order system would decrease Class I prices by an average of $0.95 per hundredweight over the period from 2002 through 2007. Further, the 2002 University of Wisconsin study estimated that farm prices would decrease by around $0.05 to $0.10 per hundredweight. AMS officials indicated that farmers would likely reduce production to the extent they receive lower average farm prices. The effects of FMMO elimination could be different based on farm size. Some of the academic and industry sources we contacted noted that farmers are at a disadvantage in terms of market power within the dairy industry. Without the pooling of milk proceeds and the payment of uniform blend prices, larger farms would have increasing incentives to establish contracts directly with processors, and processors would increase their efforts to procure milk directly from larger farms closer to their own plants. Smaller and more distant farms could be more likely to be bypassed. Also, to the extent that dairy cooperatives are unable to cover the costs of balancing and other services they provide, processors may be able to deflect the costs of these operations back to farmers. Studies also indicated that the magnitude of these effects could vary by region.
Without classified pricing, the prices of raw milk used in fluid milk products are likely to fall, while the prices of raw milk used in manufactured products are likely to rise. As a result, farmers in regions with higher utilization of raw milk for fluid purposes, such as the Northeast, would be worse off without classified pricing, while farmers in regions with high utilization of raw milk for manufacturing purposes, such as the Upper Midwest, could be better off without classified pricing. For example, the American Farm Bureau study reported that states with less than 20 percent fluid utilization of raw milk would have higher average farm prices with the elimination of federal orders, while those states with fluid utilization of raw milk in excess of 35 percent would have higher farm prices with the federal order system in place. In the short term, to the extent that lower farm prices paid for raw milk used in fluid products are passed on to consumers, fluid milk consumption could marginally increase. The American Farm Bureau study estimated a 2.5 percent increase in fluid milk demand, while the USDA study estimated a 2 percent increase. On the other hand, the University of Wisconsin study reported that the combination of less milk production and more fluid milk consumption would reduce the amount of raw milk available for manufactured products and increase manufactured product prices accordingly. To the extent that manufactured product prices increase, consumers may buy less of these products. Over the long term, increased prices for milk used in manufactured products could limit reductions in both farm income and production resulting from elimination of FMMOs and classified pricing. In fact, the American Farm Bureau study found that once a supply adjustment occurs, average milk prices would return to levels similar to those prior to FMMO elimination.
However, to the extent that farmers in areas with high manufacturing use experience higher prices for their milk, the incentive to produce more milk could limit potential increases in manufacturing product prices. In the end, some decline in production could be expected over the long term because the overproduction incentive resulting from the classified pricing system would be removed. Eliminating FMMOs and classified pricing could also affect other federal dairy policies and, therefore, affect federal costs. If MILC payments were still based on the relationship between what fluid milk processors pay to acquire milk in Boston and a target price of $16.94, MILC payments would likely increase in size and frequency. At the same time, elimination of FMMOs could decrease federal costs related to the price support program because an increase in manufactured dairy product prices resulting from eliminating classified pricing could reduce the need for dairy commodity purchases by the CCC. However, it is unclear whether this effect would be large enough to offset additional payments under the MILC program. Some AMS officials also indicated that farmers could experience increased price volatility without FMMOs. In the absence of minimum class prices, greater price volatility could result, in part, from seasonal production variation or short-term factors, such as holidays or weather events. Further, while some sources questioned the extent to which a state regulatory system, such as California’s, could continue to exist in the absence of the federal classified pricing system, others indicated that state regulation of milk could increase if the federal system were eliminated. A couple of industry sources that we contacted indicated that an increase in the number of states that regulate milk could make it more difficult for them to do business and would be a less efficient system of regulation. 
On the other hand, eliminating FMMOs and classified pricing could also provide greater incentives for product innovation. Without classified pricing, the market would price products more openly based on supply and demand and would increase the incentives for processors to develop alternative dairy products. To the extent that alternative products generate new demand for milk, this innovation could benefit farmers. For example, a recent study by researchers at Cornell University on the assignment of new products under a classified pricing system found significant difficulties. The study reported that the assignment of a new product to a higher-priced class increases farm income in the short run; however, the incentive to increase production provided by the use of raw milk in this higher-priced class and reduced demand for raw milk stemming from these higher prices can offset farm income gains in the long run. Furthermore, the study found that whether the new product detracts from sales of existing fluid milk products could also affect whether assignment to a higher-priced class increases net revenues to farmers. One USDA representative stated that a number of new products, such as low-carbohydrate milk beverages, have recently entered the market. In some cases, according to the representative, these products are intended to compete with Class I products but are formulated to avoid regulation as Class I products. Processors seek to avoid having these products regulated as Class I products because they would then be required to pay more for raw milk. Conversely, farmers want these products classified as Class I products so that their raw milk used in these items will be priced at the higher level. Some sources suggested that classified pricing could be eliminated without eliminating FMMOs altogether. In this case, FMMOs might continue to perform functions such as pooling revenue, auditing, verifying weights and milk components, and collecting statistical information. 
A few dairy experts indicated that these particular aspects of the FMMO system benefit the dairy industry. While some of these functions might be picked up by the private sector if FMMOs were eliminated, they would come at a cost to dairy farmers. Retaining FMMOs while eliminating classified pricing would probably lessen the impacts of deregulating raw milk prices but would be unlikely to change the direction of most effects or who benefits. For example, while farm income would still fall without classified pricing, continuing to pool revenues through FMMOs, if possible, could help cooperatives negotiate over-order premiums because pooling could help cooperatives maintain their market power relative to processors. Thus, farm income might fall by less than it would if FMMOs were eliminated entirely. Moreover, by continuing to pool revenues, retaining FMMOs could limit increases in price volatility resulting from the elimination of classified pricing. In addition, the orders would continue to aim to ensure equitable treatment for producers and processors. Maintaining orders is not likely to change the fact that farmers in regions with high fluid milk utilization would experience greater reductions in farm income from eliminating classified pricing than farmers in regions with high manufacturing utilization. Production would still adjust downward in response to lower milk prices, and retail fluid milk prices would also decrease to the extent that lower prices for raw milk used in fluid milk products are passed on to the consumer. However, to the extent that cooperatives are able to maintain higher over-order premiums by retaining FMMOs, these production and consumer price effects might be less than they would be if orders were eliminated entirely. Effects on federal costs would still be mixed. 
Dairy experts have raised several concerns about the price support program in recent years, including that the support level is too low to adequately support farmers; that the program provides incentives to overproduce milk and certain commodities purchased by the CCC; that USDA has not managed the program to maintain the established support price during periods of low market prices; that there are additional costs of selling dairy products to the government, which diminish the effectiveness of the support price; and that the program stifles innovation in the industry. Accordingly, a number of options have been discussed to modify the price support program, including raising the overall level of the support price (and thus the related commodity purchase prices), making administrative changes such as allowing the CCC to purchase a wider range of dairy products, and eliminating the program altogether. Figure 37 shows the effects of various options to change the dairy price support program under low- and high-price scenarios over the short and long terms.

One option for modifying the dairy price support program is to raise the support price and the related commodity purchase prices. Many dairy experts indicated that the support price has fallen below the costs of production for most farmers and, therefore, is not providing an effective safety net during periods of low prices. Additionally, sources cited the reduction in the support price as a factor in increasingly volatile milk prices. For example, one academician we contacted stated that recent volatility in milk prices has resulted from the virtual elimination of the price support system as an effective price floor during periods of low milk prices.
The price support program also worked to reduce price volatility during periods of high milk prices by releasing CCC stocks of purchased dairy commodities when market prices reached 110 percent of the support price. However, when the program does not purchase sufficient quantities of manufactured dairy products, it cannot perform this balancing function. According to an economist with USDA's Farm Service Agency (FSA), raising the support price would increase farm income and, thus, raw milk production. The economist indicated that this option, by raising the floor for milk prices, would also reduce price volatility; however, increased purchases of dairy products would mean higher federal costs for the price support program. Similarly, the American Farm Bureau study found that while raising the support price would reduce price volatility, it could create a situation in which the CCC purchases surplus dairy products in most years. The FSA economist noted that these federal costs might be offset, at least in part, by a reduction in payments under the MILC program. The economist also said that consumer costs would likely be higher, on average, because increasing the support price would limit the fall of prices for manufactured products. The FSA economist also noted that increasing the support price would decrease the economic efficiency of federal dairy policies, particularly to the extent that a higher support price stimulates increased production. Under this scenario, increased production would represent an allocative inefficiency because resources would go into producing milk that is not needed to supply the market. Some sources cited examples of how high support prices under the program led to misallocation of resources into surplus milk production.
For example, from 1977 to 1981, the support price for Grade A milk rose from $8.26 to $13.10 per hundredweight, and annual government expenditures on dairy price supports increased from a few hundred million dollars to over $2 billion. A number of options have been proposed or discussed to change the administration of the dairy price support program. These options include allowing the CCC to purchase a wider range of products, adjusting commodity purchase prices based on market conditions, and setting commodity purchase prices to reflect cost differences between selling to the CCC and selling in the marketplace.

Allow the CCC to Purchase a Wider Range of Products

One potential modification to the administration of the price support program would be to allow the CCC to purchase a wider range of products than butter, cheese, and nonfat dry milk. Some dairy experts and studies indicated that by focusing on the purchase of a few specific commodities, the price support program distorts the market by providing incentives to overproduce these commodities, while at the same time dampening incentives for innovation in the dairy industry. Manufacturers that develop innovative products incur more risk because they will not be able to sell their products to the government if they cannot obtain a market price high enough to cover their costs. One cooperative representative said that while nonfat dry milk contains protein and calcium—both valuable components that could be used in other products, such as protein bars—manufacturers continue to produce nonfat dry milk in excess quantities because that is what the government is buying. A number of other sources, including industry representatives, academicians, and a report by the International Trade Commission, noted that by purchasing nonfat dry milk, the price support program may be impeding development of a domestic milk protein concentrate industry by creating disincentives to shift raw milk supplies to innovative products.
USDA officials cautioned that in order for a product to function well as a price support product, it must (1) represent a major use of milk; (2) have enough extra capacity to absorb a substantial amount of milk; (3) be storable for long periods; and (4) have an active, liquid wholesale market. Given these conditions, it is questionable whether some alternative products, such as protein bars, would be effective as price support products. Furthermore, the officials argued that requiring the price support program to incur the risk of product innovation through this approach would alter the fundamental purpose of the program—supporting farm prices. If the CCC were to purchase a wider range of products, manufacturers would have greater incentives to use milk in alternative ways because the price support program would decrease the risks of trying to produce and market a greater number of products. Consumers could benefit as innovative products gained easier access to the market, causing consumer prices to fall. However, one FSA economist stated that this option would greatly increase the complexity of the price support program, potentially increase federal costs, and require new legislation. Moreover, he noted that this change would require close coordination between trade policies and the price support program. For example, without tariff-rate quotas on some products such as milk protein concentrates, the CCC could end up supporting additional imports if the purchase prices were set too high. The economist stated that this option could reduce production of nonfat dry milk but would be less likely to affect price volatility.

Adjust Commodity Purchase Prices Based on Market Conditions

A second potential modification would be to adjust—or tilt—commodity purchase prices based on market conditions.
Some dairy experts, as well as academic studies, reported that because USDA does not tilt CCC purchase prices frequently enough to maintain a balance between butter and nonfat dry milk purchase prices that is based on current economic conditions, the price support program has distorted the market with unclear price signals and induced surplus production of certain goods (notably nonfat dry milk). Therefore, some experts indicated that tilting prices based on established criteria would be a better approach. For example, one dairy processor recommended changing the balance of butter and nonfat dry milk purchase prices automatically if the ratio of CCC purchases of butter and nonfat dry milk falls outside a certain range. Similarly, the American Farm Bureau study suggested that to achieve the support program’s objectives without distorting the market and increasing government costs, changes to commodity purchase prices should be based on market conditions so that they would not be subject to political pressure. Basing the tilt of commodity purchase prices on market conditions would increase the economic efficiency of the price support program by reducing the price distortions that lead to surplus production of goods that are not required to supply the market. Falling market prices for a particular commodity suggest that the quantity supplied, at least temporarily, exceeds the quantity demanded. If the CCC continues to buy the commodity in significant amounts while the market price remains low, it distorts the market by providing an incentive to produce that commodity purely for the purpose of selling it to the government. This incentive not only delays a market response to the lower commodity price but also prevents that milk from going toward a higher-valued use.
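The processor's automatic-tilt recommendation above amounts to a simple trigger rule. The sketch below is hypothetical: the band bounds are illustrative assumptions, not figures from any actual proposal.

```python
def tilt_needed(butter_purchases_lbs, nfdm_purchases_lbs, low=0.25, high=4.0):
    """Hypothetical automatic-tilt trigger: signal that the butter and
    nonfat dry milk purchase prices should be rebalanced when the ratio
    of CCC butter purchases to nonfat dry milk purchases leaves a band.
    The band bounds (low, high) are illustrative assumptions."""
    if nfdm_purchases_lbs == 0:
        return butter_purchases_lbs > 0  # all purchases on one side
    ratio = butter_purchases_lbs / nfdm_purchases_lbs
    return ratio < low or ratio > high
```

A lopsided purchase pattern — for example, heavy nonfat dry milk purchases with almost no butter purchases — would trip the trigger and call for a tilt, without waiting for a discretionary decision.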
Moreover, a study by researchers at the University of Wisconsin found that tilts make it more likely that fluid milk prices will be driven by cheese prices because the price support program would no longer be supporting the manufacturing prices of Class IV products above market levels. This study reported that tightening the relationship between cheese prices and Class I prices improves market signals to dairy farmers because nationally, as noted earlier, a greater percentage of raw milk is used in Class III products (cheese) than in Class IV products (butter and nonfat dry milk). According to one FSA economist, lowering the purchase price of a particular commodity in the short term could reduce farm income; however, by encouraging production levels to respond more quickly to low price periods, tilting CCC purchase prices to reflect market conditions could maintain higher farm prices over the long term. The economist also indicated that this change could decrease federal costs for purchasing and storing dairy products. The consumer price effects of market-based tilts are less clear. For example, if market butter prices are high and market nonfat dry milk prices are low, a market-based response would indicate that USDA should raise the CCC butter price and lower the CCC nonfat dry milk price. However, further price adjustments could result in the CCC purchase price for butter exceeding the market price, which could trigger CCC purchases of butter and raise the market price even higher. In such cases, assuming that price changes at the wholesale level are passed on through the retail level, consumers would benefit from lower prices on some commodities, while potentially experiencing higher prices on others. The net effect to consumers would then depend on the relative price changes for these products and the quantities of each that were purchased. 
Reflect Cost Differences in Selling to the CCC

A third option to change the administration of the price support program would be to set CCC purchase prices to reflect cost differences in selling to the CCC versus selling in the marketplace. In recent years market prices have fallen below the support price level in some months. For instance, between July 2002 and June 2003, the Class III milk price was below the $9.90 target level in 9 months. Although FSA officials indicated that USDA is required to set product purchase prices in such a way that only the average annual farm milk price, not the monthly price, is at the support price, the Farm Security and Rural Investment Act of 2002 (2002 Farm Bill) simply states that the price of milk should be supported at $9.90 per hundredweight. Concerns over USDA’s management of the price support program led to language in the Consolidated Appropriations Act, 2004 requiring the Secretary of Agriculture to more diligently support the farm price of milk. Some dairy experts and studies indicated that one reason that market prices sometimes fall below the CCC support price is that there are additional costs of selling to the government that are not reflected in the commodity purchase prices. Therefore, the effective support price is actually below $9.90 because with higher costs of selling to the government, the market price has to fall below the CCC purchase price before processors are better off selling to the CCC than to the market. Some of these additional costs include packaging for longer-term storage, meeting stricter grading standards, and a time lag between when the product is made and when it is approved for sale to the federal government. An FSA economist estimated that these cost differences amount to about $0.04 to $0.05 per pound for cheese.
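The wedge between the nominal and effective support price can be illustrated with simple arithmetic. The cheese purchase price below is hypothetical; the cost figure is the midpoint of the $0.04 to $0.05 per pound estimate cited above.

```python
CCC_CHEESE_PRICE = 1.13  # $/lb, hypothetical CCC block cheese purchase price
EXTRA_COST = 0.045       # $/lb, midpoint of the estimated added cost of CCC sales

def better_off_selling_to_ccc(market_price):
    """A cheese maker prefers the CCC only after the market price falls
    below the CCC purchase price net of the extra selling costs, so the
    effective floor sits below the nominal purchase price."""
    return market_price < CCC_CHEESE_PRICE - EXTRA_COST
```

With these illustrative numbers, the market price must fall below roughly $1.085 per pound — not $1.13 — before selling to the government becomes attractive, which is how the effective support price ends up below the nominal level.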
One option for reflecting the differences in cost between selling dairy products to the government and selling in the marketplace would be to raise the CCC purchase prices of these products to reflect the additional costs of manufacturing product for sale to the government. This would help ensure that manufacturers receive a price for their products that allows them to return at least $9.90 per hundredweight (the support price) to farmers. One FSA economist indicated that this change could cause farmers to marginally increase production, leading to increased CCC purchases. Increased CCC purchases, in addition to higher purchase prices, would increase the federal costs of the price support program and, at the same time, higher manufactured product prices could translate into higher consumer prices. However, it is difficult to estimate the added costs of selling to the CCC because these costs are likely to vary widely among different manufacturers. Thus, raising the product purchase prices could provide unwarranted benefits to some manufacturers while still being insufficient to induce sales to the government by some others. In addition, according to one academic study, there is no clear evidence that higher selling costs are the major barrier in selling to the government. Some experts have put forth the possibility that fixed contracts between dairy product manufacturers and their buyers may prevent manufacturers from selling to the government. An alternative way to reflect the differences in cost between selling dairy products to the government and selling them in the marketplace would be to require the CCC to alter product specifications and payment terms to conform to those used on the Chicago Mercantile Exchange (the Exchange). An FSA economist stated that some changes to product specifications are already being considered and have been put out for comment to the dairy industry.
The economist stated that while these proposed changes will help bring CCC product specifications into greater conformance with market standards, some differences would remain. Most notably, the CCC requires that products be storable for up to 3 years, a longer period than is generally required in the market. To the extent that this proposal reduces additional costs of selling manufactured dairy products to the CCC by more closely aligning product specifications with market standards, it could induce greater manufactured product sales to the CCC and would keep market prices higher. Therefore, aligning these specifications could increase farm income, provide a marginal production stimulus, and raise federal costs related to additional CCC purchases. Also, this option would not necessarily prevent the Exchange prices from falling below CCC purchase prices because if there are barriers other than costs (such as contractual obligations) that prevent manufacturers from selling to the government, these barriers would still exist.

Another policy option would be to eliminate the price support program altogether and rely on alternatives available to farmers to assist them in managing the risk of low and/or highly volatile prices for their milk. A number of dairy experts have argued that the support price has been set so low, at $9.90 per hundredweight, that it is not having significant impacts. Thus, some academicians, as well as USDA, have studied the potential effects of eliminating the program. In the short term, eliminating the program would have a greater impact if market prices were at or below the level of the support price. The May 2004 USDA study estimated that eliminating the price support program would cause wholesale prices of nonfat dry milk to decline by 15 to 20 percent over the first couple of years. For subsequent years, the study estimated that prices would recover somewhat, to 10 percent below baseline levels.
Further, the study estimated that the decline in nonfat dry milk prices would encourage diversion of this milk to alternative uses, leading to lower prices for these alternative uses. Generally, lower prices would reduce farm income and potentially lead to lower consumer prices. Farmers would likely respond to these lower prices by producing less milk. According to an FSA economist, eliminating the price support program would increase economic efficiency by allowing market price signals to be transmitted more clearly. The economist also stated that volatility in milk prices would increase. He noted that the combination of increased volatility and reduced farm income would force less efficient farmers to exit production. This exit would increase the economic efficiency of national resource allocation by enhancing current shifts in production to more efficient dairy farms. He added that in the absence of a price support program, new entrants to dairy production would likely be larger, more efficient operations. The FSA economist also said that eliminating the price support program would provide a cost savings for the federal government because the CCC would no longer have to purchase or store dairy commodities. These savings would be greater during periods when farm prices are low because when they are high, even if the program remains in effect, the government purchases fewer dairy products and incurs less cost. However, when farm milk prices are low, savings from eliminating the price support program could be partially offset by increased payments under the MILC program for as long as that program continues. Over the long term, reduced production would mitigate some of the impacts of eliminating the price support program, because reduced supplies lead to increased prices (assuming demand stays the same). 
However, the USDA study estimated that even with the positive price effects resulting from reduced production, farm income would still decrease by approximately $3.5 billion over the long term. Additionally, without CCC purchases of dairy commodities, USDA would be unable to balance high market prices by releasing these stocks, thereby contributing to increased price volatility over the long term. The MILC program has benefited many smaller dairy farmers during the most recent period of low farm prices by providing them income support. However, by providing support to some farmers who otherwise might have exited the dairy industry, the program has slowed the normal downward supply response to lower farm prices and kept aggregate production higher during this period than it otherwise would have been. The MILC program is scheduled to expire at the end of fiscal year 2005. If it is not extended in some form, aggregate production is likely to respond more rapidly to future low-price periods because smaller farmers are likely to exit production at greater rates than they did during the most recent period of low farm prices. With this more rapid production response, farm prices would likely start rising again sooner than in the recent past when the MILC program has been in place. However, although production levels in the short term would likely decrease more during low-price periods, in the long term aggregate production might not decrease substantially because higher average farm prices would stimulate additional production from the dairy farmers that stay in business. In addition, by allowing the MILC program to expire, the government can avoid the costs of the payments to farmers that the program provides. There are several options to maintain the benefits of the MILC program to some dairy farmers by extending it beyond 2005. One option is to extend MILC at its current target price and eligible production limit. 
A second option, a proposal introduced in the Senate in the 108th Congress, would extend MILC through fiscal year 2007 with an increase in the eligible production cap from 2.4 million pounds to 4.8 million pounds. A third option would extend MILC with a lower target price but a higher or no eligible production limit. Figure 38 shows the effects of various policy options to extend the MILC program under low- and high-price scenarios over the short and long terms.

A study by the University of Missouri estimated that compared to a baseline estimate in which the MILC program expires in 2005, extending the MILC program through 2012 in its current form would result in greater milk production and lower farm prices. Production was estimated to be 0.8 billion pounds higher in 2006 and to average 1.4 billion pounds per year higher from 2008 through 2012. The estimated price difference was also greater in the longer term than initially. Greater production and lower farm prices are consistent with the expectation that extending the MILC program would keep some smaller dairy farmers in the industry who otherwise might exit after 2005 if the MILC program is allowed to expire then. This study also estimated that due to the MILC payments, farm income would increase if the program were extended, despite lower farm prices. Initially, in 2006, the estimated increase in farm income was $0.43 per hundredweight, and even in 2012, when the estimated farm price was $0.50 per hundredweight below what it would be if MILC expires after 2005, the estimated increase in farm income was $0.08 per hundredweight. In addition, extending MILC in its current form would increase federal costs. This study estimated that, on average, annual government costs from 2006 through 2012 would be about $1.2 billion higher than if the MILC program expires after 2005.
According to an FSA economist, lower farm prices resulting from extending MILC in its current form could be passed on as lower retail prices for consumers. However, the economist indicated that the effects on price volatility are less clear. He added that the extension of the MILC program as currently designed would continue to favor smaller farmers over larger farmers, because a greater percentage of smaller farmers’ production is eligible for MILC payments. Thus, in general, the MILC program would continue to benefit farmers in the eastern and upper midwestern states over farmers in the western states. Therefore, for a given level of milk production and to the extent that larger farmers in the West are more efficient than smaller farmers in the East, extension of the MILC program as currently designed would reduce the economic efficiency of the allocation of dairy production resources nationally compared to the allocation that would occur if MILC expires. An alternative proposal has been introduced in the Senate in the 108th Congress that would extend the MILC program at its current target price but increase the cap on eligible production. The University of Missouri study examined the impacts of a similar option in which the cap on eligible production is removed in 2003 and MILC is extended through 2012 without a cap. This study estimated that extending MILC without a cap would result in a larger increase in production and a larger decrease in farm prices than extending MILC with a cap. For example, production in 2006 was estimated to be 2.3 billion pounds per year higher than if MILC is allowed to expire or 1.5 billion pounds greater than if MILC is continued in its current form. 
Greater production and lower farm prices are consistent with the expectation that making all milk eligible for MILC payments would provide farmers who might otherwise exit the industry after 2005 (if MILC is allowed to expire) an even greater incentive not to leave than is provided by extending MILC in its current form. As with the previous option, this study estimated that due to MILC payments, farm income would increase if MILC were extended without a cap despite lower farm prices. Initially, in 2006, the estimated increase in farm income would be $0.63 per hundredweight, and even in 2012, when the farm price was estimated to be $1.05 per hundredweight below what it would be if MILC expires after 2005, the estimated increase would be $0.18 per hundredweight. Moreover, this study estimated that farm income would be higher with this option than with extending MILC in its current form because the additional payments to farmers due to eliminating the cap would more than offset the additional farm price reduction resulting from greater production. Extending MILC without a cap would increase federal costs even more than extending MILC in its current form. This study estimated that, on average, annual government costs from 2006 through 2012 would be about $2.5 billion higher than if the MILC program expires after 2005, more than $1 billion per year more than was estimated if MILC is extended in its current form. An FSA economist indicated that consumers would likely benefit from reduced retail prices under this scenario. Additionally, without the cap on eligible production levels, the equity concerns about MILC would be eliminated because the program would no longer favor smaller farmers over larger farmers. 
However, the FSA economist also indicated that in addition to the inefficiencies of increased surplus milk production, this alternative would reduce the economic efficiency of milk production compared to allowing MILC to expire after 2005 to the extent that it provided incentives for production by smaller producers with higher costs of production. A third alternative, considered in the American Farm Bureau study, is to extend MILC with a lower target price and a higher or no cap on eligible production. The study argued that the $16.94 per hundredweight target price is too high because MILC payments are triggered any time the Class I mover is less than $13.69 per hundredweight; at this target price, MILC payments could be expected in most months given that from 1990 through 1999 the Class I mover averaged $12.28 per hundredweight and was below the $13.69 threshold in 104 of 120 months. The analysis concluded that a target level for MILC payments of a Class I mover at $12.00 per hundredweight or lower would help to make the program more market-based. Further, expanding the cap or eliminating it completely would help to make the program more equitable among farmers of different sizes. The effects of extending MILC in the absence of a cap on eligible production depend significantly on the level of the target price. If the target price were set too high, it would stimulate surplus production. During periods of low prices, the government would have to contribute additional MILC payments to counteract the effects of lower prices. Additional production would also decrease manufactured product prices, potentially increasing the costs of the dairy price support program because these prices would be more likely to reach the support level. At the same time, these lower prices could provide benefits for consumers.
While a lower target price without a cap would treat farmers with different-sized herds the same way, it might not provide enough payments to keep farmers in areas with high costs of production from exiting during periods of low prices. Therefore, this option could increase the economic efficiency of national dairy production, while also accelerating shifts in dairy production to the West. Some dairy experts indicated that a single new target price deficiency payment program could be a better alternative to maintaining both the dairy price support and MILC programs. USDA and other dairy experts stated that having both the price support and MILC programs is problematic, particularly during periods of low prices. As noted in the American Farm Bureau study, the idea behind establishing a target price deficiency payment program is to allow markets to work to clear dairy products at market prices and then, when the market price is below the target price, to pay farmers based on the difference between these prices. But with both dairy programs in force, MILC maintains and encourages surplus milk production that must then be purchased by the CCC under the price support program. This market distortion adds to the costs of both programs. Under a new target price deficiency payment program, dairy farmers would receive a payment when the market price of Class III milk products drops below a specific target level. Thus, the program would establish a floor on farm income through a countercyclical payment to dairy farmers instead of a floor on manufactured product prices with purchases by the CCC. Instead of providing an incentive for manufacturers to continue producing a particular product that the CCC is purchasing, the program would allow the market to clear a wholesale product price and then pay farmers the difference if the price were too low.
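The deficiency-payment mechanics described above reduce to a single calculation. The sketch below assumes no cap on eligible production; the target price, market price, and production figures in the example are hypothetical.

```python
def deficiency_payment(target_price, class_iii_price, cwt_marketed):
    """Countercyclical payment sketched from the description above: the
    per-hundredweight shortfall of the Class III market price below the
    target, times the quantity of milk marketed (no cap assumed)."""
    shortfall = max(0.0, target_price - class_iii_price)
    return shortfall * cwt_marketed
```

For example, with a hypothetical $10.50 target and a $9.80 Class III price, a farm marketing 2,000 hundredweight in a month would receive $1,400; when the market price is at or above the target, no payment is made and the market clears on its own.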
This option would provide manufacturers the incentive to shift raw milk supplies to their highest-valued use, further promoting the development of new and innovative products. This option could also potentially reduce federal costs, depending on the level of the target price; one expert estimated that with a target price of $10.50 per hundredweight for Class III milk, the government would have spent $300 million less than under the MILC program since its inception in December 2001. With a target price of $10.00 per hundredweight for Class III milk, these savings would have reached $1.2 billion. Notwithstanding these potential benefits, dairy experts indicated that a new target price deficiency payment program could have its own challenges depending upon how it is designed. In particular, such a program would require some key decisions regarding price level and regional differences, and whether to cap program benefits based on payments or quantities of production. Some dairy experts said that a problem with the target price deficiency payment approach is that it would be hard to determine the appropriate target price without creating distortions in production incentives. If the target price were set too high, it would have the same effect as setting the support price too high: it would lead to excess production by supporting farm income at higher levels than would be available if farmers received market prices. Long-term overproduction would place additional downward pressure on market prices and increase federal costs for the program. If the target price were set at a low level to avoid stimulating production and increasing government costs, it might not maintain adequate support for farm income, and could increase price volatility. Additionally, by influencing domestic production levels, which in turn influence U.S. market prices, the level of the target price can affect the incentives of other countries to export manufactured products to the United States.
However, these incentives are also affected by the export subsidies and lower production costs of some other countries. The difficulty in setting the appropriate target price is exacerbated by regional differences in costs of production. A certain Class III target price might provide adequate support during periods of low prices based on the costs of production in one region, but not in another. However, increasing the target price to provide adequate support for higher-cost regions would not only support production in areas where it is less economically efficient to do so, but would also provide greater benefits to farmers in lower-cost regions. These higher benefits would increase the incentives to overproduce in those areas. Given current trends in the United States, this scenario would encourage the western shift in dairy production. Another challenge in designing a target price deficiency payment program would be to determine whether to cap the program’s benefits either by limiting the payment a farmer could receive or by limiting the quantity of milk production on which a farmer would be eligible to receive payments. The MILC program calls for payments equal to 45 percent of the difference between $16.94 and the Boston Class I per hundredweight price and has a production cap (2.4 million pounds of production per dairy operation each fiscal year). These controls have helped to keep federal costs lower than they otherwise would have been during periods of depressed prices by limiting incentives for overproduction. However, the production cap has targeted the program’s benefits primarily to smaller farmers, raising questions of equity. If a target price deficiency payment program were implemented without any controls, the risk of market distortions and increased federal costs from establishing a target price level that is too high could increase substantially. 
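The MILC payment formula and production cap stated above can be written out directly. The sketch omits the actual program's monthly proration and sign-up details, and the example inputs are hypothetical.

```python
MILC_TARGET = 16.94   # $/cwt Boston Class I target set in the statute
PAYMENT_RATE = 0.45   # MILC pays 45 percent of the shortfall
CAP_LBS = 2_400_000   # eligible production cap per operation per fiscal year

def milc_payment(boston_class_i_price, production_lbs):
    """MILC payment per the formula above: 45 percent of the gap below
    $16.94/cwt, on at most 2.4 million pounds (24,000 cwt) of milk."""
    per_cwt = PAYMENT_RATE * max(0.0, MILC_TARGET - boston_class_i_price)
    eligible_cwt = min(production_lbs, CAP_LBS) / 100.0
    return per_cwt * eligible_cwt
```

At a hypothetical $15.94 Boston Class I price, the rate is $0.45 per hundredweight; a 2.4-million-pound operation collects on all of its milk, while a 10-million-pound operation collects on less than a quarter of its milk — which is how the cap targets benefits to smaller farms.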
At the same time, a couple of researchers noted that establishing the target price deficiency payment program without a production cap could encourage farmers to enhance the efficiency of their dairy operations. With a cap limiting eligible production, farmers have less incentive to adopt new technologies that would increase production. However, whether or not a cap is placed on eligible production, the program is likely to confer benefits to farmers in varying degrees. If the program did not cap eligible production, farmers who could increase their production efficiency with new technology might have more of an incentive to do so. But the farmers who would be most likely to take advantage of this incentive would be larger farmers who may be more efficient, and might have the resources and access to capital to undertake such an investment. Conversely, capping eligible production would target benefits to smaller farmers in the same way as MILC.

To address concerns about the pending expiration of the MILC program and provide additional support to dairy farmers, the National Dairy Equity Act of 2004 (NDEA) has been introduced in the House and the Senate. This proposed legislation would change the federal regulation of milk marketing through the establishment of regional dairy marketing areas in which boards created to administer these areas would set minimum prices that processors would have to pay for raw milk used to make fluid milk products sold in those areas. The NDEA would have an effect similar to that of the MILC program in that it might lead to higher incomes for some dairy farmers. However, concerns have been raised about its impact on farm incomes in some regions, retail fluid milk prices, coordination of milk prices across regions, and existing trade agreements. Figure 39 shows the effects of adopting the NDEA over the short and long terms. The NDEA would create five marketing areas that together would encompass the entire nation.
States in the Northeast, Southern, and Upper Midwest regions would automatically be participating in the marketing area program established by the NDEA upon enactment of the legislation. States in the Intermountain and Pacific regions could become participating states by providing written notice to the Secretary of Agriculture. The NDEA would authorize each region’s board to set an “over-order” minimum price for Class I sales that exceeded the FMMO Class I price in that region, with an initial maximum of $17.50 per hundredweight, subject to approval by farmers within the region in a referendum. Although the boards would have discretion in setting the over-order price, the legislation directs the boards to consider several factors including the balance between production and consumption of milk and milk products in the regulated area; costs of milk production in the regulated area; prevailing price for milk outside the regulated area; purchasing power of the public; and price necessary to yield a reasonable return to an eligible farmer. The NDEA would establish a fund in the U.S. Treasury to carry out the program. During months in which a region’s over-order price exceeded the FMMO Class I minimum price in Boston, processors would be required to pay the Secretary of Agriculture an amount equal to the difference between those two prices, known as the over-order premium, times the quantity of milk purchased for use in Class I products. The Secretary would deposit these amounts into the U.S. Treasury fund, and the fund would make payments to each board, which would distribute the payments to eligible farmers in its region. Each month the board would receive at a minimum an amount equal to the over-order premium times 50 percent of the milk produced in that region.
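The fund flows described above reduce to two calculations — what processors pay in, and the minimum each board takes out. The prices and quantities in the example are hypothetical.

```python
def processor_payment(over_order_price, boston_class_i_price, class_i_cwt):
    """What Class I processors pay into the fund: the over-order premium
    (the excess of the board's price over the Boston FMMO Class I price)
    times the quantity of milk used in Class I products."""
    premium = max(0.0, over_order_price - boston_class_i_price)
    return premium * class_i_cwt

def minimum_board_receipt(over_order_price, boston_class_i_price, region_cwt):
    """Each board receives at least the premium times 50 percent of all
    milk produced in its region, whatever its Class I utilization."""
    premium = max(0.0, over_order_price - boston_class_i_price)
    return premium * 0.5 * region_cwt
```

In a region where less than half of production goes to Class I use — say, 300,000 of 1,000,000 hundredweight — processor payments fall short of the guaranteed minimum, which is why transfers from the CCC would typically be needed to keep the fund whole.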
The proposed legislation would require CCC funds to be transferred to the fund when necessary to allow the fund to make the required payments; according to one analysis, such contributions would typically be necessary because only one of the proposed dairy marketing areas has a Class I utilization rate equal to 50 percent or higher. The NDEA also would require boards to compensate the CCC for any additional costs of CCC purchases of milk products resulting from increases in milk production that exceed the national average growth rate. To manage overproduction of milk that could result from the NDEA, the NDEA would authorize boards to take action, including developing and implementing incentive-based supply management programs. In addition, the NDEA would link participation in the dairy marketing areas with participation in the current MILC program. Farmers who participate in the new program would not be able to continue to receive MILC payments. If states in the Intermountain and Pacific regions chose not to participate in the NDEA program, farmers in those states could continue to receive MILC payments, and the NDEA would extend the authorization period for the MILC program until the end of September 2007. States in the other regions, which would become participants upon enactment of the NDEA, could withdraw their participation. If they did, farmers in those states could also continue their MILC payments through the end of September 2007. Individual farmers in states that participated in the NDEA program could choose not to participate in the new program and would then be able to continue receiving MILC payments. However, those farmers would not be able to extend their payments beyond September 2005 and would not be eligible for subsequent participation in the NDEA program. Although the NDEA may lead to higher incomes for some dairy farmers, academicians and industry participants have raised many concerns about the proposed legislation’s impacts. 
These concerns include regional divisiveness due to lower incomes for dairy farmers in some regions, higher retail prices, reduced coordination of dairy prices across regions, and potential conflict with World Trade Organization (WTO) rules. The NDEA could lead to higher incomes for farmers in participating states when milk prices are relatively low because processors would have to pay the over-order prices set by the boards for Class I milk; however, the total effect on farmers would depend on what happens to the price of milk used for manufacturing purposes as well. To the extent that higher blend prices stemming from higher Class I prices lead farmers to increase milk production, the result could be lower prices for Class III and IV milk both in participating and nonparticipating states. Even if all states participated in the dairy marketing areas, the large differences in Class I utilization rates across regions imply that different regions would be affected differently by the combination of higher Class I prices and lower Class III and IV prices, and farmers in regions with low Class I utilization rates might see a decline in their incomes. If some states do not participate, their farmers would be even worse off because unless these farmers pool milk in states that are included in the marketing areas, they would not receive any benefits of higher Class I prices. In a September 2001 report, we analyzed the inter-regional impacts of various scenarios in which some states were grouped in dairy compacts that functioned like the NDEA’s dairy marketing areas and reported that one effect of compacts was to reduce farm income in noncompact regions. We estimated this effect to be minimal when we examined the impact of the Northeast Interstate Dairy Compact because the six New England states included in that Compact produced only 3 percent of the nation’s milk. 
However, we estimated that the effect was somewhat greater in a scenario in which states producing 27 percent of the nation’s milk supply were included in compacts. To the extent that the NDEA would result in fluid milk processors paying more to buy their milk from farmers, the NDEA would also lead to increases in retail fluid milk prices. In our report on compacts, we reported that several studies concluded that the Northeast Interstate Dairy Compact resulted in higher retail prices for fluid milk in New England, with estimated impacts ranging from $0.03 to $0.20 per gallon. Higher retail prices could have a greater effect on retail sales in upcoming years than occurred in the past, as some dairy experts believe that the demand for fluid milk has become more responsive to price changes, given the increasing number of beverages that are considered substitutes for fluid milk, among other reasons. Consequently, retailers with whom we spoke generally opposed the NDEA. Furthermore, declines in fluid milk sales would cause more milk to be available for manufacturing purposes, which would further depress the prices for Class III and IV milk. Several academicians told us that they believed the NDEA would also create regional distortions because price-setting in each dairy marketing area would be controlled by its board, and prices for raw milk used in fluid products would no longer be closely linked to prices for raw milk used in manufactured products. This would be a major change from the current system in which Class I prices are set based on differentials added to the “higher of” the advanced Class III or IV skim milk values, with the differentials still somewhat reflective of the costs of transporting milk from the Upper Midwest, a key dairy surplus region. Before the 1960s, Class I prices in different orders were not coordinated, and the resulting disorderly marketing system led to the coordinated system that we now have. 
Adopting the decentralized price-setting system of the NDEA risks losing the advantage of more orderly marketing that the coordination of the 1960s brought to the dairy industry. Concerns about whether the NDEA would make U.S. dairy policy less consistent with existing agreements under the WTO arise because of the effects of the NDEA on milk production and, hence, U.S. milk prices. As indicated previously, recent U.S. commitments under the General Agreement on Tariffs and Trade are leaning in the direction of more liberalized trade. To the extent that the NDEA provides a subsidy for U.S. milk production and reduces the prices of manufactured dairy products, the act would reduce the competitiveness of imported products. We identified one study that estimates the effects of the NDEA on milk production, farm prices, and government costs compared to a baseline scenario that did not include the NDEA. This study estimated that the NDEA would increase milk production compared to the baseline by an average of about 7.6 billion pounds per year during the period from 2006 through 2013. The increased production would result in estimated declines in Class III and IV prices from the baseline such that average milk prices would be $1.17 per hundredweight below the baseline estimate. The estimated annual average increase in federal costs for payments to the boards was $1.7 billion. Recent concerns about the effects of imported dairy products, most notably milk protein concentrates, on U.S. dairy prices have highlighted the importance of U.S. trade policy—trade restrictions and subsidy programs—as a foundation for domestic dairy policies. Several policy options related to international trade in dairy products have been suggested. As noted previously, current international trade agreements and ongoing negotiations can have implications for certain policy options that have been suggested. 
These options include (1) increasing trade restrictions, specifically for imports of milk protein concentrates; (2) relaxing trade restrictions; (3) introducing domestic subsidies for products significantly affected by international trade competition, specifically establishing a subsidy program for domestic production of milk protein concentrates; and (4) changing the Dairy Export Incentive Program (DEIP), either by using it more effectively or by eliminating it. Those options that succeed in limiting imports or encouraging exports of manufactured dairy products could support higher farm income, production levels, and consumer prices. These options may also reduce federal costs if higher farm prices reduce costs to the price support program to a greater degree than the cost of these trade options. Figure 40 shows the effects of options to change trade restrictions and export incentives over the short and long terms. One international trade policy that has been proposed is to increase trade restrictions within the constraints of existing international trade agreements. More specifically, a bill entitled the Milk Import Tariff Equity Act has been introduced in the Congress that would impose tariff-rate quotas on dairy protein products such as milk protein concentrates and certain casein products. Import quotas prior to the WTO Uruguay Round Agreement on Agriculture did not cover milk protein concentrates and casein. Therefore, no tariff-rate quota was established for these products after the agreement was implemented. The May 2004 report by the International Trade Commission showed that U.S. imports of some dairy proteins increased significantly from 1998 through 2000 and then declined. Some of these protein imports displaced domestic dairy proteins, particularly those used in making processed cheese products not covered by the Food and Drug Administration’s standards of identity. 
The International Trade Commission report concluded that imports of milk protein concentrates to the United States have the effect of lowering U.S. farm prices either directly or indirectly, depending upon whether U.S. market prices for manufactured dairy products are above or at the level of the support price. If U.S. prices are above the support price, then imports of dairy proteins could directly lower the market prices of nonfat dry milk, butter, and cheese to the extent these proteins can be imported at lower prices than proteins available in the domestic market. In turn, lower product prices could reduce the prices received by U.S. farmers for their raw milk. If U.S. prices are at the support price, then imports of dairy proteins could indirectly affect U.S. market prices. Increasing imports of proteins when U.S. market prices for manufactured dairy products are at the support price will cause the CCC to purchase more nonfat dry milk as this alternative protein source is displaced in the market. Eventually, these increasing stocks could cause USDA to lower the purchase price of nonfat dry milk in an attempt to reduce federal costs. This adjustment would then lower prices for manufactured dairy products, in turn decreasing the prices received by U.S. farmers for their raw milk. Because introducing tariff-rate quotas on milk protein concentrate and casein would likely reduce imports of these products, federal costs due to CCC purchases and storage of nonfat dry milk under the price support program would likely decrease. When U.S. market prices are above the purchase prices established by the price support program, a reduction in imports could maintain prices of some products, such as cheese, at higher levels due to reduced domestic supply of dairy proteins used in these products. Higher product prices could increase farm prices and thus stimulate additional production. 
These effects could be more significant in regions of the United States where raw milk is used to a greater extent in the manufacturing of Class III products. (If U.S. trading partners successfully challenge tariff-rate quotas on dairy proteins, compensation such as increased market access for other products could be required, reducing the benefits to U.S. dairy farmers.) Because production adjustments tend to lag behind price changes, this additional production could delay adjustments to lower market prices in the future. Therefore, to the extent that the exclusion of imports masks market price signals that would exist without the exclusion, this policy option would decrease economic efficiency. Additionally, to the extent that changes in the manufacturing prices of dairy products are passed on through the retail level, consumers could experience higher prices for some products (such as cheese) and lower prices for other products (such as butter). A second option is to relax trade restrictions by reducing or eliminating tariffs on dairy products. Trade restrictions such as tariff-rate quotas support domestic programs such as the FMMO classified pricing system and the dairy price support program by limiting the available supply of dairy products to take advantage of higher U.S. market prices. Unilaterally relaxing U.S. trade restrictions would likely increase imports of manufactured products as foreign producers seek to take advantage of the higher prices available in the U.S. market. Despite higher transportation costs for imported products, some manufacturers might be able to import certain dairy products from U.S. trading partners at prices below U.S. market prices either because those partners provide export subsidies (as the European Union does) or because they have lower milk production costs (as Australia and New Zealand do). 
Relaxing trade restrictions such as tariff-rate quotas is unlikely to increase imports of fluid milk products because of health restrictions, transportation costs, and the perishable nature of these products. Increased imports could put pressure on the CCC to purchase larger quantities of manufactured dairy products, thereby increasing federal costs. At some point these pressures could become unsustainable, leading to a reduction in the support price or the end of the price support program and to a decline in the price of milk used for manufacturing purposes. Moreover, because the price of milk used in fluid milk products is based on the price of milk used in manufactured products, a decline in the prices of manufactured products from increased imports could lower average farm prices. As long as MILC is authorized, these lower prices could trigger additional MILC payments to farmers, further increasing government costs. In the short term, increased imports could cause U.S. prices to fall, resulting in a decline in farm income for U.S. dairy farmers. With reduced farm income, production would also decrease as less efficient farmers exit production. Over the long term, the decline in production could cause farm prices to rebound toward the levels that existed before trade restrictions were relaxed. Similarly, in the short term, lower farm prices are likely to lead to lower consumer prices for fluid milk and other dairy products, but as farm prices rise toward their previous levels due to production decreases, consumer prices for fluid milk and other dairy products would likely rise as well. Economic efficiency will increase with relaxed trade restrictions because such relaxation will allow market price signals to be more visible than with restrictions in place. These signals will lead to increased imports when it is cheaper to substitute increased imports for some domestic dairy production. Additionally, the economic efficiency of U.S. 
dairy production resource allocation could increase with relaxed trade restrictions as the reduction in domestic production is more likely to come from less efficient domestic farmers. Another proposed option is to support the development of domestic casein and milk protein concentrate production. Under the proposed U.S. Dairy Proteins Incentive Program, the CCC would make subsidy payments, on a bid basis, to entities that produce and market dairy proteins from skim milk. The proposed legislation would provide, among other things, that receipt of a payment is contingent upon the end use of the dairy proteins produced; that no applicant receives a payment if the contract submitted for review would undercut domestic prices for milk, nonfat dry milk, or dairy proteins; and that the sale of the dairy proteins represents a new use of domestically produced dairy proteins. This program’s potential impact on domestic dairy protein production depends on the relative profitability of these proteins, which, in turn, depends on production costs and demand. The International Trade Commission’s May 2004 study on milk protein products found that, given disincentives inherent in the price support program and constraints on U.S. demand for dairy proteins other than nonfat dry milk, the profitability of domestic protein production could be limited. For example, the study reported that the price support program creates a disincentive for U.S. processors to produce dairy proteins other than nonfat dry milk because by purchasing nonfat dry milk the price support program reduces the financial risk of manufacturing that product. Processors of other proteins would need to invest in production facilities and then market their product without the benefit of a standing government offer of support. The study found that only under the most favorable conditions (high skim milk protein yield and low variable costs) would it be beneficial for U.S. 
processors to begin producing milk protein concentrate instead of nonfat dry milk. Even then, positive returns were only for milk protein concentrates with protein concentrations above 70 or 80 percent. The classified pricing system could create an impediment to the development of a domestic protein industry depending upon how milk protein concentrate is classified. Based on analysis presented in the International Trade Commission’s report, classification under a higher-valued class would require producers of milk protein concentrate to pay more for their raw milk supplies, thus reducing their profits. In addition, the report noted that since May 2002, the CCC has had a program to provide incentives to convert nonfat dry milk held in its stocks to casein. Under this program, the CCC accepts competitive bids for CCC-owned nonfat dry milk stocks for the manufacture of casein. However, while USDA has accepted some bids, in many cases processors’ bids have been so low that USDA has rejected them. Finally, the International Trade Commission’s report found that while milk protein concentrate is considered a useful additive to standardize protein content, the limitation on its use inherent in the Food and Drug Administration’s standards of identity further restricts domestic milk protein concentrate production. This limitation keeps the market for milk protein concentrates relatively small in comparison to the market for other dairy proteins. Given these restrictions, the International Trade Commission estimated that the total U.S. market for milk protein concentrate is 40,000 to 50,000 metric tons per year. A new production facility in New Mexico reportedly is capable of producing 16,000 tons annually. Therefore, barring a large drop in imports or changes to the standards of identity, the demand for milk protein concentrate would have to increase substantially to induce additional domestic production. 
To the extent that the proposed program can overcome these challenges, it could provide incentives for manufacturers to produce alternative dairy proteins domestically. Should these proteins replace some nonfat dry milk production, they could reduce federal costs for the price support program. However, the overall impact on federal costs would depend on whether these reductions are offset by the cost of the subsidy program itself. Various dairy experts disagree as to whether subsidizing domestic dairy protein production would result in a net increase or decrease in federal costs. For example, a study that was published in May 2004 by researchers at Cornell University concluded that reduced costs to the price support program would not be great enough to offset the cost of the subsidy program. Conversely, an analysis by the National Milk Producers Federation found that a protein subsidy program providing assistance up to $2.30 per hundredweight of skim milk would result in a net cost savings for the federal government. The study by Cornell University researchers also estimated that a subsidy program for casein and milk protein concentrate would raise average milk prices by $0.40 per hundredweight, yielding an increase in farm income of $913 million. These increases would have greater impacts on farm income and milk production in areas of the country with higher Class IV utilization, such as the West. Also, should these proteins replace some nonfat dry milk production, the resulting effects on the prices that consumers pay for products made with dairy proteins could be mixed. If dairy protein prices increase with reduced imports, consumer prices for those products for which they are an ingredient (such as cheese), could also increase to the extent that these price changes are passed through. However, additional production resulting from higher farm prices could lower consumer prices for butter. 
Over the long term, increased domestic production of casein and milk protein concentrate could lower their production costs and also the costs of other products for which they are ingredients. With the subsidy program in place, the economic efficiency of resource allocation would likely decrease as the government provides incentives for the production of proteins that could potentially be supplied more cheaply through imports. Some dairy experts indicated that DEIP can help reduce government expenditures by allowing USDA to subsidize exports rather than purchase products and maintain high stock levels through the price support program. However, in some cases, academic experts indicated that under current market conditions the impact of DEIP on U.S. prices is limited. For example, one source noted that with CCC purchases of nonfat dry milk at over 800 million pounds in 2002, and with over 1 billion pounds of CCC stocks on hand, DEIP currently has no impact on U.S. prices. Also, WTO commitments have limited the scope of DEIP. For example, in the Uruguay Round Agreement on Agriculture the United States committed to reducing the quantity of subsidized exports by 21 percent and the value of these exports by 36 percent over the period from 1995 to 2000. Therefore, a couple of alternative policy options have been discussed with regard to DEIP, using it more effectively or eliminating it entirely. Some dairy experts suggested that the government could make greater use of DEIP. The American Farm Bureau study concluded that DEIP may be underutilized, noting that it is difficult to develop foreign markets unless a commitment is made to serving the market. Such a commitment is more difficult if a given product is made available only when it is in surplus. The study criticized USDA for (1) being slow to invite and accept bids and (2) concentrating on products in surplus. 
Other dairy experts indicated that invitations for DEIP bids may be announced too late in the year for potential exporters to participate in the program due to seasonal sales patterns. To improve the effectiveness of DEIP, the American Farm Bureau study recommended three potential changes: (1) exporters should be encouraged to submit bids for products and countries that offer the greatest potential for longer-term market development, with USDA using DEIP in conjunction with the Foreign Agricultural Service to coordinate export assistance programs to fully develop markets; (2) USDA should consider DEIP bids for any eligible products and not base acceptance primarily on removing surplus products from the domestic market, accepting bids for products that may have the greatest market development potential and do not violate WTO subsidization volume limits; and (3) USDA should act under shorter time frames in reviewing and accepting DEIP bids to maximize the volume allowable under WTO rules. USDA indicated that it has announced and awarded subsidies under the DEIP program up to the limits allowed by WTO rules for nonfat dry milk and cheese and that DEIP-assisted exports of butterfat have varied depending on market conditions. USDA also noted that the Foreign Agricultural Service has worked closely with the dairy industry to ensure that national and annual DEIP assistance optimizes longer-term market development prospects and minimizes any potential detrimental effects on the U.S. market. For example, a Foreign Agricultural Service official stated that the Service generally tries to wait until the market for a particular product, such as butterfat, is in surplus before inviting bids for DEIP export subsidies. The official said that if the market is not in surplus, subsidized exports would increase U.S. market prices for products manufactured with butterfat, such as ice cream. 
USDA further indicated that expanding the use of DEIP is not possible as the program is bound by quantitative and monetary caps under WTO rules. Finally, USDA noted that the bid review and acceptance process occurs within a time span equivalent to less than one working day. Specifically, another Foreign Agricultural Service official stated that USDA responds to all bid proposals by 10:00 a.m. on the next business day after the proposals are submitted. To the extent that USDA could identify ways to make greater use of DEIP, the effects on the dairy industry of increased exports of U.S. dairy products could be similar to the effects of an increase in demand. In the short term, greater demand for dairy products would increase wholesale prices, raising farm prices and, to the extent that these changes are passed on to consumers, retail prices. While higher retail prices could cause marginal declines in domestic consumption, higher farm prices over the long term could stimulate additional production, which could put downward pressure on wholesale and retail prices. Under such a scenario, price volatility could decrease as the government balances swings in domestic manufactured product prices by adjusting its level of support for DEIP. However, economic efficiency would decrease to the extent that increased use of DEIP induces additional production that would not have occurred without the government program. Finally, in relation to federal costs, while increased use of DEIP could increase federal costs for the program, these increases might be offset by decreases in the costs of purchasing and storing commodities under the price support program. Further, greater competition for milk supplies used in manufactured products could increase Class I prices, decreasing MILC payments while the MILC program remains in existence. Eliminating DEIP could lower U.S. market prices and increase government costs, specifically for nonfat dry milk. 
For example, the May 2004 USDA study estimated that exports of nonfat dry milk under DEIP account for approximately 9 percent of total U.S. production. The study further found that in comparison to just eliminating the price support program, eliminating DEIP as well would reduce wholesale nonfat dry milk prices another 5 percent below baseline levels. Under this scenario, the study reported that for 2002 through 2007, farm income would decline by approximately $5.3 billion and payments required under the MILC program would increase by approximately $900 million. Therefore, while ending DEIP would eliminate the costs of the subsidies provided by the program, these cost savings could be offset by increased MILC expenditures. Also, should DEIP be eliminated without eliminating the price support program, there would be an increase in federal costs for purchases of commodities that would otherwise have been exported. The American Farm Bureau study reported that in the short term eliminating DEIP would cause dairy products formerly exported under the program to be commercially exported. However, the study found that over the long term dairy product prices in the U.S. market would be too high relative to world prices to allow formerly subsidized products to move as commercially exported products. In the short term, a decline in the prices of some manufactured products formerly exported under DEIP, such as nonfat dry milk and cheese, would lower the blend prices farmers receive which, in turn, would cause a decline in milk production. This reduced production could cause butter prices to increase. However, the overall effect is a decrease in both average milk prices and milk supplies. With lower prices for some manufactured dairy products and higher prices for others, consumer prices may rise or fall depending on the extent to which these price changes are passed on by retailers. 
In other respects, the elimination of DEIP could increase price volatility because USDA would lose an outlet to help balance surplus production. However, the supply adjustment could marginally increase economic efficiency as excess supply is wrung out of the market. Further, to the extent that DEIP is providing incentives for nonfat dry milk production rather than production of other dairy products, elimination of these incentives would also marginally increase economic efficiency. The increased volatility in farm milk prices has increased dairy farmers’ interest in managing the risk that low prices will reduce farm incomes. Risk management alternatives to stabilize dairy farmers’ incomes can take many forms. These include, among others, increased use of forward contracting to guarantee prices, such as through USDA’s Dairy Forward Pricing Pilot Program; revenue insurance policies that pay farmers when dairy proceeds fall below specified levels; and tax-deferred savings incentives that encourage setting money aside during higher-income years to be withdrawn during lower-income years. Figure 41 shows the effects of options to facilitate risk management under low- and high-price scenarios over the short and long terms. Forward contracting of milk—entering into a contract with a processor or cooperative to sell milk in the future at a guaranteed price—is one way for farmers to manage the risk that volatile prices create for their income. Although forward contracting may prevent farmers from benefiting from price increases, this risk management tool stabilizes their income and ensures that the price they receive does not fall below the contracted price. In this respect, forward contracting, like the MILC program, limits the decline in farm income from a fall in farm milk prices. 
Most, but not all, dairy cooperatives offer their members the ability to enter into forward contracts to guarantee the prices members will receive for their milk. One option that could allow dairy farmers to make more use of forward contracting to manage their price risk would be to extend and expand the Dairy Forward Pricing Pilot Program to cover Class I milk. The program was mandated by the Congress in 1999 and is scheduled to expire at the end of 2004. Under the pilot program, farmers are not allowed to enter into forward contracts for Class I milk. However, if the program were extended or made permanent, it could also be expanded to allow forward contracting on all classes of milk. The forward contracting pilot program offers farmers who sell to proprietary processors an option to enter into fixed-price forward contracts by providing processors with an exemption from paying the otherwise required FMMO minimum prices. (Dairy cooperatives are already exempt from paying the FMMO minimum prices.) Normally, when either cooperatives or proprietary processors buy milk from farmers under forward contracts, they can offset their risk that farm prices might be lower at the end of the contract period (in which case they would pay more for milk than their competitors who buy later at the lower price) with a futures market transaction in which they would gain an amount equivalent to the decrease in the farm price of milk. However, when prices rise during the contract, the cooperatives or processors will lose money on the futures market transactions and cannot afford to pay farmers more than the contracted price. The pilot program encourages proprietary processors to enter into forward contracts with farmers by exempting these processors from having to pay farmers the relevant order minimum price for the portion of their milk that is under forward contract. 
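The offsetting futures transaction described above can be sketched numerically. In the sketch below, a processor commits to a fixed forward price and takes a short futures position at that same price; the contract price and outcomes are hypothetical, chosen only to illustrate why the hedge works when prices fall and why the processor cannot pay more than the contract price when prices rise.

```python
# Hypothetical sketch of the short-futures hedge described above;
# the $13.00/cwt contract price is illustrative, not from the pilot program.

CONTRACT_PRICE = 13.00  # $/cwt the processor promised the farmer

def effective_cost_per_cwt(spot_at_expiry):
    """The processor pays the farmer the fixed contract price, and the
    short futures position gains (contract - spot) when prices fall and
    loses when prices rise. The net cost therefore tracks the spot
    price, so the processor is no worse off than competitors buying
    at the market price."""
    futures_gain = CONTRACT_PRICE - spot_at_expiry  # per cwt
    return CONTRACT_PRICE - futures_gain

# Prices fall: the futures gain offsets the above-market contract price.
print(effective_cost_per_cwt(11.00))  # 11.0

# Prices rise: the futures loss raises the net cost to the spot price,
# which is why the processor cannot afford to pay the farmer more than
# the contracted $13.00.
print(effective_cost_per_cwt(15.00))  # 15.0
```

The second case shows the constraint the report describes: when prices rise during the contract, the hedged buyer's futures loss consumes exactly the savings from the below-market contract, leaving nothing with which to pay farmers above the contracted price.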
Without this program, proprietary firms might not readily enter into forward contracts because if they offset their risk of a price decline in the futures market and the prices were to rise above the contract price, they might have insufficient funds, after covering their futures market loss, to pay their forward-contracted farmers the minimum price. Some cooperatives have opposed proposed legislation that would make permanent the authority for forward contracting by proprietary processors on the grounds that allowing those processors to pay farmers less than the minimum price could undermine the federal order pricing system. However, one academic analysis of options to address price volatility indicated that this exemption would not undermine the federal order pricing system because while proprietary processors could pay farmers less than the minimum price, these processors would still have to pay minimum class prices into their federal order pools. Another academician told us that even if allowing forward contracting on Class I milk caused farmers to receive lower minimum prices in return for reduced risk, neither that price reduction nor any other rationale would be a good reason for not making fluid milk eligible for forward contracting by those farmers who want to enter into such contracts. In October 2002, USDA released a report that examined the performance of the pilot program from its inception, in September 2000, through March 2002. The report found that the average monthly price received for milk sold under forward contracts authorized by the pilot program was about $0.50 per hundredweight lower than the average monthly price that would have been received for that same milk had it not been under contract, but that the variation in price for milk sold under forward contracts was much less. At times the contract price exceeded the price that would have been received without the contracts, and at times it was lower.
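The trade-off USDA's pilot-program report described — a somewhat lower average price in exchange for far less variation — can be illustrated with made-up price series. These numbers are assumptions for illustration, not USDA data.

```python
from statistics import mean, pstdev

# Hypothetical monthly prices ($ per hundredweight) -- illustrative only,
# not USDA pilot-program data -- showing the pattern the report described:
# forward-contracted milk can average slightly less but vary far less.
uncontracted = [13.1, 10.4, 15.2, 9.8, 14.5, 11.0]
contracted = [12.0, 11.8, 12.3, 11.9, 12.2, 12.1]

print(round(mean(uncontracted) - mean(contracted), 2))  # modestly lower mean under contract
print(round(pstdev(uncontracted), 2))  # much larger month-to-month variation
print(round(pstdev(contracted), 2))    # far more stable under contract
```

In some months the contracted series exceeds the uncontracted one and in others it falls short, mirroring the pattern reported for the pilot program.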
More recent USDA data show that the contract price exceeded the price that would have been received without the contracts in each month from April 2002 through July 2003, but for the remainder of 2003 the contract price was lower. On average, for the entire period from April 2002 through December 2003, the contract price was about $1.40 per hundredweight higher. Participation in the program during the period covered by the USDA report was relatively small and was concentrated most heavily in the Upper Midwest Order, which has a low Class I utilization rate. More recent USDA data show that participation remained relatively low through the end of 2003. Farmers in orders with high Class I utilization rates had less opportunity to participate because Class I milk was ineligible, so it is uncertain whether participation rates in those orders would have been higher if farmers were allowed to enter into forward contracts for Class I milk. One academic source also noted that even if the program were expanded to include Class I milk, fluid milk processors might be reluctant to engage in forward contracting for this milk because there is no good hedge in the futures market for Class I prices. In addition, the USDA study reported that participating farmers were generally more accustomed to using risk management tools than were nonparticipants. Farmer education on using forward contracting may be important to increase participation. Another option to help dairy farmers manage their price risk is revenue insurance. Revenue insurance allows farmers to protect themselves against loss of revenue from, for example, low market prices, high feed prices, or reduced production due to natural disasters. Revenue insurance can stabilize farm income, reducing the need for direct payments such as under the MILC program, during periods of low prices. 
Whether there would be savings to the government would depend on whether the subsidies required to induce farmers to participate in the insurance program would offset the savings from reduced direct income support. Overall production could increase with this type of option because the revenue insurance would step in when prices are low, representing, in effect, a countercyclical payment. With this type of income support, downward supply adjustments during periods of low prices might not happen as quickly. Consumers would then benefit from prolonged periods of low prices to the extent that price changes are passed through to the retail level. USDA’s Risk Management Agency currently operates several pilot programs in selected states, using three different approaches to revenue insurance. Although none of these programs applies to dairy farming, they could, in theory, be extended to cover dairy farmers. Doing so could be difficult, however, because the complexity of the dairy industry and the variation in management expertise among farmers would make it hard to estimate the probability of losses, a calculation that is necessary for pricing the insurance. Tax-deferred savings accounts allow farmers to manage fluctuations in farm income by accumulating cash reserves during higher income years with deferral of some tax liability. Farmers could then withdraw from these accounts in lower income years and, in essence, receive tax benefits if they accumulated funds in these accounts. One study suggested that farmers might be more comfortable with this risk management tool than with forward contracting because these accounts resemble individual retirement accounts and other familiar tax-deferred savings vehicles. Similar to revenue insurance, this option has the potential for reducing direct income support payments from the government, such as MILC payments, by stabilizing farm income during periods of low prices.
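The deposit-in-good-years, withdraw-in-bad-years mechanism described above can be sketched as follows. All figures, including the income target, are hypothetical assumptions for illustration, not program parameters.

```python
# Minimal sketch, with made-up figures, of how a tax-deferred farm savings
# account could smooth income: deposits in higher-income years, withdrawals
# in lower-income years. The 65,000 target is an assumption.

incomes = [90_000, 40_000, 95_000, 35_000]  # hypothetical annual farm income
target = 65_000
balance = 0
smoothed = []
for income in incomes:
    if income > target:                # higher-income year: set cash aside
        deposit = income - target
        balance += deposit
        smoothed.append(income - deposit)
    else:                              # lower-income year: draw the account down
        withdrawal = min(target - income, balance)
        balance -= withdrawal
        smoothed.append(income + withdrawal)

print(smoothed)  # each year's spendable income is pulled toward the target
```

In this stylized case every year ends at the target, which is the income-stabilizing effect that could reduce the need for direct payments such as MILC during low-price periods.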
However, whether these accounts worked as a risk management tool would depend on whether the authorization of these new accounts—with their tax benefits—led to substantial additional savings by farmers in higher income years, or whether the new accounts were simply funded with savings that would have been made anyway in other, non-tax-favored accounts. Also, like revenue insurance, withdrawals from tax-deferred savings would represent, in effect, countercyclical payments during periods of low prices. Thus, withdrawals from tax-deferred savings accounts could maintain overall production by dampening supply adjustments during periods of low prices. Similarly, consumers would benefit from the prolonged periods of low prices to the extent that price changes are passed through to the retail level. Many issues would have to be resolved to start this type of account, such as the amount that farmers would be allowed to deposit in any year, whether the government would match any funds deposited, and whether there would be restrictions on farmers’ ability to withdraw funds from the accounts based on price drops or income losses. Canada and Australia both offer these accounts, and they were first proposed in the United States in 1996. Since then, there have been several proposals to adopt them here, but none has been implemented. According to one dairy expert, given the long-term declining demand for fluid milk as well as the increasing productivity of dairy farmers, the best way to maintain farm income is through some form of effective supply management. Following periods of excess supply in the 1980s, the U.S. government introduced supply management initiatives, such as the Dairy Termination Program. Other options, such as production quota systems, have been tried by different dairy-producing nations.
Thus, a number of options have been discussed to try to manage dairy supplies, including reintroducing a program similar to the earlier Dairy Termination Program, or establishing mandatory supply controls through quota allocations, as has been done in other countries. Figure 42 shows the effects of options to manage raw milk supplies under low- and high-price scenarios over the short and long terms. One option that has been discussed to manage milk supplies is to reintroduce the Dairy Termination Program. This program was first tried as part of the Food Security Act of 1985. Under the Dairy Termination Program, farmers submitted competitive bids for the minimum raw milk price per hundredweight for which they would be willing to comply with the program requirements. If their bids were accepted, farmers were required to sell for slaughter or export their entire herds and not participate in dairying for the next 5 years. The program was in effect from April 1986 through September 1987 and resulted in the removal of more than 1 million cows, or about 9 percent of the national dairy herd in 1985. In total, this culling of the national dairy herd was estimated to decrease milk supplies by about 39.4 billion pounds between 1986 and 1990 at a cost of more than $1.8 billion. California farmers accounted for the largest portion of this reduced production, but farmers in southeastern states had the highest rates of participation. In a 1989 report looking at the effects of the Dairy Termination Program, we indicated that it was unlikely to have a lasting effect on milk production, given that some participants would likely return to production after the 5-year waiting period. In the short term, high market prices resulting from lower levels of production reduced federal purchases of surplus dairy products. In 1989, we estimated that these reduced purchases provided a net cost savings to the government of $2.4 billion for fiscal years 1986 through 1990. 
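A rough sense of the program's cost effectiveness follows from the figures cited above: more than $1.8 billion spent to remove an estimated 39.4 billion pounds of production between 1986 and 1990. The back-of-envelope arithmetic below is illustrative only and ignores offsetting savings such as the reduced federal purchases noted above.

```python
# Back-of-envelope arithmetic from the Dairy Termination Program figures
# cited in the text: >$1.8 billion to remove ~39.4 billion pounds of milk.

cost_dollars = 1.8e9
milk_removed_cwt = 39.4e9 / 100  # pounds converted to hundredweight

cost_per_cwt = cost_dollars / milk_removed_cwt
print(round(cost_per_cwt, 2))  # roughly $4.57 per hundredweight not produced
```

Any payment level needed to attract bids in a new program would drive this per-hundredweight cost up or down accordingly.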
However, over the long term, we predicted that increased production would bring the return of excess milk supply. The effects of reintroducing this supply management alternative would depend on a variety of factors. In particular, the effects on federal costs and milk production would depend heavily on how much farmers needed to be paid to terminate their herds and agree not to produce for a specific period of time. Farmers’ decisions about whether to participate in a new Dairy Termination Program at a certain price would rest on the individual profitability of dairy farms, the long-term production outlook of the individual farmer, and the expectation of certain market conditions. For example, with today’s high market prices, farmers may be unlikely to agree to stop production except at payment levels that could make the program prohibitively expensive. During low-price periods, the program could potentially reduce supply to a level that reduces overall government costs for both the price support program and the MILC program, as long as the latter program remains in existence. In either case, the program would likely increase price volatility and consumer prices, although higher consumer prices for manufactured products could be mitigated by additional imports. Because farmers in southeastern states had the highest rate of participation in the Dairy Termination Program, reintroducing the program would likely reinforce the shift of production to western states. As compared to western farmers, eastern farmers tend to have lower profitability and higher costs of production. Consequently, they are more likely to participate in this kind of program and less likely to return to production after the program, because their re-entry costs are higher. Thus, a new Dairy Termination Program could have disparate regional effects.
A second option that has been discussed for supply management is to implement a quota system, as has been done in Canada and in the European Community. Under this option, production would be controlled by allocating production shares, or quota shares, limiting how much milk each dairy farmer could market. Such quota shares could be set based on a farmer’s historical marketing level. Any milk marketed over the allocated quota shares would be priced far below the cost of production. Quota shares could be traded and increased over time as additional supplies are needed. A quota system would help manage supply by taking away incentives to increase production based on the benefits provided by government programs, such as MILC or the price support program. Moreover, compared to other supply management alternatives, a quota system has a greater likelihood of achieving long-term supply management, because production incentives would continue to be limited by the number of quota shares available in the system. With more effective supply management, federal costs for other programs such as MILC and the price support program would be reduced. Additionally, federal costs for administering a quota system are relatively low. In the short term, price volatility might increase, but as the market adjusts to a stabilized production level, long-term price volatility could be reduced. Nonetheless, there are some drawbacks to implementing a quota system. As quota shares reduced production, consumer prices could increase. While demand for fluid milk products is relatively price inelastic, higher prices could reduce long-term consumption by providing incentives to purchase substitute goods. Also, the distribution of quotas would provide a substantial benefit to current farmers to the detriment of farmers who might try to enter the system in the future, entrenching geographical production patterns and stifling incentives for technological enhancements.
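The quota-trading logic described above can be illustrated with a small calculation. The costs and market price below are hypothetical assumptions, not figures from this report.

```python
# Illustrative sketch, with assumed costs and prices ($ per hundredweight),
# of why tradable quota tends to migrate toward lower-cost farmers: quota is
# worth more to whoever can produce the milk more cheaply.

def quota_value(milk_price, unit_cost):
    """Per-hundredweight profit a farmer earns on a unit of quota."""
    return milk_price - unit_cost

price = 13.50  # assumed market price under the quota system
efficient_value = quota_value(price, 10.00)    # low-cost farmer's value of quota
inefficient_value = quota_value(price, 13.00)  # high-cost farmer's value of quota

# Any quota price between the two values leaves both farmers better off,
# shifting production toward the more efficient operation.
print(efficient_value - inefficient_value)  # -> 3.0, the per-cwt cost advantage
```

A risk discount would narrow the bid somewhat, but as long as the efficient farmer's offer exceeds the profit the less efficient farmer earns on its own quota, both gain from the trade.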
Given the high production costs in some areas and the greater efficiency of larger, newer dairy operations, this would represent an economic inefficiency because milk would not be produced and marketed as cheaply as possible. However, this drawback could be limited to the extent that a well-functioning market is established to trade quota shares. If participation costs in this market were kept low, farmers would still realize incentives to adopt technology enhancements. The most efficient dairy farmers’ willingness to pay for additional quota shares would represent their cost advantage over less efficient farmers plus some assessment of risk. To the extent that this willingness to pay was greater than the profits realized by less efficient farmers, these less efficient farmers would have an incentive to sell their quota shares to more efficient farmers. Thus, in the long term, the quota system might not hamper increased economic efficiency if trade is relatively easy.

The following are GAO’s comments on the Department of Agriculture’s letter dated October 22, 2004.

1. We revised the report to reflect that farm prices during 2002 and 2003 were the lowest since 1979.
2. We revised the report to reflect that USDA has estimated that average 2004 farm prices will be more than $3 per hundredweight higher than they were in 2003.
3. We agree that there are limitations on the use of commissary data as a proxy for proprietary wholesale data. However, as noted in USDA’s comments, there seems to be no viable alternative. We revised the report to further discuss the limitations of using these data.
4. During the course of our work, USDA declined to provide us with a draft of its study. Although USDA indicates that it submitted its study to the Congress on September 10, 2004, we were unable to obtain a copy until early October 2004, well after we had provided a draft of our report to USDA for review and comment.
Because of this timing, we were unable to fully consider and analyze the results of USDA’s study and related documents. Furthermore, although USDA notes that it developed quantitative estimates of the effects on producers (dairy farmers) and consumers and the cost of various federal programs under various policy scenarios, the scope of USDA’s analysis was more limited than the range of policy options discussed in our report.
5. We agree that the potential effects of various policy options in appendix VII are examined independently and qualitatively within the existing program structure. Our discussion of dairy policy options is not a set of policy recommendations. As stated in the report, to identify these policy options and their potential impacts we relied heavily on a synthesis of the views of leading dairy experts and the results of an extensive literature search, including our review of more than 50 studies and other publications. Time and resource constraints for completing our work precluded us from developing or contracting for the use of an economic model that would have provided quantitative estimates of these potential impacts. In addition, some of the policy options would have been difficult to model and quantify, such as the potential impacts of accelerating USDA’s hearing and rulemaking process for amending FMMOs. The report also notes that we compared the policy options identified against a baseline scenario of policies in place as of August 2004. This baseline scenario existed at the start of our work and was needed to provide a consistent context for our analysis.
6. Regarding caveats, as noted in the report, we examined the impact of federal dairy program changes and policy options on six policy considerations: farm income, milk production, federal costs, price volatility, economic efficiency, and consumer prices.
We acknowledge in the report that other stakeholders may have different views on the importance of these policy considerations or other considerations that we did not include in our analysis. The report also states that the potential effects of policy options on these considerations could vary depending upon economic conditions and other policy decisions. In this regard, we did not assess the options’ overall economic or budgetary impacts, or their consistency with U.S. international trade commitments or positions in ongoing negotiations. As indicated in the report, each option has varying potential impacts on the policy considerations used in our analysis. Despite these caveats, we believe this analysis is informative and helpful to congressional decision makers who must weigh competing interests in determining dairy policy.
7. We have made some technical corrections and clarifications in light of USDA’s comments, but we do not agree that we mischaracterized the operation of current programs or the effects that changes to current programs or the introduction of new programs would have on program outlays, producers (dairy farmers), and consumers. See also our responses in comments 8 through 19 below.
8. Although during the course of our work, USDA officials suggested that it was possible that dairy farmers might divide their holdings to make more of their milk eligible for compensation through the MILC program, we deleted this discussion from the report in light of USDA’s comment that it has no evidence that farmers have done this.
9. We correctly state in the report that farmers may choose the month that they begin accepting their payments. However, in response to USDA’s comment we revised the report to clarify that farmers’ discretion on when they receive MILC payments is limited by USDA’s regulations for implementing this program.
Specifically, these regulations prohibit a farmer from selecting a month to receive payments if the month has already begun, if the month has already passed, or during which no milk was produced. A farmer also cannot change a previously selected start month after the 15th of the month before the month selected. Once monthly payments begin, a farmer has no discretion in determining in which month or months to receive payments.
10. The discussion of expanding the use of DEIP in the draft report did not suggest that WTO rules, including quantitative and monetary caps and product-specific restrictions, be violated. Rather, this discussion identified ways in which dairy experts suggested that DEIP might be used more effectively as a marketing tool. However, we have revised the language in the report to more fully reflect USDA’s views and to minimize confusion as to what we mean by increasing the use of DEIP as a marketing tool without exceeding WTO caps.
11. We revised the language in the report to clarify that USDA has not proposed or considered any proposal to eliminate the Dairy Price Support Program. However, USDA analyzed the potential effects of eliminating this program in the study it prepared in response to the 2002 Farm Bill mandate.
12. We agree with USDA that ensuring an adequate level of milk production is not an objective of the FMMO program. We revised the report accordingly and added language suggested by USDA to better describe the FMMO program’s objectives.
13. We agree that price volatility contributes to disorderly market conditions, and we revised the report to better explain the potential causes of price volatility. We also agree that FMMOs cannot directly address price volatility in wholesale dairy markets. However, by setting minimum prices that must be paid to farmers for raw milk, the FMMOs can affect the extent to which price volatility is reflected in the prices that farmers receive.
14.
We revised the report to reflect that cooperatives owning the capacity to produce multiple products may still have an incentive to shift milk to the higher-valued use, in order to provide greater returns to their members. We also expanded our discussion of other factors that might influence how milk is used, such as transportation costs and changes in processing technology.
15. We acknowledge that the report does not explain how a competitive pay price series could be created for use in the FMMO program. However, this option was identified by stakeholders during the course of our work. Other options discussed in the report also may present challenging implementation issues, and in many cases the report discusses those issues. Nonetheless, we revised the report to reflect that USDA and a panel of academicians attempted to devise a competitive pay price series but ultimately were unsuccessful. We also revised the report to note that a key difficulty in developing a competitive pay price series is the need for data that are not already influenced by the FMMO classified pricing system.
16. We acknowledge that the report does not explain how, after combining Class III and Class IV, milk would be priced in the expanded class. However, this option was identified by stakeholders during the course of our work. Other options discussed in the report also may present challenging implementation issues, and in many cases the report discusses those issues. Nonetheless, we revised the report to reflect that a barrier to combining Class III and Class IV is identifying an appropriate pricing formula that considers the products in an expanded class.
17. We revised the report to reflect this clarification and added language suggested by USDA to better describe the FMMO program’s objectives.
18. We agree with USDA that any area’s supply of fluid milk can come from local or distant farmers.
USDA noted that the Class I price surface, generated from different minimum Class I prices in different locations, reflects the cost of moving milk from surplus to deficit markets. However, as we pointed out in a 1988 report, when the price surface does not also account for regional differences in production costs, it can result in incentives for overproduction in certain regions. Furthermore, the Class I price surface that resulted from the 2000 federal order reform differs from USDA’s recommended option.
19. The report accurately reflects the views of some stakeholders that the slowness of USDA’s hearing and rulemaking process used to modify FMMOs inhibits the agency’s ability to respond to changing market conditions or the marketing of new products. The report also discusses challenges USDA faces to improving this process while ensuring the promulgation of economically sound regulation. Further, the report notes that USDA has made efforts to improve the hearing process, particularly in the way it evaluates its contracts for hearing transcripts.

In addition to the individuals named above, Jay Cherlow, Barbara El Osta, Joshua Habib, Eileen Harrity, Christopher Murray, and Cynthia Norris made key contributions to this report. Important contributions were also made by Beverly Ross, Jena Sinkfield, and Amy Webbink.

Dairy Industry: Estimated Economic Impacts of Dairy Compacts. GAO-01-866. Washington, D.C.: September 14, 2001.
Dairy Industry: Information on Milk Prices and Changing Market Structure. GAO-01-561. Washington, D.C.: June 15, 2001.
Fluid Milk: Farm and Retail Prices and the Factors That Influence Them. GAO-01-730T. Washington, D.C.: May 14, 2001.
Dairy Products: Imports, Domestic Production, and Regulation of Ultra-filtered Milk. GAO-01-326. Washington, D.C.: March 5, 2001.
Dairy Industry: Information on Prices for Fluid Milk and the Factors That Influence Them. GAO/RCED-99-4. Washington, D.C.: October 8, 1998.
Dairy Industry: Information on Marketing Channels and Prices for Fluid Milk. GAO/RCED-98-70. Washington, D.C.: March 16, 1998.
Dairy Programs: Effects of the Dairy Termination Program and Support Price Reductions. GAO/OCE-93-1. Washington, D.C.: June 15, 1993.
Federal Dairy Programs: Insights Into Their Past Provide Perspectives on Their Future. GAO/RCED-90-88. Washington, D.C.: February 28, 1990.
Milk Pricing: New Method for Setting Farm Milk Prices Needs to Be Developed. GAO/RCED-90-8. Washington, D.C.: November 3, 1989.
Dairy Termination Program: An Estimate of Its Impact and Cost-Effectiveness. GAO/RCED-89-96. Washington, D.C.: July 6, 1989.
Milk Marketing Orders: Options for Change. GAO/RCED-88-9. Washington, D.C.: March 21, 1988.
Overview of the Dairy Surplus Issue—Policy Options for Congressional Consideration. GAO/RCED-85-132. Washington, D.C.: September 18, 1985.
Effects and Administration of the 1984 Milk Diversion Program. GAO/RCED-85-126. Washington, D.C.: July 29, 1985.
In 2003, U.S. dairy farmers marketed nearly 19.7 billion gallons of raw milk, one-third of which was used in fluid milk products. Farmers, cooperatives, processors, and retailers receive a portion of the retail price of milk for their part in providing milk to consumers. During 2002 and 2003, farm prices fell while retail prices did not similarly decline. This pattern raised concerns about a growing spread between farm and retail prices. Farm prices have since increased, reaching record highs in April 2004. As requested, GAO examined (1) the portion of retail milk prices received by farmers, cooperatives, processors, and retailers, how this changed over time, and the relationship between price changes at these levels; (2) how various factors influence prices and affect the transmission of price changes among levels; and (3) how federal dairy program changes and alternative policy options have affected or might affect farm income and federal costs, among other considerations. Between October 2000 and May 2004, on average, farmers received about 46 percent, cooperatives 6 percent, wholesale processors 36 percent, and retailers over 12 percent of the retail price of a gallon of 2 percent milk (the most common type of milk purchased) in the 15 U.S. markets GAO reviewed. During this period, in 12 of the 15 markets, the spread between farm and retail prices increased. However, in some markets, the price spread between these levels increased and then moderated. Price changes at one level were most closely reflected in changes at adjacent levels of the marketing chain. Farm, cooperative, wholesale, and retail milk prices are determined by the interaction of a number of factors. For example, farm prices are affected by the supply of raw milk and the demand for milk products such as fluid milk, cheese, and butter, as well as by federal and state dairy programs.
At the cooperative level, prices are influenced by the cost of services that cooperatives provide, and the relative bargaining power of cooperatives and milk processors. At the wholesale and retail levels, input costs such as labor and energy, and the continued consolidation of firms influence milk prices. Recent changes in federal dairy programs have affected farm income, federal costs, and other considerations. For example, the Milk Income Loss Contract program has supported some farm incomes but has exceeded initial cost estimates because of low farm prices. A number of options have been suggested to change federal dairy policies such as amending federal milk marketing orders and raising or eliminating the support price. In general, these options would have mixed effects depending upon whether milk prices were high or low over the short or long term. For example, options that increase farm income over the short term tend to increase milk production and lower farm prices over the long term. These options also tend to be costly for the federal government during periods of low prices.
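The marketing-chain shares reported above can be turned into a per-gallon dollar split. The $2.80 retail price below is an assumed example for illustration, not a figure from the report.

```python
# Hypothetical dollar split of a retail gallon of 2 percent milk using the
# average shares reported in the text (farmers ~46 percent, cooperatives ~6,
# wholesale processors ~36, retailers ~12). The $2.80 price is assumed.

retail_price = 2.80
shares = {"farmer": 0.46, "cooperative": 0.06, "processor": 0.36, "retailer": 0.12}

split = {level: round(retail_price * share, 2) for level, share in shares.items()}
print(split)  # dollars received at each level of the marketing chain
```

At this assumed price, the farmer's 46 percent share works out to about $1.29 of the gallon, with the remainder divided among the cooperative, processor, and retailer.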
Key DOD financial managers face considerable challenges in addressing the financial management needs of a DOD organization that is without parallel in the size, diversity, and complexity of its operations; repeated audit findings that deficiencies in personnel experience or competencies are a major contributor to DOD’s continuing financial deficiencies; and existing and enhanced accounting requirements that must be implemented throughout DOD. DOD’s financial managers are responsible for managing the financial operations of one of the largest and most complex entities in the world—over $1 trillion in reported assets, 3 million military and civilian personnel, and outlays of about $260 billion for fiscal year 1997. DOD has acknowledged responsibility for the world’s largest dedicated infrastructure, reporting that its physical plant has an estimated value of about $500 billion. In addition, based on data provided by DOD, the department has a network of approximately 32,000 financial management personnel, including the positions held by the 1,409 key financial managers responding to our prior surveys. These 1,409 financial management positions are assigned not only to the Office of the DOD CFO (the Under Secretary of Defense (Comptroller)) and to DFAS—the DOD “accounting firm,” but also to financial or budget components in the military services. Adding to the difficulty of carrying out financial operations in DOD is the continuing effort to downsize its operations. DOD has a vast number of financial management systems. In its 1997 report to the Office of Management and Budget (OMB), DOD reported that it had 156 financial management systems. However, as we reported, DOD relies on a significant number of other financial management systems and processes operated by DOD entities outside the DOD Comptroller’s organization, such as acquisition, logistics, and personnel, that provide financial data to DOD’s accounting systems.
These “mixed” systems are part of the financial systems network at DOD. Further exacerbating the task of DOD financial personnel operating with such a large network of systems are the systems’ seriously deficient processes and controls. For example, the DOD Inspector General recently concluded, in part, that DOD’s financial management systems did not comply with federal accounting standards. The diversity, complexity, and size of even the largest private sector corporations pale in comparison to DOD. For example, the company that ranked first in Fortune’s April 1998 list of the 500 largest companies showed assets of about $230 billion, less than 25 percent of DOD’s reported assets for fiscal year 1997. The 1995 revenues of the largest of the Fortune 100 corporations responding to our recent study on financial managers were about $80 billion and the 1993 revenues of the largest state responding to that same study were about $110 billion. In contrast to the largely deficient financial network with which DOD’s financial personnel have been hamstrung for decades, effective, disciplined financial operations have been in place in the private sector and state governments for many years. Specifically, the disciplined process required to generate reliable, accurate financial data has been in place in the private sector for over 60 years following the 1929 stock market crash. In state governments, this disciplined process was enhanced by the passage of the Single Audit Act of 1984. In comparison, former Secretary of Defense William J. Perry stated in the 1995 Annual Report to the President and the Congress that the department’s manifold financial management failures reflect a complex, multifaceted, and antiquated bureaucratic organization structure. 
The size and complexity of DOD’s financial organization notwithstanding, audit reports over the past few years have cited personnel deficiencies, such as the lack of accounting experience or competencies and inadequate training, as a cause of DOD’s serious financial management deficiencies. For example, in our March 1996 report on the results of our financial review of the Navy, we recommended that the Navy and DFAS take action to upgrade the experience of financial managers. In this regard, we cited numerous examples of Navy and DFAS personnel not performing routine required reconciliations or not investigating and resolving unusual trends in large year-to-year account balance variations. More recently, in October 1997, in the course of its work on the department’s working capital funds, which are intended to operate on a businesslike basis, the DOD Inspector General noted continuing pervasive weaknesses in the personnel area, including incomplete or no training, insufficient management oversight, and an inability to respond to a rapidly changing accounting environment. The Inspector General also pointed out the critical link between training and the successful introduction and use of new accounting systems. In addition, the DOD Inspector General reported a widespread failure of accounting personnel to understand basic accounting theories and principles that support transaction entries.

The key legislative initiatives affecting financial reform efforts in DOD and other federal agencies include the following:

- Chief Financial Officers Act of 1990 and Government Management Reform Act of 1994 (GMRA). Together, these acts charge the DOD CFO with, among other things, (1) directing, managing, and providing policy guidance and oversight of all agency financial management personnel, activities, and operations and (2) overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions. Under this legislative mandate, DOD is to annually prepare and have audited DOD-wide and major component—including Army, Navy, and Air Force—financial statements, beginning with fiscal year 1996. The auditors’ reports provide an annual public scorecard to measure agencies’ progress in improving financial management.

- Government Performance and Results Act of 1993 (GPRA or “the Results Act”). The Results Act is intended to improve the efficiency and effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. To the extent that DOD measures the efficiency of its operations, such measures are dependent upon accurate cost information.

- Federal Financial Management Improvement Act of 1996 (FFMIA). This act provides a legislative requirement to implement and maintain financial management systems that substantially comply with federal financial management systems requirements, applicable federal accounting standards, and the standard general ledger. In meeting these requirements, DOD will be required to implement new, evolving accounting standards, as discussed below, as well as the federal financial management system requirements established by the Joint Financial Management Improvement Program (JFMIP). DOD financial management personnel face a considerable challenge in meeting the act’s provisions because few of DOD’s systems meet federal financial management systems requirements and DOD has not yet comprehensively identified and assessed all of its financial management systems.

Another critical challenge for DOD’s financial management personnel is the recently issued accounting standards—which represent enhancements to previous standards—that are currently being implemented (see appendix II for a listing of the standards). 
If properly implemented, these standards will provide the impetus not only for the department to improve its financial management operations and reporting, but also to strengthen its ability to meet critical mission objectives because more accurate information will be provided to decisionmakers.

Neither DOD nor the military services have been able to withstand the scrutiny of a financial statement audit. In its disclaimer of opinion on DOD’s consolidated financial statements for fiscal year 1997, the DOD Inspector General stated that although progress continues, significant deficiencies in the accounting systems and the lack of sound internal controls prevented the preparation of accurate financial statements. In addition, the Inspector General stated that the accounting data were not reliable and that it was unable to satisfy itself that the data were accurate and complete.

Only about half of the DOD key financial managers responding to our surveys had taken any accounting or other technical training related to their career fields in the 2 years we reviewed. Moreover, DOD has not established an annual training requirement for its financial management personnel. More than three-fourths of the state government and private sector company respondents to our survey said that they encouraged their financial managers to take training. It is also noteworthy that several of the state government and private sector respondents indicated that they had designed their training programs, in part, in recognition of the training requirements that existed for holders of professional certifications. In addition, some state government and private sector company respondents had established annual training requirements for their financial managers. 
Our recent studies showed that 53 percent of DOD’s key financial managers responding to our survey did not receive any accounting or financial training during calendar years 1995 and 1996, the 2-year period covered by our survey. As shown in appendix II, seven of the eight new federal financial accounting standards were issued either prior to or during that 2-year span. Furthermore, as discussed previously, the Federal Financial Management Improvement Act, which has major implications for financial managers, was passed in 1996. If DOD is to fully and effectively implement this legislation, its financial personnel must keep abreast of existing and evolving technical federal financial management system requirements. As table 1 shows, 32 percent of DOD financial managers received only general training, which included topics such as computers and supervision. Moreover, an additional 21 percent of DOD respondents did not receive any training during 1995 and 1996. Nearly 75 percent and 90 percent of state government and private sector respondents, respectively, commented that they encouraged their employees to obtain training. Some of these state government and private sector respondents had established training requirements for their financial managers. In addition, several organizations noted that their programs were designed, in part, in recognition of the training requirements that existed for holders of professional certifications. For example, to keep a Certified Public Accountant (CPA) certificate current, 46 of 50 states require individuals to annually obtain at least 40 hours of technical training. Among state government and private sector respondents, 31 percent and 29 percent, respectively, reported having a professional certification (26 percent of the DOD respondents held at least one certification). About a third (33 percent) of the state governments we surveyed had specific financial management training requirements. 
Those states with such requirements had, on average, 36 hours of training required in 1996, including 26 hours in technical accounting training. Similarly, about a third (35 percent) of the private sector companies had specific financial management training requirements. Those respondents had, on average, 31 total hours of required training in 1996, including 18 hours in technical accounting.

To equip employees to deal with rapidly changing management and business practices and requirements, the government has put in place specific training requirements intended to enhance the professionalism of other disciplines. For example, government auditors, including those at DOD, are subject to Government Auditing Standards, which require all audit organizations to have a program to ensure that their personnel maintain professional proficiency through continuing education and training. Under these requirements, each auditor responsible for planning, directing, conducting, or reporting on audits must complete, every 2 years, at least 80 hours of continuing education and technical training in subjects that contribute to the auditor’s professional proficiency.

The acquisition workforce faces similar pressures:

“DOD’s acquisition specialists . . . are challenged today as never before by the rapidly changing environment in which they must function. The pace of efforts to reform basic acquisition systems, reengineer federal operations, and replace traditional management structures with teams and matrixed organizations, coupled with downsizing and the information technology revolution, has resulted in continuously evolving work environments and requirements. 
To meet performance expectations in such environments, acquisition personnel must be current with reforms and trends, adaptable, flexible, and willing to learn new skills.” In response to the Defense Acquisition Workforce Improvement Act of 1991, DOD is implementing a new policy requiring acquisition professionals to participate in continuous learning activities that enhance and supplement the minimum standards for their career fields and specific acquisition assignments. The intent of this initiative was to help ensure that DOD’s acquisition workforce maintains currency in acquisition reforms and disciplinary and functional specialties, while developing multifunctional technical and leadership skills. Under this program, personnel must earn the equivalent of a minimum of 80 continuing professional education hours every 2 years by participating in a variety of activities, including functional, technical, or leadership training; academic course work; experiential and developmental assignments; and professional activities related to their functional areas. In meeting the requirements for this program, emphasis is to be placed on maintaining currency in acquisition functional areas, acquisition reform subjects, other emerging acquisition policy areas, and the individual’s own basic discipline or technical field. The Secretary of Defense recently recognized the importance of upgrading training for the civilian workforce across all disciplines in DOD. In his 1997 “Defense Reform Initiative: The Business Strategy for Defense in the 21st Century,” Secretary of Defense Cohen stated that DOD considers itself to be a world-class organization despite rendering second-rate education, training, and professional development to its civilian employees. 
He added that among the lessons learned from corporate America is that every successful organization finds its people to be its most important asset, and reflects their importance in a strong, corporate-sponsored program of continuous training and professional development. He also stated that DOD must aspire to world-class educational standards. The Secretary stated that the department will establish a Chancellor for Education and Professional Development. The Chancellor will be responsible for developing and administering a coordinated program of civilian professional education and training throughout the department; establishing standards for academic quality; eliminating duplicative or unnecessary programs and curriculum development efforts; and ensuring that DOD education and training responds to valid needs, competency requirements, and career development patterns. He added that the Chancellor will be responsible for operating through a consortium of DOD institutions offering programs of professional development, which is similar to the approach in the defense acquisition area. Under the Secretary’s recent reform initiative, the DOD Chancellor for Education and Professional Development will have overall responsibility for overseeing training of all DOD civilian personnel. Under the CFO Act and related OMB guidance, agency CFOs are to “direct, manage, and provide policy guidance and oversight of agency financial management personnel . . . including . . . the recruitment, selection, and training of personnel to carry out agency financial management functions . . .” and should have the authority to provide agencywide policy advice on the training of all financial management personnel to ensure a cadre of qualified financial management professionals throughout the agency. 
In line with this mandate, the DOD CFO would be the focal point to coordinate with the DOD Chancellor for Education and Professional Development on training needs for DOD’s large network of financial personnel (both civilian and military). Although DOD has not yet established a coordinated agencywide training program for its financial management personnel, a number of initiatives planned or underway throughout DOD are intended to enhance the professionalism of the financial management workforce. For example, DFAS officials informed us that, beginning with fiscal year 1997, DFAS has centralized control over training funds at its headquarters and has allocated 3 percent of its budget for training its financial management staff—an amount within the range of that spent for training reported by state government and private sector respondents. It is particularly encouraging that DFAS is currently finalizing a Financial Management Career Development Plan for its employees that outlines areas of needed expertise by occupational series. DFAS plans to implement this “comprehensive framework to establish flexibility, development, and advancement of the DFAS workforce” during fiscal year 1998. The plan calls for job series-specific competencies and recognizes that these competencies may be obtained through a combination of education, training, and work experiences. According to DFAS, a number of sources were considered in developing these competencies, including prior studies by JFMIP and the Office of Personnel Management. In addition, according to a DFAS training official, DFAS contracted with the Office of Personnel Management to obtain assistance in developing the overall career development concept and in obtaining data on core competencies related to DFAS job functions. 
Within the plan, DFAS recognized the value of professional certifications to workers as a means of achieving expertise and excellence in their fields and as a means of encouraging employees to continue their education and hone their professional skills. The plan represents a good start—it demonstrates a growing DFAS understanding of the importance of and commitment to training. But the plan could be improved in several critical areas. For example, the plan does not specifically address:

- minimum annual training requirements, including a recognition that the majority of the training must be in technical accounting or other related financial management areas;

- the key competencies associated with knowledge of accounting concepts, such as the statements of federal accounting standards, and JFMIP’s systems requirements;

- how the general courses/subject areas will be linked to specific training courses that can be used to attain an identified competency; and

- how the competencies and developmental activities identified will be applied, by job series and grade level, to both new hires and individuals currently on board.

Financial management personnel in the military services will not be subject to the DFAS plan, although, according to DFAS training officials, DFAS financial management courses will eventually be available to them. In addition, officials from each military service told us that they have developed or are developing their own individual programs for their respective financial managers. 
The military services’ efforts to improve the skills of their financial management personnel include (1) an Air Force professional development guide for its financial management and comptroller officers, which provides information on career broadening, formal training, and professional development, (2) an Army initiative intended to improve its personnel capabilities with respect to information technology, workforce effectiveness, financial management tools, funds management, and resource management, and (3) a Navy effort to revise its training program for its civilian financial management workforce to address financial management competencies. However, according to military service officials, these planned or ongoing initiatives to enhance the military services’ financial management personnel do not yet include requirements for a structured, formalized training program with an annual training requirement for financial managers or for consideration of professional certifications as a means of ensuring continual training. However well-intentioned the DFAS and military plans for upgrading their key financial management personnel, they do not provide the departmentwide perspective called for by the Secretary’s reform initiative or in the CFO Act. The department has not yet named a Chancellor for Education and Professional Development. Such departmentwide focus and accountability for overseeing the development and implementation of a comprehensive training program for DOD’s financial personnel, along with personnel in other disciplines, is critical if DOD is to avoid potential duplication and ensure proper coordination among all training and professional development programs. We are encouraged by the recognition of the importance of training for civilian personnel across all professional disciplines in the department and for DOD’s financial community, in particular. 
Various DOD organizations have initiatives planned or underway that are intended to enhance their financial management training programs. When appointed, the DOD Chancellor for Education and Professional Development must work closely with the DOD CFO, as the designated focal point under the CFO Act for the department’s financial personnel and associated financial training, to ensure the implementation of a comprehensive, coordinated training program for financial management personnel throughout the department. By building into these efforts the lessons learned from state government and private sector entities’ experiences, DOD can better move toward developing and maintaining a well-trained, experienced, and innovative cadre of financial managers. Such a well-trained cadre will be necessary if the department is to address its decades-old legacy of deeply entrenched, serious financial weaknesses.

We recommend that the Secretary of Defense ensure that the Under Secretary of Defense (Comptroller) and the Director of DFAS modify the DFAS Financial Management Career Development Plan to include the following:

- Minimum annual training requirements, the majority of which should be in technical accounting or related financial management training courses.

- Key competencies, for all DOD financial management job series, associated with knowledge of accounting concepts, such as the Statements of Federal Financial Accounting Standards, and JFMIP’s systems requirements.

- A specific curriculum that links general courses and/or subject areas to specific training courses that can be used to attain an identified competency.

- Procedures to ensure that both new hires and current financial management staff attain relevant competencies. 
In addition, we recommend that the Secretary of Defense ensure that the Under Secretary of Defense (Comptroller) develop and implement a formalized, structured training program for financial management personnel throughout the department that takes into account the DFAS Financial Management Career Development Plan and those initiatives that are either underway or planned in the military services. This program should be developed in conjunction with the DOD Chancellor for Education and Professional Development. In written comments on a draft of this report, DOD agreed with the general conclusion presented in the report regarding providing a strong emphasis on training as a means of upgrading workforce knowledge of current financial management, accounting, and reporting requirements. However, DOD did not fully agree with our recommendations regarding minimum annual training requirements and a formalized, structured training program for financial management personnel throughout DOD. In regard to our first recommendation that the DFAS Financial Management Career Development Plan be modified to include minimum annual training requirements, DOD suggested that requirements be changed to goals. In support of this modification, DOD stated that factors such as a lack of training funds can impede an employee’s ability to attain all required training within a specified period. We continue to recommend that minimum training requirements be established. Given the poor training record of DOD financial managers in the past—as we reported, 53 percent of our survey respondents had received no accounting or financial training over a 2-year period—and the range of new accounting and systems issues to be mastered, it is imperative that training requirements be established as soon as possible. Without stated minimum requirements and the strong commitment to training that they would represent, it is unlikely that the objectives of DFAS’ Career Development Plan will be achieved. 
Furthermore, if the department is committed to providing training and enhancing the quality of its financial management workforce, adequate funding will be made available for training. DOD also stated that tracking achievement of required training would be extremely difficult to implement. We disagree with DOD’s position. Some level of tracking is obviously necessary to monitor staff progress in meeting the requirements; however, developing and implementing a tracking system need not be a difficult task. For example, in the acquisition community, the planned tracking is the responsibility of the approximately 100,000 acquisition employees and their supervisors. Central management oversight tracking is being planned at the component level (about 50 components throughout DOD have acquisition personnel), not at the individual level. For DOD Inspector General audit staff, fulfillment of training requirements is monitored centrally and periodic reports are provided to staff and supervisors which show the status of training received at that point in the 2-year period. Also, although DOD’s response refers to the difficulty in levying penalties for employees not meeting annual training requirements, our recommendation does not call for DOD to levy penalties. The overall purpose of these recommended requirements is not to penalize staff or to create an adversarial relationship between the employee, the supervisor, and the organization, as DOD’s response implies. Rather, the purpose is to ensure that as many financial managers as possible are provided the up-to-date, technical training they need to carry out their responsibilities. Other functions that currently offer a continuous learning environment do not seek to penalize staff. For example, the DOD Inspector General’s office requires its audit staff to complete 80 hours of training over a 2-year period. 
A grace period of 2 months is provided for staff to complete this requirement if training has not been completed at the end of the 2-year cycle. In the acquisition community, the plans allow for waivers to be granted if a staff member is unable to meet the stated standard after an initial 3-month grace period. The waivers can be extended for an additional 2 years, if specified conditions are met. In addition, DOD’s response states that it may not be feasible or beneficial to specify training requirements for all financial managers at all levels. The DFAS plan, which our recommendation addresses, specifies those financial management job series and grade levels that are covered by the plan. The job series are Accounting (GS-510), Auditing (GS-511), Budget Analysis (GS-560), and Financial Administration and Program (GS-501). The grade levels are GS-7 through Senior Executive. The plan states that, as of July 1996, these job series and grade levels include nearly 5,500 positions at DFAS. These and comparable military positions are generally the same as the key financial managers included in our surveys, which we identified in conjunction with DOD. Further, DOD did not fully agree with our recommendation that the Under Secretary of Defense (Comptroller) develop and implement a formalized, structured training program for financial management personnel throughout the department in conjunction with the DOD Chancellor for Education and Professional Development. Rather, DOD stated that DFAS will be charged with developing a generic plan that can be used as a model, but that individual organizations should be allowed flexibility in implementing the career management plans. 
While it is appropriate for DFAS to develop a model or baseline plan for DOD’s financial management staff, the role of the Under Secretary of Defense (Comptroller) is critical in both the development and implementation of formalized, structured training programs for financial management personnel throughout the department. Although individual organizations may have to tailor such programs for the specific needs of their staffs, the oversight of the Comptroller is crucial to ensure that such training is consistent and as current as possible throughout the department. Moreover, as stated in the report, the CFO Act specifically charges agency CFOs with responsibility for overseeing the training of personnel to carry out agency financial management functions. In addition, regarding our recommendation that these training programs be developed in conjunction with the DOD Chancellor for Education and Professional Development, the Chancellor would not be able to carry out the responsibilities defined in the Secretary of Defense’s 1997 Defense Reform Initiative if he or she were not involved in the coordination and administration of the training of DOD’s financial managers. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days of the date of this report. You must also send a written statement to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this report. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the House Committee on Government Reform and Oversight and its Subcommittee on Government Management, Information, and Technology; and the Director of the Office of Management and Budget. We are also sending copies to the Director of the Defense Finance and Accounting Service; the Assistant Secretaries of the Air Force, Army, and Navy (Financial Management and Comptroller); and to the Under Secretary of Defense (Comptroller). Copies will also be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-9095. Major contributors to this report are listed in appendix IV.

In developing the information for this report, we drew upon the data gathered and summarized in each of the individual reports on the qualifications of financial managers in the Office of the DOD Comptroller, the Air Force, the Army, the Navy, and DFAS. We also drew upon the information gathered from our audit of the state governments and private sector companies that may be useful to DOD in assessing changes needed to enhance its financial management workforce. The audit work for these six GAO audits was performed in accordance with generally accepted government auditing standards from June 1996 through March 1998. To supplement the earlier reports, we obtained information from each service and DFAS about their plans to augment or improve the qualifications of their financial management personnel. We also reviewed prior audit reports from GAO and the DOD Inspector General.

We requested written comments on a draft of this report from the Secretary of Defense or his designee. The Under Secretary of Defense (Comptroller) provided us with written comments. These comments are reprinted in appendix III and are discussed in the “Agency Comments and Our Evaluation” section. 
Our recent reports covering DOD, the military services, and DFAS focused on the military and civilian personnel who fill key financial management positions. DOD and military service officials helped us identify 1,409 key financial management positions to be included in the surveys out of approximately 32,000 financial management positions throughout DOD. The positions surveyed most often included comptrollers, deputy comptrollers, and budget officers from the military services and accounting and finance managers from DFAS. Individually prepared responses were received from 884 (63 percent) of those surveyed. Table I.1 shows the breakdown, by agency, of the population of financial managers and of the respondents included in the surveys.

For state governments and Fortune 100 companies, we requested information on the qualifications of key financial management personnel from organizations closest in size and complexity to federal agencies. These organizations included the 25 largest state governments (based on revenues received in 1993, the latest information available at the time of the survey) and the 100 largest private corporations in the United States (based on 1995 revenues as reported in the April 29, 1996, issue of Fortune, the latest issue available at the time of the survey), commonly referred to as the “Fortune 100.” For the state governments, surveys were sent to the 25 state CFO/Comptroller offices. The 1993 revenues of the state governments responding to our survey ranged from $10.8 billion to $108.2 billion. Responses were received from 19 states, including 18 state comptroller offices (or their equivalent) and 67 operational departments within 19 of the surveyed states (one state comptroller office did not respond although one of its departments did respond). The responses, which represented 1,127 state government financial managers, were prepared and submitted by the various state government offices. 
For the Fortune 100 companies, surveys were sent to the CFO/Comptroller offices. The responding companies represented nearly all major industry groupings. The 1995 revenues of the private sector respondents ranged from $12.7 billion to $79.6 billion. Responses were received from 34 Fortune 100 companies and from 54 divisions or subsidiaries of these companies. The responses, which represented 3,450 private sector financial managers, were prepared and submitted by the various corporate offices.

The following is GAO’s comment on the Department of Defense’s letter dated June 16, 1998. 1. Discussed in the “Agency Comments and Our Evaluation” section of the report.

Glenn D. Slocum, Senior Evaluator
Pursuant to a legislative requirement, GAO reviewed the lessons learned from the results of its survey of selected large state governments and private-sector corporations that the Department of Defense (DOD) could use to augment its existing plans to upgrade the competencies of its key financial managers. GAO noted that: (1) a key lesson learned from its survey data is that many state government and private-sector organizations place a strong emphasis on training as a means of upgrading workforce knowledge of current financial management, accounting, and reporting requirements; (2) on average, key financial managers in the surveyed large state governments and private-sector organizations received 31 hours and 26 hours of training, respectively, in 1996--most of which was in technical accounting subjects; (3) some of the surveyed organizations had established training requirements for their financial personnel; (4) also, several organizations noted that their programs were designed, in part, in recognition of the training requirements that existed for employees holding professional certifications; (5) these approaches may be useful to DOD in addressing its financial management problems; (6) over half of the key DOD financial managers GAO surveyed--who all held leadership positions throughout DOD's network of financial organizations--had received no financial- or accounting-related training during 1995 and 1996; (7) these key personnel face the challenge of leading DOD's efforts to produce reliable financial data: (a) throughout a large complex DOD organization with acknowledged difficult financial deficiencies; and (b) that build upon existing requirements to include recent, more comprehensive accounting standards and federal financial management system requirements; (8) in addition, full implementation of the Government Performance and Results Act will require DOD financial personnel to provide information on cost data associated with DOD's program results; (9) 
technical financial- and accounting-related training to supplement on-the-job experiences of DOD's key financial managers is critical to ensuring that such accurate financial data are available; (10) the Secretary of Defense has stated in a recent major reform initiative that while the department is a world-class organization, it is rendering second-rate education, training, and professional development to its civilian employees; (11) moreover, the Defense Finance and Accounting Service (DFAS) is developing a plan intended to identify the kinds of skills and developmental activities DFAS financial personnel need to improve their competencies; and (12) in addition, DOD has not yet established a departmentwide focus with accountability to ensure that efforts to improve DOD's financial managers' training are effectively coordinated with the Secretary's broader training reform initiative.
The FSS program consists of 40 schedules providing access to almost 20,000 vendors offering a wide range of goods and services. GSA operates the FSS program under authority contained in the Federal Property and Administrative Services Act of 1949. For selected schedules, such as those covering a wide range of medical supplies, GSA has delegated to the Department of Veterans Affairs the authority to solicit, negotiate, award, and administer contracts. Figure 1 provides additional information about the 40 schedules maintained by GSA and the Department of Veterans Affairs. In fiscal year 2014, GSA reported its total FSS program sales as $33.1 billion, which, according to GSA officials, includes data not reported in FPDS-NG such as awards under the $3,000 micro-purchase threshold and those made by federal intelligence agencies and state and local governments. According to data we reviewed from FPDS-NG, the federal government obligated $25.7 billion through the FSS program in 2014—a decrease of 19 percent or $6 billion since 2010. This is consistent with overall federal obligations, which have declined at roughly the same rate. Since 2010, the proportion of total federal contracting obligations awarded under the FSS program has remained approximately the same—between 5 and 6 percent. While spending on both products and services has decreased under the FSS program, products have declined more than services—30 percent compared to 14 percent—with services making up an increasing proportion of obligations. Services comprised 73 percent of obligations in 2014, up from 69 percent in 2010. Figure 3 depicts the proportion of product and service obligations in relation to overall FSS obligations from fiscal years 2010 through 2014. The FSS program can be used to meet a broad variety of government requirements. Agencies can procure relatively simple items—for example, the sample of orders we reviewed included musical instruments, sleeping bags, foam cups, and copier and printer maintenance. 
However, our sample also included large and complex procurements—for example, a $123 million order for development of a human resources information system, a $91 million order for in-person consumer support services, and a $66 million order for enterprise-level technology support. In fiscal year 2014, three product or service categories accounted for 70 percent of all FSS obligations: 38 percent ($9.8 billion) were spent on professional, management, and administrative support services; 20 percent ($5.1 billion) on information technology and telecommunications services; 12 percent ($3.1 billion) on information technology products—equipment, software, and supplies. (See figure 4.) The “other” category includes, for example, office supplies, which account for only 0.5 percent ($129 million) of FSS obligations. Part of the appeal of the FSS program is that it provides access to a large pool of potential contractors, but the number of vendors varies significantly by schedule. For instance, in 2014, Schedule 751 (Leasing of Automobiles and Light Trucks) had 8 vendors while Schedule 70 (General Purpose Commercial Information Technology Equipment, Software, and Services) had 4,789 vendors. Each schedule is composed of categories, called special item numbers, that group similar products, services, and solutions together. A vendor with a schedule contract does not necessarily offer all of these categories of goods and services available within that schedule. Therefore, there may be fewer vendors within one schedule depending on the specific category of goods or services being purchased. For instance, while the schedule for human resources and equal employment opportunity services has 276 vendors, only 4 vendors provide the specific category of private shared service centers for core human resources services. We found that, although some schedules have a large pool of vendors, most of the obligations on orders through those schedules go to a smaller subset of vendors. 
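One way to quantify this kind of concentration is to rank vendors by obligations and find the smallest set that reaches a given share of the total. A minimal sketch in Python, using hypothetical vendor figures rather than actual FPDS-NG data:

```python
# Sketch: measure vendor concentration on a schedule by finding the
# smallest set of vendors that accounts for a target share of obligations.
# The vendor amounts below are hypothetical, not actual FPDS-NG figures.

def concentration(obligations, share=0.80):
    """Return (n_vendors, fraction_of_vendors) needed to reach `share`
    of total obligations, counting vendors from largest to smallest."""
    ranked = sorted(obligations, reverse=True)
    total = sum(ranked)
    running, n = 0.0, 0
    for amount in ranked:
        running += amount
        n += 1
        if running >= share * total:
            break
    return n, n / len(ranked)

# Hypothetical schedule: a few large vendors and a long tail of small ones.
sample = [1000, 800, 600, 400, 200] + [5] * 95  # 100 vendors
n, frac = concentration(sample)
# With this data, 4 vendors (4 percent) account for 80 percent of obligations.
```

With a long tail of small vendors, a handful of large ones can absorb most of the obligations, which is the pattern the FPDS-NG figures show for the largest schedules.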
For instance, 308 of the 4,789 vendors under Schedule 70—or 6 percent—received 80 percent of all obligations from fiscal years 2010 through 2014. Figure 5 shows the concentration of work received for the three schedules with the highest obligations in fiscal years 2010 through 2014. According to FPDS-NG data, 75 percent of FSS obligations were coded as competitive in fiscal year 2014. However, only 40 percent of these obligations were on orders for which the government received three or more quotes while 35 percent of obligations were on orders with only one or two quotes. FAR procedures generally emphasize reviewing three pricelists, soliciting three quotes, or attempting to obtain at least three quotes from vendors, and orders placed that follow these procedures are considered to be issued using full and open competition. The FAR also describes steps to take when awarding noncompetitive orders. Twenty-five percent of obligations were coded as noncompetitive, of which 49 percent were coded as orders where only one source could perform the work (see figure 6). Our selected sample of 60 FSS orders included 23 competed orders—8 below and 15 above the SAT—for which agencies received three or more quotes or reviewed three pricelists—a number frequently mentioned in the FAR. The agencies obtained three or more quotes from vendors using a variety of approaches. For proposed orders below the SAT, according to the FAR, the contracting officer must either review pricelists from, request quotes from, or provide the solicitation to at least three schedule contractors. For instance, we found: A GSA contracting officer awarded a $10,000 order on behalf of the Air Force for office file equipment by reviewing prices listed on GSA’s website, requesting—and receiving—quotes from four vendors, and then comparing the prices they offered to select the vendor with the lowest price. The Army placed a $23,000 order for steamers and fryers by posting a solicitation on eBuy for 5 days. 
The contracting officer, who received four quotes in response, selected the lowest-priced quote that was technically acceptable. Contracting officers used similar methods to obtain three or more quotes above the SAT. For these orders, the government must either post a request for quotation on eBuy—which provides all eligible vendors access to the solicitation—or take measures to reasonably ensure that agencies receive at least three quotes from contractors that can fulfill the requirement—such as sending the solicitation directly to a subset of vendors via email. Contracting officials posted solicitations on eBuy for 12 of the 15 orders in our sample where three or more quotes were received and directly solicited vendors via email for only 3 orders. Examples from our sample that illustrate these methods include: The Army used eBuy to compete a $19.9 million order for communications support and infrastructure in Kuwait and Afghanistan. Nine vendors submitted quotes and the vendor with the lowest-priced quote that was technically acceptable won the award. To compete a $19.9 million order for technical and professional information technology services at NIH, the contracting officer emailed the solicitation to 22 potential vendors, 12 of which submitted quotes. NIH found 4 of these technically acceptable, but 1 of these 4 did not comply with the terms of the solicitation, leaving 3 competitors. Competitors were evaluated on past performance, technical capability, and price. In some cases, although three or more quotes were received, after preliminary evaluation, agencies actually considered fewer than three quotes either because some of the quotes were not technically acceptable or because they did not comply with the terms of the solicitation. For example, GSA, on the behalf of the Army, received 10 quotes in response to a solicitation for a $66 million award for information technology support. 
However, 9 quotes were deemed not technically acceptable and only 1 quote was considered eligible for award. Agencies received only one or two quotes for 23 of the 60 FSS orders in our sample. We identified various factors that may have influenced why agencies received fewer quotes. According to FPDS-NG data, in fiscal year 2014, HHS received only one or two quotes for orders accounting for 51 percent of its total FSS obligations. This compares with 35 percent government-wide. DOD and GSA received one or two quotes for 35 and 32 percent of total FSS obligations, respectively. The higher percentage of FSS obligations where HHS received only one or two quotes suggests that HHS may be missing opportunities to maximize competition, as specified in the FAR. Our sample included nine orders for which HHS received only one or two quotes. For eight of these nine HHS orders, we found HHS officials narrowed the pool of competitors by soliciting six or fewer selected vendors, in contrast to the other agencies in our review that favored eBuy. For example, to award a $2.4 million order for logistical and administrative support, contracting officials conducted market research to identify five vendors that were capable of performing the work and emailed the solicitation only to these vendors rather than releasing it to over 100 potential vendors via eBuy. The officials explained they could have potentially received hundreds of responses, which they would not have been able to adequately review in a timely manner, and they were fairly confident that all five vendors would submit quotes. However, only one quote was received in response to this solicitation. Other HHS officials told us that narrowing the number of vendors solicited is a practice that they use to compete common items or services. 
Although this practice is consistent with FAR procedures, the relatively higher percentage of HHS obligations on orders for which the government received only one or two quotes in fiscal year 2014 suggests that HHS contracting officers may not be putting enough emphasis on ensuring that three or more quotes are received when competing orders. Further, in two of these cases, officials missed an opportunity to evaluate whether this practice of limiting the number of vendors solicited was a reasonable approach before proceeding to award the orders. For proposed orders above the SAT, if requests for quotations are not posted to eBuy, agencies must write a determination explaining the efforts made to obtain at least three quotes from contractors and that no additional contractors capable of providing the necessary goods and services could be identified despite reasonable efforts to do so. In two cases at HHS, contracting officials did not prepare these determinations and told us they were not aware of this requirement. Although this issue was not widespread within HHS, it suggests that HHS officials may not be fully aware of their responsibilities with respect to FSS ordering procedures. Our sample also included another 14 orders for which agencies received only one or two quotes, 8 awarded by the Army and 6 awarded by GSA. In almost half of these cases, we found that one reason the government received fewer quotes was because the goods and services needed by the agency were only provided by a small number of vendors. For example: A $446,000 Army order to replace portions of a targeting system was solicited on a brand name or equal basis—meaning that the agency sought either a specific brand or items with the same features as the brand name. According to a contracting official, typically only the vendor that provided the system could perform repairs. 
However, because in this instance larger portions were being replaced—rather than just repaired—officials thought there was a possibility of obtaining more than one quote. Therefore, they competed the order rather than awarding it noncompetitively, but only the incumbent submitted a quote. In another case, GSA awarded a $123 million order to provide a web-based human resources solution. According to the acquisition plan, only four vendors offered this category of goods and services under their schedule contracts, so at most four vendors could have responded to the solicitation, which was posted for 45 days. However, ultimately only two vendors responded. The contracting officials selected the winning vendor based on technical approach, management approach, past performance, and price, in accordance with the solicitation. We also found one example where the Army made errors in the solicitation process, which may have limited competition. In this case, we reviewed a $23,000 Army order for a filing system where the Army issued the solicitation using a reverse auction process that did not solicit any vendors that sold the type of item being purchased. The solicitation mistakenly specified a brand name item only sold by one vendor that was not included among the small businesses notified of the reverse auction. Contracting officials stated the solicitation should not have been limited to the brand name, as they are aware of other vendors on the GSA schedule that sell similar products. The Army received no bids in the reverse auction and ultimately placed the order with the brand name manufacturer under the vendor’s FSS contract. In some cases, contracting officials told us that they did not know why more vendors did not submit quotes. In 2014, we recommended that DOD establish guidance for contracting officers to assess and document the reasons only one offer was received on competitive solicitations to enhance competition. 
DOD implemented this recommendation by requiring contracting officials to ask vendors who had expressed an interest in the solicitation why they did not submit an offer. In the 60 FSS orders we reviewed, we found that the relationship between the length of time vendors had to respond to competitive solicitations and the number of quotes received varied depending on the individual circumstances of each award. We identified examples where three or more quotes were received even though only a short time was allowed for responses. For example, an Army solicitation for a $367,000 order for dining facilities equipment was open for 3 days and five technically acceptable quotes were received. Conversely, for a $90 million order associated with the Affordable Care Act, HHS sent a solicitation to five vendors, initially giving them 19 days to submit quotes. One vendor requested an extension to the solicitation period, which the agency declined. Most of the vendors informed HHS they did not plan to submit quotes, with three noting the short solicitation time period as a reason. Near the end of the solicitation period, HHS extended the solicitation by 17 days. Ultimately, two vendors submitted quotes, but the vendor that had initially requested more time did not and informed HHS that the extension was granted too late in the process to allow them adequate time to prepare a quote. To maximize savings that are obtained through competition, in 2010, DOD directed that when only one offer is received in response to a competitive solicitation that was open for fewer than 30 days, generally contracting officials should take additional steps to promote competition. These steps include readvertising solicitations for at least 30 additional days. We found one Army solicitation that was open for fewer than 30 days but not resolicited, contrary to DOD regulation. 
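The DOD direction described here reduces to a simple screen over order data. A sketch in Python; the order records and field names are hypothetical:

```python
# Sketch of the DOD one-offer rule: a competitive solicitation open fewer
# than 30 days that draws only one offer should generally be resolicited
# for at least 30 additional days. Records below are hypothetical.

def needs_resolicitation(offers_received, days_open, competitive):
    return competitive and offers_received == 1 and days_open < 30

orders = [
    {"id": "A-1", "offers": 1, "days_open": 14, "competitive": True},
    {"id": "A-2", "offers": 1, "days_open": 45, "competitive": True},
    {"id": "A-3", "offers": 3, "days_open": 10, "competitive": True},
]
flagged = [o["id"] for o in orders
           if needs_resolicitation(o["offers"], o["days_open"], o["competitive"])]
# flagged is ["A-1"]: one offer received, solicitation open only 14 days
```

A screen like this only flags candidates; contracting officers would still document why an order was or was not resolicited.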
Army contracting officials attributed this lapse in proper procedure to customer pressure to place the awards rather than resolicit and also to inexperienced staff. In May 2014, we reported that contracting officers with whom we spoke generally did not believe that the length of time a solicitation was open was a factor in receiving only one offer, particularly because all of the contracts we reviewed for that report were open for 30 days. In that report, we found that acquisition planning activities could help encourage multiple offers and we recommended that DOD ensure that existing acquisition planning guidance promotes early vendor engagement and allows both the government and vendors adequate time to complete their respective processes to prepare for competition. We reviewed 14 orders—5 each at Army and GSA and 4 at HHS—that were awarded noncompetitively. We found that each of these noncompetitive orders was based on reasons allowed by the FAR—only one source is capable of meeting the need, urgency, or follow-on. For example: GSA awarded a $32.1 million award for the renewal of licenses for imaging applications used in Air Force platforms on the basis of there being only one source capable of meeting the need. According to the justification, only one vendor makes the software and the pursuit of alternatives would cause delays associated with testing and training, as well as duplication in costs. The Army awarded a $317,000 order on the basis of an urgent and compelling need to provide support to soldiers deployed in Hurricane Sandy relief efforts because the vendor was already on-site and competitive procedures would have resulted in unacceptable delays. GSA awarded an $18.6 million order on behalf of the Army as a follow-on to a previously competed order to provide testing and engineering support services. The Army intended to compete this work but, after the competition was delayed, GSA awarded the order noncompetitively to prevent a lapse in service. 
For noncompetitive orders above the SAT, the FAR requires that agencies prepare a written justification that includes specific content, including an explanation of why the government could not compete the award, and make the justification publicly available. We reviewed 13 noncompetitive orders above the SAT. In one case—a $505,000 HHS order for maintenance of biological specimens—HHS officials did not prepare the required justification. HHS officials told us this was an oversight because they had prepared a justification for a related procurement that they mistakenly applied to this order as well. For the 12 remaining noncompetitive orders above the SAT, agencies prepared justifications as required, yet we identified a variety of issues with most of these justifications. Issues included: late approval, no evidence that justifications were made publicly available, and citing of incorrect FAR authorities. For example, the justification for a $360,000 Army noncompetitive order for a proprietary closed-circuit television security system used the format for justifications for open market procurements rather than for FSS orders and therefore did not use the correct FAR citation and included a fair and reasonable price determination instead of a best value determination. Further, officials confirmed it was never made publicly available as required. When interviewed, officials expressed confusion about the requirements for noncompetitive schedule orders and stated that they are in the process of developing new training to address this problem. The variety of issues we found suggests that agency officials may not be fully familiar with the ordering procedures for noncompetitive orders, which points to the need for more training and guidance on the current regulations. Our analysis of how agencies assessed prices for the 60 orders in our sample showed that agencies are not paying sufficient attention to prices for goods and services under FSS orders. 
Although GSA makes a determination that the prices established on FSS contracts are fair and reasonable, prices for the same item or service can vary widely from one schedule contract to the next, making it important that agencies effectively assess prices when the orders are awarded. However, we found that ordering agencies are not always paying sufficient attention. For example, contracting officers did not consistently seek discounts from schedule prices, even in situations when it was required. Although vendors frequently offered discounts for competitive orders, they were less likely to do so when contracting officers did not seek discounts for noncompetitive orders. In some cases, contracting officers purchased items not on the schedule contract without performing a separate price or cost analysis, as required by the FAR, or did not obtain sufficient information to determine whether the item was on the schedule. Agency officials we spoke with noted that some of these problems stem from inexperienced staff who are unfamiliar with the schedule ordering procedures. GSA has recognized that more needs to be done and is taking steps to increase the information available on prices paid by proposing changes to the data vendors are required to report. When establishing a FSS contract with a vendor, GSA must make a determination that the prices under the contract are fair and reasonable, but FSS prices can vary widely—as they do on the open market—for certain products. For example, schedule prices for one brand of 8-ounce foam cups we examined varied from $17.54 per box of 1000 cups to $50.72 per box of 1000 cups. In the open market, we identified sources selling this brand for $17.34 to $38.99 per box of 1000 cups, although these prices did not account for shipping costs. This amount of variation in prices, particularly for the schedule prices, underscores the need for agencies to ensure that they are obtaining the best value when placing orders under FSS contracts. 
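The spread in the cup prices above is easier to see when normalized to a common unit. The arithmetic in Python; the prices are the ones cited above, while the vendor labels are hypothetical:

```python
# Schedule prices cited above for one brand of 8-ounce foam cups,
# per box of 1,000 cups. Vendor labels are hypothetical.
schedule_prices = {"vendor_a": 17.54, "vendor_b": 50.72}

low = min(schedule_prices.values())
high = max(schedule_prices.values())
spread = high / low               # highest schedule price is ~2.9x the lowest
per_cup_low = 100 * low / 1000    # about 1.75 cents per cup
per_cup_high = 100 * high / 1000  # about 5.07 cents per cup
```

Even for a commodity item, the highest schedule price is nearly three times the lowest, which is why order-level price comparison matters.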
In the case of the foam cups, which were purchased through an order in our sample, GSA competed a purchase of 610 boxes of cups and received a low quote of $16.10 per box of 1000 cups, which included a discount from the vendor’s FSS price. We also found that vendor practices vary when establishing FSS pricing and discounts, enough so that one vendor may offer a larger percentage discount than another, but the deeper discount does not always lead to the lowest price. Contracting officers told us that some vendors set their FSS price as a ceiling and routinely discount prices for orders, while others offer their best price upfront on the FSS contract and do not offer subsequent discounts. For one order in our sample, GSA issued a solicitation for 200 brand name telephones which elicited two viable quotes. While one vendor offered a higher percentage discount from its FSS contract price than the other—35 percent versus 5 percent—the heavily discounted price was still much higher than the less discounted price. The vendor’s quote with only a 5 percent discount was lower by approximately $100 per telephone. In our sample of FSS orders that were over the SAT, we found a significant number of cases—16 out of 45 orders—in which contracting officers did not seek discounts from FSS prices, as required by the FAR. In all 16 cases the ordering activities may have missed opportunities for savings. Contracting officers sought discounts for 26 orders, and we could not confirm whether discounts were sought in 3 cases. Vendors offered a discount off the schedule prices for most orders in our sample. (See table 1.) Among the 60 FSS orders we reviewed, we found significant variation in how contracting officials assessed prices. 
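The telephone order above shows why the size of a discount, by itself, says little about the final price: what matters is the list price the discount is taken from. A sketch with hypothetical list prices chosen to be roughly consistent with the $100-per-phone gap reported:

```python
# Hypothetical FSS list prices; only the 35% and 5% discount figures and
# the roughly $100-per-phone gap come from the order described above.

def discounted(list_price, discount_pct):
    return list_price * (1 - discount_pct / 100)

quote_a = discounted(700.00, 35)  # deep discount from a high list price
quote_b = discounted(370.00, 5)   # small discount from a low list price
gap_per_phone = quote_a - quote_b  # about $103: the 5% quote still wins
```

Comparing final unit prices, rather than discount percentages, avoids this trap.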
Of greatest concern are a number of instances—9 out of 60—where agency officials paid insufficient attention to pricing by not following procedures to assess the prices for open market items or by not having sufficient information to ensure they were paying schedule prices. Open market items are goods and services that are not on the schedule. For administrative convenience, a contracting officer may include open market items on a schedule order; however, he or she must follow all applicable acquisition regulations, including determining that the price for the open market items is fair and reasonable, and items must be clearly labeled on the order as not on the schedule. When prices for open market items are not assessed, the government is at risk of paying more than it should, in part because GSA has not made a fair and reasonable price determination for these items. Some of the instances in our sample when open market items were not assessed include the following: In an HHS order for $409,000 in laboratory equipment, the contracting officer bought $47,000 worth of open market items without conducting a price analysis. The contracting officer was not aware that she had purchased open market items because the first item listed in the quote was marked as an FSS item so she assumed everything in the order was under the FSS contract. The contracting officer told us she did not compare the vendor’s quote to the vendor’s schedule contract for this order. The vendor told us that they were not aware this order would be awarded under their schedule contract which is why they proposed some open market items. For a $358,000 order for a security system, the Army purchased over $80,000 in open market items without conducting a price analysis even though the vendor clearly marked which items were open market in its quote. Contracting officials stated that they should have conducted more analysis including checking for open market items. 
Due to a lack of training on procedures for open market items, contracting officials assumed that every item offered was an FSS item. For a $388,000 order for firing range bleacher enclosures, the Army did not assess prices for $114,000 of open market items because the Army accepted a lump sum quote with no breakdown of the FSS and open market components. The Army did not require the vendor to provide sufficient information to identify which items were and were not on the schedule and so could not have known that they were purchasing open market items. Standards for Internal Control in the Federal Government state that management needs to identify appropriate knowledge and skills needed for various jobs, and provide needed training, as part of a commitment to competence. Internal control activities—such as policies and procedures—also help ensure that management directives are carried out. Further, the government is at risk of paying more than it should when contracting officers do not ensure they are paying the prices established by GSA. In a $150,000 HHS order to conduct a disease study abroad, the vendor offered and the government accepted labor rates that were not on the vendor’s FSS contract. The vendor told us that the solicitation did not mention use of FSS, so its quote was not based on its FSS contract. Seven months after the order was awarded, the vendor noticed the work was issued under FSS and requested that HHS use the labor rates established in its FSS contract, which were less expensive than the rates offered previously. In many of the orders we reviewed, contracting officers largely depended on GSA’s previous determination that the prices on the vendor’s schedule contract were fair and reasonable, as is described in the FAR. In one instance—a $13.9 million Army award for testing a weapon system—contracting officials compared the one quote received to the vendor’s schedule and noticed that the vendor did not use its schedule pricing. 
The contracting official asked the vendor to revise and resubmit its quote using schedule prices. We also found instances where contracting officials conducted additional price analysis beyond comparing offered prices to the schedule prices to ensure they were getting the best value. Some contracting officials stated that they conducted additional price analysis because they obtained only one quote and did not have the benefit of multiple competitive quotes for comparison. For example: In a $408,000 GSA order for a docking system, the contracting officer compared the quote to the schedule and found an item’s price was $7,500 greater than the listed schedule price. The contracting officer requested and was granted an adjustment to the price to reflect the lower schedule price. After receiving only one quote, the contracting officer conducted additional price analysis of the vendor’s quote by requesting historical pricing information from previous government contracts, and confirmed that the vendor’s quote was in line with its historical pricing. In a $1.1 million noncompetitive order for office furniture and installation services, GSA contracting officials compared the vendor’s rates against its FSS contract, and identified inclusion of work not listed in the solicitation. At the request of GSA, the vendor corrected this error. To determine whether the vendor’s prices were fair and reasonable, the contracting officers conducted additional price analysis by comparing prices from other FSS vendors for similar or the same products as the bidding vendor. When only one offer is received for competitive solicitations, DOD generally requires additional steps to establish that the prices of supplies and services are fair and reasonable. In 2014, DOD established a new requirement for contracting officers to make their own price reasonableness determination for all FSS orders, which DOD officials told us was intended to encourage contracting officers to seek better prices. 
DOD ordering activities may no longer rely only on GSA’s price reasonableness determinations. Currently, ordering agencies generally do not have insight into prices previously paid by other federal agencies for a similar product or service under similar terms and conditions. This limits the government’s ability to fully leverage its buying power. To address this problem, some agencies have tools that provide pricing data for items they purchased previously. For example, an Army contracting officer used the Federal Logistics Information System as a source of information for prices previously paid. However, this information is limited to certain agencies. Without greater insights into past purchasing, agencies risk paying higher prices and missing opportunities to obtain discounts. GSA does not have access to prices being paid for schedule orders because vendors do not currently report this information. Currently, contracting officers must ensure that FSS contracts require vendors to submit quarterly reports to GSA providing their FSS sales totals and the associated fee that GSA is owed. In addition, the price reductions clause requires vendors to provide the government the same price reductions given to their most favored customer. In place of the quarterly reports and the price reductions clause, GSA is proposing revisions to its regulations that will include clauses that would require vendors to provide data on the prices paid at the order level and plans to have agencies use this data to compare prices for similar goods and services. GSA anticipates that the data on prices paid would reduce the risk of agencies paying higher prices; reduce price variation for similar products and services on FSS; and allow agencies to conduct meaningful price analysis and more effectively validate fair and reasonable pricing. For FSS vehicles, the new data reporting requirement would be introduced in phases, beginning with a pilot for select products and commoditized services. 
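In concept, order-level prices-paid data would let a contracting officer benchmark a new quote against what the government previously paid for the same item. A minimal sketch under that assumption; the figures and the 25 percent review threshold are hypothetical:

```python
# Sketch: benchmark a new quote against a prices-paid history for one item,
# the kind of comparison GSA's proposed data collection would support.
# All figures and the 25% review threshold are hypothetical.
from statistics import median

prices_paid = [16.10, 17.54, 18.25, 19.00, 22.40]  # past order prices
quote = 24.75                                      # new vendor quote

benchmark = median(prices_paid)             # typical price previously paid
premium = (quote - benchmark) / benchmark   # ~36% above the benchmark
needs_scrutiny = premium > 0.25             # flag for additional analysis
```

A benchmark like this does not replace a fair and reasonable price determination; it gives the contracting officer a starting point for seeking a discount or asking the vendor to justify its price.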
GSA officials told us that creating comparable data for services will be challenging and is part of their longer-term initiative. In April 2015, at a public meeting to discuss the proposed regulation change, industry and some government representatives expressed significant concerns about the feasibility of GSA's proposal and its impact on both vendors and the government's ability to manage risk. Other government representatives expressed support for GSA's proposed changes. The FSS program is an important tool for agencies to obtain goods and services through a simplified procurement method, but it must be used properly to ensure that the government is obtaining a good price and competition to the maximum extent possible. At the individual order level, some contracting officers are using effective strategies to increase competition or seek discounted prices. However, the extent of issues we found as a result of contracting staff who are not aware of the requirements—from errors in justifications for noncompetitive orders on the schedule, to discounts not being sought when required, to open market items purchased without assessing prices—suggests that more guidance and training are necessary to ensure proper use of the program. When contracting officers do not seek discounts for FSS orders, the government may be missing opportunities for cost savings. When contracting officers do not evaluate prices for open market items, the government cannot be sure that items are being purchased at a fair and reasonable price. Further, the high percentage of obligations on orders for which HHS received only one or two quotes suggests more attention is needed to ensure that the practice of narrowing the pool of vendors at HHS is not limiting the agency's ability to receive the full benefits of competition. 
To help ensure contracting officers follow ordering procedures when using FSS, and to enhance internal controls, we recommend that the Secretaries of DOD and HHS and the Administrator of GSA take the following three actions: Issue guidance emphasizing the requirement to seek discounts and outlining effective strategies for negotiating discounts when using the FSS program; Issue guidance reminding contracting officials of the procedures they must follow with respect to purchasing open market items through the FSS program, including the requirement to perform a separate determination that the prices of these items are fair and reasonable; and Assess existing training programs to determine whether they are adequate to ensure that contracting officials are aware of the ordering procedures of the FSS program, including requirements to 1) properly prepare justifications for noncompetitive awards, 2) seek discounts, and 3) assess prices for open market items included in FSS orders. To help foster competition for FSS orders consistent with the FAR, we recommend that the Secretary of HHS take the following action: Assess reasons that may be contributing to the high percentage of orders with one or two quotes—including the practice of narrowing the pool of potential vendors—and if necessary, depending on the results of the assessment, provide guidance to help ensure contracting officials are taking reasonable steps to obtain three or more quotes above the SAT. We provided a draft of this report to DOD, GSA and HHS. All three agencies provided written comments and concurred with our recommendations to issue guidance emphasizing the requirement to seek discounts, issue guidance reminding contracting officials of procedures for open market items, and assess the adequacy of training programs related to FSS ordering procedures. In addition, HHS concurred with our recommendation to assess the reasons contributing to the agency’s high percentage of orders with one or two quotes. 
The agency comments are discussed below and reproduced in appendixes IV, V, and VI. HHS also provided technical comments, which we incorporated as appropriate. In its written response, DOD stated that it will issue guidance by the end of July 2015 emphasizing the requirement to seek discounts and reminding contracting officials of the procedures to follow when purchasing open market items. In addition, the agency said it will assess existing training programs to determine whether they are adequate to ensure awareness of FSS ordering procedures by October 2015. GSA stated that it is developing a comprehensive plan to address our recommendations. In addition, the agency described its recent and planned efforts to reduce price variability of FSS contract prices for similar or identical items. Regarding our first two recommendations, HHS stated that it will issue acquisition alerts emphasizing the requirement to seek discounts and the procedures required when purchasing open market items. The agency noted that information from available GSA guides, training and tools will also be included in the alerts and that this information will be integrated into its ePortal, which houses internal policies, guidance and instructions. Regarding our recommendation to assess existing training programs, HHS stated it will issue an acquisition alert reminding contracting officers of the required procedures to comply with federal guidance. In response to our fourth recommendation, HHS stated that it had already begun to assess the reasons contributing to a high percentage of orders for which the agency received two or fewer quotes and the practice of narrowing the pool of vendors. Once the assessment is complete, the agency said it will issue an acquisition alert providing guidance to ensure that contracting officials are taking reasonable steps to obtain three or more quotes. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Defense and Health and Human Services and the Administrator of General Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact William T. Woods at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. The objectives of this review were to address (1) how and to what extent the government is using Federal Supply Schedule (FSS) contracts to order goods and services; (2) factors influencing the degree of competition for FSS orders; and (3) the extent to which agencies examine prices to be paid for FSS orders. To determine how the government is using the FSS program, we analyzed data in the Federal Procurement Data System-Next Generation (FPDS-NG), which is the government’s procurement database, to identify total obligations through the FSS program for fiscal years 2010 through 2014, the most recent data available at the time of our review. We adjusted all obligations for inflation using the Gross Domestic Product price index and reported all data in fiscal year 2014 dollars. We included orders off contracts coded as “FSS” and blanket purchase agreement orders coded as FSS blanket purchase agreements to identify those that were awarded through FSS. We compared FPDS-NG’s data on FSS obligations to a list of FSS contracts, including contract numbers, from the General Services Administration’s (GSA) eLibrary. 
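The inflation adjustment described above can be sketched in a few lines of code. This is an illustrative computation only, not GAO's actual methodology; the index values below are hypothetical placeholders rather than official Bureau of Economic Analysis figures.

```python
# Illustrative sketch: convert nominal obligations to fiscal year 2014 dollars
# using a GDP price index. Index values are hypothetical placeholders, not
# official Bureau of Economic Analysis data.
GDP_PRICE_INDEX = {2010: 96.1, 2011: 98.1, 2012: 100.0, 2013: 101.6, 2014: 103.4}

def to_fy2014_dollars(nominal_obligations, fiscal_year):
    """Rescale nominal obligations so amounts from different fiscal years
    are comparable in fiscal year 2014 dollars."""
    return nominal_obligations * GDP_PRICE_INDEX[2014] / GDP_PRICE_INDEX[fiscal_year]
```

Applying the same rescaling to every fiscal year's obligations is what makes multiyear totals, such as the fiscal year 2010 through 2014 figures reported here, directly comparable.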
In addition, we compared FPDS-NG data to contract file documentation for a nongeneralizable sample of orders and, based on this comparison, determined that the FPDS-NG data were sufficiently reliable for our purposes. We analyzed data on total federal contracting obligations as well as obligations awarded under the FSS program. Additionally, we reviewed the obligations awarded under the FSS program to assess the obligations on products compared with obligations on services. We did not independently assess whether orders were correctly coded as either product or service obligations. We also analyzed the product and service obligations to determine which product and service categories accounted for the most obligations in fiscal year 2014. We used GSA’s list of FSS schedules and contracts as of April 2014 to determine the corresponding schedule for each contract where possible. However, we were not able to match all orders and calls to a particular schedule. Within each of the three schedules that received the highest overall obligations in 2010 through 2014, we used FPDS-NG data to analyze the overall number of vendors on the schedule, as well as how many vendors received at least one award in the 5-year period and how many vendors received 80 percent of obligations. To assess the extent of competition for FSS orders, we analyzed FPDS-NG data using the field titled “Fair Opportunity/Limited Sources” to categorize obligations by competition status. Competitive orders include those coded in FPDS-NG under “competitive set aside” or “fair opportunity given”; and noncompetitive orders include those coded under “urgency”, “only one source – other”, and “follow-on action following competitive initial action”. In addition, we categorized as noncompetitive orders coded under “minimum guarantee”, “other statutory authority”, and “sole source”. 
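The categorization described above can be sketched as a simple mapping keyed on the FPDS-NG "Fair Opportunity/Limited Sources" field. The code strings below follow the report's descriptions; the exact values stored in FPDS-NG may differ, so treat this as an assumption-laden sketch rather than a faithful rendering of the database schema.

```python
# Sketch of the competition-status categorization, keyed on the FPDS-NG
# "Fair Opportunity/Limited Sources" field. Code strings follow the report's
# descriptions; actual FPDS-NG values may be formatted differently.
COMPETITIVE_CODES = {"competitive set aside", "fair opportunity given"}
NONCOMPETITIVE_CODES = {
    "urgency",
    "only one source - other",
    "follow-on action following competitive initial action",
    "minimum guarantee",
    "other statutory authority",
    "sole source",
}

def competition_status(fair_opportunity_code):
    """Classify an order's obligations as competitive or noncompetitive."""
    code = fair_opportunity_code.strip().lower()
    if code in COMPETITIVE_CODES:
        return "competitive"
    if code in NONCOMPETITIVE_CODES:
        return "noncompetitive"
    return "uncategorized"
```

Summing obligations within each bucket then yields the competitive versus noncompetitive totals discussed in this report.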
To assess the factors influencing competition and the extent to which agencies examined prices to be paid for FSS orders, we selected a nongeneralizable sample of 60 FSS orders under GSA schedules awarded in fiscal year 2013, the most recent complete year of data when we began our review. In order to allow comparability and price analysis across our sample, we did not include orders under schedules awarded by the Department of Veterans Affairs, which primarily offer medical and health-related items. We also excluded blanket purchase agreements, which do not obligate funding, and the calls upon them, which do not follow the same ordering procedures as FSS orders. Using FPDS-NG data, we identified the three agencies with the highest obligations on FSS orders under GSA schedule contracts in fiscal year 2013: the Department of Defense (DOD), GSA, and the Department of Health and Human Services (HHS). We further identified the components within each of these departments with the highest fiscal year 2013 obligations through FSS orders: the Army at DOD, the Federal Acquisition Service at GSA, and the Centers for Medicare & Medicaid Services and the National Institutes of Health (NIH) at HHS. Together, the Centers for Medicare & Medicaid Services and NIH obligated approximately the same amount as the Army or the Federal Acquisition Service in fiscal year 2013, so we included both and selected an equal number of orders at each. For logistical convenience and to obtain a variety of products and services in our sample, within the Army, we selected orders at Army Materiel Command and the National Guard Bureau, which are two of the largest Army users of the FSS program. To assess the ordering practices for different types of orders, we selected orders from four categories based on competition status and dollar value, using coding in FPDS-NG. 
The four competition categories are: (1) orders above $150,000, which is generally the simplified acquisition threshold (SAT), that were competed and for which three or more quotes were received; (2) those above the SAT that were competed and for which one or two quotes were received; (3) those above the SAT that were noncompetitive; and (4) competed orders below the SAT. We did not include awards coded as noncompetitive with values below the SAT because many of the requirements for noncompetitive awards do not apply below the SAT, particularly the requirement for a justification and approval document. We then selected our nongeneralizable sample of 60 orders by using a combination of cluster and convenience sampling to select 20 orders from each of the three agencies—with 5 orders in each of the four competition categories. To do so, in each category, we selected the two largest orders, using the "base and all options" field in FPDS-NG, and three orders from the middle of the dollar value range by dividing the list of orders into thirds and selecting from the middle third. We selected orders to obtain a mix of product and service categories purchased off a variety of schedules and to include the most heavily used schedules, measured by number of orders and dollars obligated. When reviews of contract files revealed miscodings, we included the order under review in the appropriate category. However, our initial selection included five awards that were blanket purchase agreement calls, but not coded as such in FPDS-NG, and one that was a requisition for which there was no contract file to review. We replaced these six awards with new selections. While we selected 15 orders coded as belonging to each competition category, upon review of contract file documentation, our sample included totals as shown in table 2. 
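The per-category selection rule described above can be sketched as follows. This is a simplified illustration under our own assumptions: the function name is ours, not GAO's, and the actual selection also balanced product and service mix across schedules rather than mechanically taking the first three middle-third orders.

```python
def select_sample(order_values):
    """Simplified sketch of the per-category selection: take the two largest
    orders by the "base and all options" value, then three orders drawn from
    the middle third of the dollar-value range."""
    ranked = sorted(order_values, reverse=True)
    largest_two = ranked[:2]
    # Split the ranked list into thirds and keep the middle third.
    third = len(ranked) // 3
    middle_third = ranked[third:2 * third]
    # Take three orders from the middle third (here, simply the first three;
    # the report's selection also considered product/service mix).
    return largest_two + middle_third[:3]
```

For a category of 30 orders valued 1 through 30, this yields the two largest orders (30 and 29) plus three from the middle third of the ranked list.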
To assess factors influencing competition for FSS orders, we reviewed contract file documentation for the 60 selected orders and, in some cases, interviewed contracting officials. We reviewed documentation including acquisition plans, market research, award decision and price negotiation memorandums, and, where relevant, limited source justification documentation. We also reviewed relevant sections of the Federal Acquisition Regulation, the Department of Defense Federal Acquisition Regulation Supplement, the Department of Health and Human Services Acquisition Regulation, the General Services Acquisition Regulation, FSS program guidance, and Standards for Internal Control in the Federal Government to establish criteria for whether agencies took required steps to achieve competition. We also interviewed contracting officials and vendors to obtain perspectives on the degree of competition for the orders and whether officials had sufficient familiarity with FSS program requirements and procedures to carry out their duties. To determine the extent to which agencies examined prices to be paid for FSS orders, we reviewed contract file documentation for the 60 selected FSS orders, specifically: best value determinations, independent government cost estimates, and award decision memorandums to assess how best value determinations were made. In five cases, where the items were readily available on the open market, we compared the FSS prices paid to the cost of the items on the open market. For all 60 orders, we obtained the GSA schedules that were in effect at the time of award. When documentation allowed us to do so, we calculated the schedule price and compared the established schedule prices to the prices paid for the goods and services in our sample. 
Additionally, we reviewed documentation on discounts sought and received and, as necessary, interviewed contracting officials and vendors to determine whether the government sought discounts as required by the Federal Acquisition Regulation for orders above the SAT. Finally, we interviewed officials at DOD about its new requirement for contracting officers to independently determine price reasonableness for items purchased on the schedule, and at GSA about efforts to obtain new data on prices paid for items purchased on the schedule. We conducted this performance audit from June 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Tatiana Winger (Assistant Director), Peter Anderson, Jessica Berkholtz, MacKenzie Cooper, Alexandra Dew Silva, Julia Kennon, Jared Sippel, Roxanna Sun, Alyssa Weir, and Carmen Yeung made key contributions to this report.
|
The FSS program provides agencies a simplified method of purchasing commercial products and services at prices associated with volume buying. In 2011, the FAR was amended to enhance competition on FSS orders. Competition helps agencies get lower prices on products and services and get the best value for taxpayers. GAO was asked to examine competition and pricing for FSS orders. This report addresses (1) how and to what extent the government is using the FSS program, (2) factors influencing the degree of competition for FSS orders, and (3) the extent to which agencies examine prices to be paid for FSS orders. GAO analyzed data from the Federal Procurement Data System-Next Generation on obligations through the FSS program for fiscal years 2010-2014 and reviewed a nongeneralizable sample of 60 FSS orders awarded in fiscal year 2013 by DOD, HHS, and GSA, the agencies with the highest use of the FSS program. GAO also interviewed officials from these agencies and FSS vendors. According to the General Services Administration (GSA), total sales through the Federal Supply Schedules (FSS) program in fiscal year 2014 were $33.1 billion. This includes purchases by federal, state, and local agencies, including federal intelligence agencies, which do not report their FSS spending publicly. GAO's analysis of publicly reported federal procurement data shows that federal use of the FSS program declined from $31.8 billion in 2010 to $25.7 billion in 2014—a 19 percent inflation-adjusted decrease. This is consistent with the decline in overall federal contracting obligations. The FSS portion of total federal contracting obligations remained steady—between 5 and 6 percent. Most FSS obligations were competed in fiscal year 2014, but only 40 percent of obligations were on orders for which the government received three or more quotes—a number frequently mentioned in the Federal Acquisition Regulation (FAR). These results are influenced by various factors. 
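The 19 percent decrease cited above follows from simple arithmetic on the inflation-adjusted totals reported in the text; the snippet below just reproduces that calculation.

```python
# Inflation-adjusted FSS obligations from the report, in billions of
# fiscal year 2014 dollars.
fy2010_obligations = 31.8
fy2014_obligations = 25.7

percent_decrease = (fy2010_obligations - fy2014_obligations) / fy2010_obligations * 100
# The result is about 19.2, which rounds to the 19 percent cited in the text.
```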
One factor identified in the orders from the agencies GAO reviewed—the Departments of Defense (DOD) and Health and Human Services (HHS) and GSA—involves situations where few vendors can fulfill agencies' specific needs. HHS had a significantly higher percentage of FSS obligations in fiscal year 2014 on orders that were competed but the agency received only one or two quotes—51 percent—compared to DOD and GSA, which received one or two quotes for 35 and 32 percent of their FSS obligations, respectively. HHS's practice of targeting solicitations to fewer vendors may be contributing to this higher rate. Agencies are paying insufficient attention to prices when using FSS. Ordering agencies did not consistently seek discounts from schedule prices, even when required by the FAR. In addition, GAO found cases in which officials did not assess prices for certain items, as required, or had insufficient information to assess prices. Contracting officials were not always aware of the requirement to seek discounts and told GAO that the need to assess prices was not emphasized in training and guidance. When contracting officials are not aware of these regulations, agencies may be missing opportunities for cost savings. GAO recommends that DOD, HHS and GSA issue guidance and assess training to focus attention on rules related to pricing. DOD, HHS and GSA concurred. GAO also recommends HHS assess reasons contributing to its higher rate of orders with only one or two quotes. HHS concurred.
|
Elder justice can be defined as efforts to prevent, identify, and respond to elder abuse. The following are examples of the types of objectives federal elder justice programs may include: Preventing and identifying elder abuse, such as conducting outreach and public education and investigating allegations of elder abuse. Responding to elder abuse, such as providing counseling, prosecuting elder abuse cases, advocating on behalf of nursing home residents, and offering legal assistance. Providing training and technical assistance related to elder abuse for individuals and agencies. Conducting research related to elder abuse issues, such as the development of information and data systems and identifying the incidence of elder abuse. Federal elder justice programs are administered and funded through a complex intergovernmental structure. The Older Americans Act of 1965 (OAA) established the Administration on Aging (AoA) within the Department of Health and Human Services (HHS) as the chief federal advocate for older Americans and assigned responsibility for elder abuse prevention to the AoA (42 U.S.C. § 3012(a)(1)). In April 2012, HHS established the Administration for Community Living (ACL), which brought together the AoA, the Office on Disability, and the Administration on Developmental Disabilities to better align the federal programs that address the community living service and support needs of both the aging and disability populations, among other things. Elder justice programs funded by HHS are implemented through the aging services network. Authorized by the OAA, the aging services network was developed to help people age 60 and over maintain maximum independence in their homes and communities and to promote a continuum of care for vulnerable older adults. 
The aging services network is now made up of 56 state aging agencies, 629 area agencies on aging (AAAs), and almost 20,000 service provider organizations, many of which rely on volunteers, that deliver services to older adults. Further, the OAA authorizes grants administered by the AoA, within ACL, to fund initiatives for those 60 years of age and older throughout the aging services network, including social services such as home-delivered meals, legal assistance, employment programs, research and community development projects, and training for professionals in the field of aging. Elder justice programs supported by the Department of Justice (Justice) also are delivered through the aging services network as well as through other state agencies, local social service and government agencies, and tribal government agencies. The OAA uses the term "aging network" (42 U.S.C. § 3002(5)), but we found the more descriptive term "aging services network" in widespread use. Area agencies on aging are sub-state organizations that can encompass one or more local governmental jurisdictions, such as cities and counties. Some are located in regional planning and development agencies; the rest are located in colleges, community action agencies, and other organizations. See appendix II, table 9. Financial abuse is explicitly included in the mission of the CFPB's recently established Office of Financial Protection for Older Americans (12 U.S.C. § 5493(g)(3)). Figure 1 shows the federal, state, regional, and local service agencies that support and deliver elder justice efforts. Multiservice organizations are organizations that deliver more than one type of public benefit program, such as congregate meals, Supplemental Nutrition Assistance Program cards, and legal assistance. State attorneys general may also play a consumer protection role, as does Justice. Social service agencies include domestic violence and sexual assault victim services providers. HHS and Justice fund elder justice programs through formula grants and discretionary grants to agencies as well as tribal and non-profit organizations in each state. For programs that deliver funds through grants, the state agencies submit applications or agree to meet certain requirements with information provided by regional or local agencies, which may provide the services themselves or contract with independent providers to deliver the services. Some HHS and Justice programs are funded through discretionary grants, which may be awarded directly to any eligible applicant that meets eligibility criteria. Eligible applicants for HHS and Justice elder justice discretionary grant programs may include area agencies on aging, multi-service organizations, district attorneys' offices, legal assistance groups, and postsecondary institutions. Figure 2 illustrates the flow of funding for elder justice programs from HHS and Justice to several types of grant recipients. In fiscal year 2011, HHS and Justice administered 12 programs that directed federal funds toward elder justice programs. 
Seven of these programs directed all funds toward elder justice in fiscal year 2011; the other five directed some of their funds toward elder justice but supported activities for other purposes as well. Table 1 lists the 12 programs that directed funds toward elder justice. The fiscal year 2011 obligations reported in table 1 include federal funds only. Because more than one agency is involved in this same broad area of national interest, and the agencies have just begun to coordinate their activities, these programs are fragmented. However, the administration has begun to take steps to enhance coordination which, as we previously reported, can help address some of the problems that arise from a fragmented array of programs supporting the same national interest (GAO, Results-Oriented Government: Practices that Can Help Enhance and Sustain Collaboration among Federal Agencies, GAO-06-15 (Washington, D.C.: Oct. 21, 2005), and GAO-12-342SP). In addition, we identified 34 other federal programs that supported elder justice indirectly—by providing federal funds for elder justice as an allowable activity (see fig. 3). See appendix II for more information on the 34 programs. Because elder justice was an allowable activity in these 34 programs, grant recipients determined whether elder justice activities would be conducted in a given fiscal year. As a result, federal program managers could not tell us which elder justice activities to compare for that year, to determine if overlap or duplication occurred, for those 34 grant programs. The 12 programs supported elder justice activities through a variety of federal funding mechanisms. 
Ten of the programs supported the federal elder justice effort through grants to a variety of grantees, such as states, localities, and universities, while the remaining two programs—HHS's Interpersonal Violence within Families and Among Acquaintances Prevention program and Justice's Elder Justice and Nursing Home Initiative—did not award grants but administered contracts and interagency agreements. Because few of these programs provided formula grants to all states and most dispersed discretionary grants to a limited number of recipients of several types, there is minimal overlap in this area. Specifically, 3 of the 10 grant programs provided a guaranteed base of funding through formula grants to all states; however, each of these programs awarded grants to state agencies for different purposes, and the state agency recipients of the HHS grants differed from the recipients of the Justice grant. One HHS program awarded formula grants to state aging agencies to fund elder justice prevention and awareness activities, and the other program funded assistance to residents of long-term care facilities. The Justice program awarded formula grants to state criminal justice agencies to fund elder justice activities as well as other law enforcement and prosecution strategies. Moreover, 7 of the 10 elder justice grant programs required recipients to compete for funding by awarding discretionary grants to a limited number of grant recipients. Three of these discretionary grants provided funding to direct service providers, such as local organizations serving domestic and sexual assault victims, local governments, or courts. The remaining four discretionary grants provided funding to a nonprofit organization and a limited number of postsecondary institutions to operate resource centers for state and local governments or to conduct research related to elder mistreatment. 
Federal elder justice programs also provided support for a variety of types of victims and potential victims of elder abuse. Nine of the 12 programs provided services to victims and potential victims of elder abuse: 5 provided assistance to all types of victims and 4 focused on key subgroups of victims, further reducing the potential for overlap. For example, 1 of the 4 programs served older female victims, another targeted victims in long-term care facilities, and still another served victims in tribal communities. Programs that did not serve victims or potential victims directly instead engaged in research on elder abuse in general. Table 2 displays the types of victims targeted by the federal elder justice programs we identified. The three formula grant programs were HHS's Prevention of Elder Abuse, Neglect, and Exploitation program (42 U.S.C. § 3058), HHS's Long-Term Care (LTC) Ombudsman Program (42 U.S.C. § 3058g), and Justice's Services, Training, Officers, and Prosecutors (STOP) Violence Against Women Formula Grant Program (42 U.S.C. § 3796gg-1). Moreover, a variety of service providers—organizations that deliver services at the state and local level—were supported by the 12 elder justice programs. In 2011, 8 programs targeted specific subgroups of service providers while 4 programs did not. Even some of the 4 programs that provided support to all types of service providers specialized to some degree. For example, one program supported outreach focused on tribal elders while another focused on elder mistreatment research (see table 3). In addition to serving a range of elder abuse victims and service providers, the 12 programs we identified varied with respect to the activities they supported, with minimal overlap in some activities (see table 4). For example, 4 programs provided victim assistance services. 
Two programs provided support for conducting investigations of complaints concerning different members of the older adult community, including residents of long-term care facilities and female victims in the community. Three broad categories of activity—education, outreach, and information dissemination; training and technical assistance to service providers; and promoting state and local coordination—were supported by most of the 12 programs we identified. While some of these activities—education and outreach, training for service providers, and state and local coordination—appear to overlap, these activities were not necessarily providing similar services to similar populations. For example, one program supported public education and outreach to potential victims to promote financial literacy among older adults and prevent financial exploitation. Another program trained law enforcement officers to recognize and investigate instances of abuse and trained staff in victim services organizations and governmental agencies to understand the role each plays in addressing elder abuse in the community. A third program promoted local coordination among human services organizations, law enforcement, and community development programs just within tribal communities. Further, some of the grant programs identify several allowable activities grantees may conduct using program funds, only a few of which may be elder justice-related, but do not require grantees to conduct all of them. In one case, the sponsoring agency does not track which elder justice projects are funded. For example, in describing the department's grant administration practices for the STOP Violence program, a Justice official explained that state agencies maintain records of the elder justice projects they fund but Justice does not. Thus, Justice does not know how many grantees conduct elder justice activities. 
Considering collectively the variation in the types of funding mechanisms and grant recipients, the elder abuse victims and service providers targeted by the grants, and the types of activities conducted, we found that overlap across the 12 programs was minimal. In addition, the potentially overlapping activities have been cited as being in need of greater federal emphasis, as we will discuss later in this report. For example, public awareness and training for professionals working with elder abuse victims were two of the areas identified by some of the state and local officials we interviewed as those in which increased federal support would be beneficial. Also, as previously discussed, we have noted that coordination among state and local organizations has helped mitigate the effects of increasingly limited resources. Moreover, the key differences noted above, in conjunction with low levels of funding, reduce the risk that two or more area agencies on aging or local service providers would be providing the same services to the same beneficiaries—that is, that they are providing duplicative services. With respect to low funding levels, by the time federal elder justice funds are obligated to grantees nationwide, the amount of funds available to any individual service provider is likely to be low. State officials in all three states we visited said that HHS is the primary federal funding source for their elder justice activities. Under two HHS programs, the Prevention of Elder Abuse, Neglect, and Exploitation program and the Long-Term Care (LTC) Ombudsman Program, the federal government distributes funds to state aging agencies, which then allocate funds to area agencies on aging or local service providers. For example, in fiscal year 2011, of the $5,033,000 total obligations for the Prevention of Elder Abuse, Neglect, and Exploitation program, which directs all funds toward elder justice, HHS provided $197,380 to Illinois' state aging agency.
One of the state's 13 area agencies on aging received an allocation of $14,488, which it then distributed to numerous other local service providers working directly with older adults, such as county human service agencies. In Virginia, HHS provided $118,040 to the state aging agency from the Prevention of Elder Abuse, Neglect, and Exploitation program fiscal year 2011 obligations. According to officials at one of Virginia's 25 area agencies on aging, the allocation to that agency was $3,027 for that fiscal year. We have previously identified practices that can enhance coordination efforts, such as defining and articulating a common federal outcome or purpose agencies are seeking to achieve, consistent with their respective goals and missions. (See GAO, Results-Oriented Government: Practices that Can Help Enhance and Sustain Collaboration among Federal Agencies, GAO-06-15 (Washington, D.C.: Oct. 21, 2005).) Developing a common federal outcome establishes a rationale for agencies to collaborate that helps overcome significant differences in agencies' missions, cultures, and established ways of doing business that may lead them to work at cross purposes. HHS and Justice are developing efforts to coordinate their elder justice activities. Most recently, the Elder Justice Coordinating Council (Council) was established under the Elder Justice Act of 2009 (EJA) to address cross-agency coordination of activities relating to elder abuse, neglect, and exploitation. According to the EJA, the Council should include members representing HHS, Justice, and other federal entities with responsibilities or programs related to elder abuse. The Council held an inaugural meeting in October 2012, where it identified four issue areas for action—financial exploitation, public policy and awareness, enhancing response, and advancing research—and collected white papers from issue area experts. The white papers included recommendations for improving and advancing the field of elder justice.
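The funding dilution described above, in which a national obligation thins as it passes from HHS to a state aging agency and then to an individual area agency on aging, can be illustrated with the figures reported for the Prevention of Elder Abuse, Neglect, and Exploitation program. The short sketch below is purely illustrative arithmetic based on those reported fiscal year 2011 amounts; it is not part of GAO's methodology.

```python
# Illustrative arithmetic (not GAO analysis): share of the nationwide
# fiscal year 2011 obligation that reached each level of government,
# using the dollar figures reported in this section.

total_obligations = 5_033_000  # nationwide FY2011 obligations

allocations = {
    # state: (state aging agency allocation, one sampled area agency's share)
    "Illinois": (197_380, 14_488),
    "Virginia": (118_040, 3_027),
}

for state, (state_amt, aaa_amt) in allocations.items():
    state_pct = 100 * state_amt / total_obligations
    aaa_pct = 100 * aaa_amt / total_obligations
    print(f"{state}: state received {state_pct:.1f}% of national funds; "
          f"one area agency received {aaa_pct:.2f}%")
# Illinois: state received 3.9% of national funds; one area agency received 0.29%
# Virginia: state received 2.3% of national funds; one area agency received 0.06%
```

At these levels, a single area agency's share of the national obligation is a small fraction of a percent, consistent with the low risk of duplicative services discussed above.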
The Council is required to make recommendations to the Secretary of HHS for the coordination of elder justice activities by relevant federal agencies and to report to Congress every 2 years on accomplishments, challenges, and recommendations for legislative action. The Council met again in May 2013 to consider next steps. The Elder Justice Interagency Working Group (Working Group), an informal group designed to bring together federal officials responsible for carrying out elder justice activities, presented recommendations distilled from the white papers. The Working Group's recommendations included such actions as launching an elder justice website; developing elder justice forensic centers, a national APS data system, and a national public awareness campaign; and devising strategies for combating financial exploitation in collaboration with industry. The Council was established under 42 U.S.C. § 1397k. The EJA also established an Advisory Board on Elder Abuse, Neglect, and Exploitation (Advisory Board), which has a mission, distinct from the Council's, of developing innovative approaches to improving the quality of long-term care, including preventing abuse, neglect, and exploitation. The Advisory Board is tasked with creating short- and long-term strategic plans for the development of the elder justice field and making recommendations to the Council. Unlike the Council, which is composed of federal agency officials who administer elder justice programs, the Advisory Board will be made up of appointed members of the general public with experience and expertise in elder abuse, neglect, and exploitation prevention, detection, treatment, intervention, and prosecution. Progress so far has included a solicitation for nominations for members, issued by HHS on July 14, 2010. In addition, HHS and Justice officials from 10 of the 12 elder justice programs we identified said they were involved in informal or ad hoc coordination efforts with other federal programs.
For example, officials representing 8 programs reported participating in joint program planning and implementation, such as assisting in the review of grant solicitations, collecting and sharing materials to support elder mistreatment prosecutions, and supporting training initiatives. Other coordination efforts program officials reported include participating in joint information sessions, such as elder abuse awareness events; jointly funding a training initiative; and sharing data and information about trends, prevention efforts, and responses to abuse with other agencies or programs. While federal agencies have taken steps to better coordinate their efforts, less progress has been made in articulating and tracking common goals for federal elder justice activities. Program officials reported that 4 of the 12 federal programs we identified tracked elder justice outcomes in 2011, and one had conducted a program evaluation to determine effectiveness. The Advisory Board was established under 42 U.S.C. § 1397k-1; HHS's solicitation for nominations was published as Establishment of the Advisory Board on Elder Abuse, Neglect, and Exploitation, 75 Fed. Reg. 40,838 (July 14, 2010). The requirements of the Government Performance and Results Act of 1993 (GPRA)—as recently enhanced by the GPRA Modernization Act of 2010—can also serve as leading practices for planning at lower organizational levels, such as individual programs or initiatives. GPRA provides federal agencies with a way to focus on results and improve agency performance by, among other things, developing strategic plans. Examples of strategic plan components include a mission statement; general goals and objectives, including outcome-oriented goals; and a description of how the goals and objectives are to be achieved.
While HHS and Justice, the two agencies that oversee the 12 federal elder justice programs we identified, each have defined broad strategic goals and objectives at the department level that may affect elder justice (see table 5), and ACL has identified a strategic goal related to its elder justice responsibilities, the departments have not formally defined common goals for addressing elder abuse, neglect, and exploitation. (GPRA is Pub. L. No. 103-62, § 4(b), 107 Stat. 285, 287 (codified as amended at 31 U.S.C. § 1115(a)(1)); the GPRA Modernization Act is Pub. L. No. 111-352, § 3, 124 Stat. 3866, 3867-68 (2011) (codified at 31 U.S.C. § 1115(a)(1)). See also GAO, Veteran-Owned Small Businesses: Planning and Data System for VA's Verification Program Need Improvement, GAO-13-95 (Washington, D.C.: Jan. 14, 2013).) The goals and objectives shown in table 5 include the following.
HHS strategic goal 3: Advance the health, safety, and well-being of the American people. Objective 3.C: Improve the accessibility and quality of supportive services for people with disabilities and older adults. Administration for Community Living strategic goal 4: Ensure the rights of older people and prevent their abuse, neglect, and exploitation.
Justice strategic goal 2: Prevent crime, protect the rights of the American people, and enforce federal law. Objective 2.2: Prevent and intervene in crimes against vulnerable populations; uphold the rights of, and improve services to, America's crime victims. Objective 3.1: Promote and strengthen relationships and strategies for the administration of justice with state, local, tribal, and international law enforcement.
Justice's fiscal year 2012 performance and accountability report also cites priority goals, intended to represent critical elements of the agency's strategic plan, including a priority goal to protect those most in need of help, such as vulnerable populations including the elderly. Among the 12 federal programs, we determined that 4 tracked outcomes for victims or potential victims of elder abuse in 2011.
HHS's LTC Ombudsman Program reported the number of complaints made by residents of long-term care facilities, including those related to elder abuse, neglect, or exploitation, that were resolved to the satisfaction of the resident. HHS's NIA Developmental Research on Elder Mistreatment program tracked the number of scientific publications resulting from NIA funding as a measure of the program's performance in meeting its intended purpose of supporting research on elder mistreatment. Justice's Tribal Elder Outreach Program monitored outcomes related to service delivery for individual grant recipients, though these data were not aggregated for the full program. Outcomes individual grantees reported included the number of tribal victims served as a result of outreach efforts, according to Justice officials. Justice's Abuse in Later Life Program monitored outcomes at the grantee level. Outcomes the program tracked included the number of professionals trained to respond to domestic violence and sexual assault as well as the number of individuals receiving services. For three of the eight programs that did not track elder justice outcomes in 2011, officials said they plan to do so in the future. For example, officials from two of the programs that directed all funds toward elder justice said they plan to develop and monitor outcomes. HHS's National APS Resource Center, for instance, has identified program outputs that will inform the development of outcome measures; these outputs include a literature review of evidence-based APS programs and a related webinar aimed at improving participants' knowledge base. Similarly, the National Center on Elder Abuse, also funded by HHS, has identified outcomes such as elder abuse awareness and the extent to which research findings are integrated into training and practice. Formal evaluation of federal elder justice programs is likewise limited.
As we have previously reported, researchers designing formal evaluations for other programs, including one welfare program, have found that they had to fit the evaluation design to available time and resources, even when an evaluation is planned at the state rather than the federal level. (See GAO, Designing Evaluations: 2012 Revision, GAO-12-208G (Washington, D.C.: Jan. 31, 2012); and Welfare Reform: Data Available to Assess TANF's Progress, GAO-01-298 (Washington, D.C.: Feb. 28, 2001).) For example, in some cases, conducting an evaluation for an entire state may be so expensive that data collection has to be limited to a portion of the state. Thus, individual evaluation efforts may not always be feasible within the resources of many elder justice programs. Nevertheless, one program, HHS's NIA research program, held a conference to evaluate the progress of research supported by grants awarded to research elder justice issues in 2010. (See 42 U.S.C. § 10603(c)(1); 42 U.S.C. § 3012(d).) Three of the 11 remaining programs also indicated they plan to conduct evaluations in the future once more work has been completed or if funding becomes available. Officials from the other 8 programs reported they have not recently conducted, nor do they plan to conduct, a formal evaluation that includes an assessment of elder justice activities. Program officials cited several factors that limit their ability to formally evaluate their programs, including variability of program activities and scope from year to year, insufficient data due to a short period of implementation for newer programs, and limited resources, including funding devoted to and expertise in program evaluation. Given the costs associated with evaluating individual programs, developing common objectives and outcomes for HHS and Justice elder justice programs could be a first step in assessing the federal effort.
As noted, coordination of these programs is just under way, and HHS and Justice have neither developed common objectives for the elder justice effort nor defined a set of common outcomes, which are necessary precursors to future performance measures that could be used to evaluate the federal effort as a whole. Without progress on these fronts, the federal government cannot assess the effectiveness of its effort or the efficiency of the resources devoted to elder justice activities. Officials from the state aging agencies, area agencies on aging, and service providers we interviewed identified the increased demand for services in a constrained fiscal environment as a major challenge in meeting the needs of the growing older adult population. State aging agency, area agency on aging, and service provider officials also cited the need for greater awareness of elder abuse, by both the public and individuals who interact with older adults, to help prevent elder abuse or recognize its symptoms. Further, officials in all of the state aging agencies we contacted told us that elder abuse cases are increasing, especially financial exploitation cases. For example, one state official told us that, as the state gave greater emphasis to such cases, it found that cases involving financial exploitation, in particular, take longer to investigate, depend on financial records that are difficult to obtain, and are often harder to prove than cases of physical or emotional abuse. Elder justice activities are a small but important component of the broad range of services area agencies on aging (AAA) provided in the states we visited. Among these elder justice services were basic legal services for older adults, such as assistance with setting up wills, powers of attorney, or advance directives, which can help deter financial exploitation. The AAAs we visited also coordinated a number of other types of elder justice services.
For example, an Arizona AAA administers a program, funded through a Justice Victims of Crime Act grant, to provide emergency housing for abused older adults. The same AAA uses HHS funding under Title VII of the OAA for a Boys and Girls Club program that teaches young people to respect older adults. In Illinois, a Justice STOP grant helped create the Elder Law and Miscellaneous Remedies Division of the Circuit Court of Cook County, which handles elder abuse cases and other instances in which the victim is an older adult. According to officials, all nine of the AAAs and nine of the local providers we interviewed also helped raise awareness of elder abuse through education and training programs, both for the public in general and for members of the community who regularly interact directly with older adults. For example, in Virginia, a local service provider received a Justice grant to train criminal justice professionals, service providers, victims, witnesses, and anyone else likely to come in contact with elders. Such individuals may not be aware of the different ways abuse can present itself in the older adult population, and other AAA officials said that education and training could help these direct service providers identify abuse and respond appropriately. The Virginia provider saw its program as helpful in determining how to address an abused older adult's problems. In our view, in the absence of this awareness, incidents of elder abuse may go unreported or unaddressed. For the most part, APS agencies in Arizona and Virginia worked with AAAs to identify and respond to incidents of elder abuse. Although the administrative structure of the AAAs and APS agencies can vary from state to state, basic AAA services can help bolster an older adult's independence after an elder abuse incident. Depending on the division of responsibility for elder justice services in the state, AAAs may refer reports of elder abuse to the state's APS program in their area.
For example, in Arizona and Virginia, the AAAs turn any allegations of elder abuse over to the state APS program upon learning of them. In Illinois, however, the state contracts with service providers to conduct the entire investigation and disposition of elder abuse cases. Once an investigation has been conducted and service needs identified, AAAs may be involved in connecting elder abuse victims to services. Despite their efforts to raise awareness of elder abuse in their communities, officials at five AAAs and four local service providers told us that more public awareness and training were needed. Moreover, officials from state aging agencies in two of the three states we visited said that the federal government should place greater emphasis on training law enforcement officers and other officials to identify elder abuse when they come in contact with older adults in need. An AAA official in one of these states said that training resources should also be directed to health care officials, because they are in a position to identify signs of elder abuse. In prior work on elder financial exploitation, experts and federal, state, and local officials focusing on this form of elder abuse told us that older adults need more information about what constitutes financial exploitation in order to know how to avoid it. That prior study found that each of the seven federal agencies reviewed independently produced and disseminated public information on elder financial exploitation tailored to its own mission, although the agencies worked together at times to increase public awareness. For example, each year the FTC and the Postal Inspection Service collaborate on community presentations during National Consumer Protection Week.
However, although the Older Americans Act calls for a coordinated federal elder justice system, which includes educating the public, the seven agencies reviewed in that prior study did not undertake these activities as part of a broader coordinated approach. (See GAO, Elder Justice: National Strategy Needed to Effectively Combat Financial Exploitation, GAO-13-110 (Washington, D.C.: Nov. 15, 2012).) In other work on public program effectiveness, we have concluded that agencies can use limited funding more efficiently by coordinating their activities and can strengthen their collaboration by establishing joint strategies. (See GAO-06-15.) Of the nine AAA officials we spoke with for this review, five said that there is a need for a strategic, national public awareness campaign on elder justice, not limited to financial exploitation. Officials at three of these AAAs suggested that the federal government sponsor a national campaign to raise awareness that would include broadly disseminated public education announcements on the prevalence and types of elder abuse and on helpful resources for those in need. Such a campaign would help service providers and caregivers who interact with older adults or observe their behavior recognize the signs of elder abuse and report it. During its meeting in May 2013 to consider next steps, the Elder Justice Coordinating Council moved closer to developing a coordinated federal effort to prevent and respond to elder abuse. One of the actions recommended by the Council's Federal Interagency Working Group was the development of a national public awareness campaign. However, the Council is still considering this recommendation and has not yet made a final decision. Our work for this study indicates that until a broad-based public awareness campaign is established, many incidents of elder abuse may go unreported or unaddressed.
Growth in the percentage of the United States population over 60 years of age and in reports of elder abuse may outstrip the public resources allocated to serve the elderly. In addition, given the range of elder justice activities and individuals served under federal programs, coordination is key to ensuring the efficient use of limited resources. Further, while federal agencies have taken some initial steps toward coordinating their elder justice activities, such as forming the Elder Justice Coordinating Council, their efforts to develop a coordinated response to elder abuse would be further supported by an assessment of the effectiveness of federal elder justice programs. Until common objectives and outcomes for federal elder justice programs are defined, agencies may be working at cross purposes. In addition, with the continuing growth in the older adult population, the absence of sufficient public awareness and education about elder abuse and the resources available to address it may slow the progress of elder justice efforts at all levels of government. 1. To provide the basis for greater consistency across states in assessing elder justice service delivery, we recommend that the Secretary of HHS, as chairman of the Elder Justice Coordinating Council, direct the Council to make it a priority to identify common objectives for the federal elder justice effort and define common outcomes. 2. To help protect older adults from all forms of abuse, we recommend that the Secretary of HHS and the Attorney General collaborate in developing a national campaign to raise awareness of the occurrence of elder abuse and provide information on how to obtain services. We provided a draft of this report to HHS and Justice, the two federal agencies that administer the 12 elder justice programs that we reviewed. HHS provided general comments that are reproduced in appendix III. Both departments provided technical comments, which we incorporated as appropriate. 
HHS concurred with our recommendations and agreed that federal coordination is key to ensuring the efficient use of limited resources. Concerning the first recommendation, HHS identified the formation of the Elder Justice Coordinating Council as an effort to develop common objectives and plans for action to address elder justice issues. Further, HHS said that each of the nine proposals for action that the Council is now considering has specified outcomes and that steps and strategies are being developed to implement the proposals. We agree that the Council's activity is indicative of progress toward a coordinated federal elder justice effort and the development of common objectives. We encourage the Council to think broadly in developing common objectives and outcomes that will encompass the elder justice programs of all federal agencies represented on the Council, both now and in the future. HHS said that our recommendations correctly point out a need for greater public awareness and, with regard to the second recommendation, that one of the proposals for Council action is development of a broad-based public awareness campaign. HHS also asked us to consider that improved public surveillance could help better describe the extent and patterns of abuse among older adults. In our previous report on elder abuse, we noted that, although the CDC considers elder abuse a growing public health problem, there is no ongoing surveillance of its extent similar to the periodic national incidence studies of child abuse and neglect. Without periodically measuring the extent of elder abuse nationwide, it will be difficult to develop an effective national policy for its prevention, as required under the OAA. In that report, we suggested that Congress consider requiring the Secretary of HHS to conduct, in coordination with the Attorney General, a periodic national study of the extent of elder abuse over time.
We are sending copies of this report to the appropriate congressional committees, the Secretaries of the Departments of Health and Human Services and Justice, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This appendix discusses the methodology for surveying federal elder justice program managers to gather information for examining the potential for fragmentation, overlap, and duplication among federal elder justice programs. Using lists of existing federal programs, we identified programs government-wide that funded elder justice activities, based on program selection criteria discussed below. We confirmed the list through contact with federal agency officials. We surveyed the federal officials who manage the federal elder justice programs on our list, and analyzed the programs’ similarities and differences with regard to key elements, including objectives, activities, funding mechanisms and recipients, target populations, outcome measures and fiscal year 2011 obligations. For the purposes of this study, we defined elder justice as efforts to prevent, identify, and respond to elder abuse. To identify federal programs that provided funding for elder justice activities in fiscal year 2011, we developed an initial list of programs that met our definition of elder justice and one of three selection criteria, as shown in figure 4. 
The list was based on the findings of three prior inventory efforts completed by GAO and the Congressional Research Service in 2011, as well as searches of the Catalog of Federal Domestic Assistance and agency websites. To confirm the programs on our list, we contacted the agency officials who managed the programs during our entrance conferences for additional program information. We also asked agency officials to add any programs that we had overlooked in developing the list. We then reassessed the list to determine how well the programs met the elder justice definition and program selection criteria. To determine how federal programs that target elder justice compare with respect to key elements, we developed a web-based survey to collect descriptive information about the programs from agency officials. Because program obligations during fiscal year 2011 were one of the data elements we needed to collect, the survey was designed to collect information from programs that have elder justice as either a primary objective or one of multiple objectives. To develop the survey questions, we reviewed prior GAO reports on elder justice, on serving the aging population, and on identifying duplication and overlap in federal programs administered by multiple agencies. The survey, modeled after GAO surveys regarding potential overlap and duplication, collected information on key program elements pertaining to overlap and duplication, including objectives, activities, target populations, outcome measures, and obligations during fiscal year 2011. The survey was administered between July and September 2012. To maximize response, we sent periodic follow-up emails to all agency officials who had not responded to the survey by our deadline. The practical difficulties of conducting any survey may introduce several types of errors, commonly referred to as non-sampling errors.
For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in the survey design, data collection, and data analysis to minimize such non-sampling errors. In designing the survey, we took steps to clarify the survey questions to help ensure that respondents would interpret them correctly. For example, during its development, we pretested our web-based survey with five selected programs administered by HHS, Justice, and the SEC. We conducted these pretests to ensure that respondents understood the questions, could provide answers to them, and could complete the survey in a reasonable amount of time. Following each pretest, the survey underwent additional, mostly minor, revisions based on feedback from pretest participants. An additional source of non-sampling error can be errors in computer processing of the data and statistical analysis. All computer programs relied on for analysis of the survey data were independently verified for accuracy by a second analyst. The survey response rate was 100 percent. In addition to collecting program information, the survey allowed us to confirm, exclude, and add programs based on consultations with agency officials. As a result of our analysis of the survey data, including follow-up contact with the agency officials who managed the programs, we identified 12 programs that met one of the first two selection criteria and funded elder justice activities in fiscal year 2011. We analyzed whether similarities and differences among the 12 programs with respect to key program elements, such as sponsoring agency, objectives, activities, target populations, and grant recipients, indicated potential overlap or duplication, and whether program officials could identify program outcomes.
To augment the survey data, we collected additional information on program activities and target groups for individual activities. We reviewed program descriptions published on agency websites, in grant solicitations, and in budget justifications, and we reviewed publicly available lists of grant recipients and award amounts. We then grouped activity descriptions by type of activity and by the specific group or subpopulation that directly benefited from the activity. For example, we determined whether education and outreach efforts were directed at the general public or at a specific subpopulation of victims of elder abuse, such as those in tribal communities. We confirmed these classifications with the relevant program officials and added or corrected information as necessary.
Services to End Violence Against and Abuse of Women Later in Life Program
Objectives: Train agency staff and victim assistants to address elder abuse, neglect, and exploitation, including domestic violence, dating violence, sexual assault, and stalking, in their communities. Develop or enhance a coordinated community response to elder abuse. Provide or enhance services for victims who are 50 years of age or older. Conduct cross-training for victim service organizations; governmental agencies; criminal justice professionals; and nonprofit, nongovernmental organizations serving victims of elder abuse, neglect, and exploitation, including sexual assault, domestic violence, dating violence, and stalking, in their communities.
Examples of elder justice activities: Develop multidisciplinary partnerships that include law enforcement, prosecutors, domestic violence victim services programs or nonprofits, and programs or nonprofits that serve older victims. Training for law enforcement officers, prosecutors, judges, and individuals who serve older victims and work in victim services programs.
Cross-training for victim services organizations, governmental agencies, courts, law enforcement agencies, and organizations working with older victims. Outreach and service delivery to older victims. Fiscal year 2011 obligations: $5,033,000.
for the prevention, detection, assessment, and treatment of, intervention in, investigation of, and response to elder abuse, neglect, and exploitation, including financial exploitation. Examples of elder justice activities: Training law enforcement officers, health care providers, and other professionals on how to recognize and respond to elder abuse. Support outreach and education campaigns to increase public awareness of elder abuse and its prevention, including financial exploitation. Support efforts of state and local elder abuse prevention coalitions and multidisciplinary teams of agencies and organizations that work with victims. Promote the development of information and data systems, including elder abuse reporting systems, to quantify the extent of elder abuse in each state and analyze information to identify unmet service needs. Provide technical assistance to programs that provide services for victims of elder abuse and their families. Objectives: Develop comprehensive outreach strategies and foster improved, culturally appropriate crime victim assistance services to address elder abuse. Augment ongoing crime victim assistance service strategies and provide a special focus on elders, including enhanced collaboration and coordination among victim assistance and human services, courts and law enforcement, and community development and youth outreach and mentoring programs. Link the issue of elder abuse in tribal communities with traditional cultural norms of respect and reverence for tribal elders. Examples of elder justice activities: Conduct outreach through awareness posters, service brochures, editorials and newspaper articles, radio and television ads, videos, and fact sheets. 
Support curriculum development, training, community teaching, and awareness efforts. Promote community-based and culturally specific crime victim assistance services and develop and distribute related protocols and toolkits. Fiscal year 2011 obligations: $1,134,000.
Epidemiological study of the relationship of self-neglect and important health outcomes in a biracial population of older adults.
Study of relevant health issues, including elder mistreatment, in older Chinese adults.
Development of a mentoring program for aspiring researchers on aging topics.
Study to determine the extent and outcomes of resident-to-resident elder mistreatment in long-term care facilities.
Studies of the social and neural bases for older adults’ vulnerability to financial exploitation.
Objective: Provide information, materials, and support to enhance state and local efforts to prevent and address elder mistreatment. Examples of elder justice activities: Disseminate information about elder abuse prevention, including promising practices and interventions, and provide resources to professionals and the public. Provide technical assistance, training, and consultation to state agencies and community-based organizations. Advise on program and policy development. Objective: Support and coordinate Justice's activities in combating elder abuse, neglect, and financial exploitation, especially as they impact beneficiaries of Medicare, Medicaid, and other federal health care programs. Examples of elder justice activities: Prosecute failure-of-care, health care fraud, and consumer fraud cases and enforce civil rights. Promote state and local coordination through state working groups. Provide training for U.S. Attorney’s offices and Medicaid Fraud Control Units on investigating and developing failure-of-care cases. Provide training for nurses, prosecutors, judges, and participants of legal aid clinics on fraud and abuse cases. 
Fiscal year 2011 obligations: $200,000.
programs across the country, with a primary focus on the older population, by providing APS systems, agencies, and professionals with relevant information and support. Examples of elder justice activities: Identify evidence-based practices for APS programs and interventions and promote the evaluation of unevaluated practices that have the potential to advance and strengthen the efficiency, effectiveness, and relevance of APS work. Compile and synthesize research that informs APS programming and interventions. Provide specific and targeted technical assistance to state and local APS programs to facilitate the implementation of best practices and research findings. ACL is broadly responsible for federal efforts related to elder abuse and prevention services. In addition to administering individual programs, ACL is responsible for facilitating the development, implementation, and improvement of a coordinated, multidisciplinary elder justice system; providing federal leadership to support state efforts to carry out elder justice programs; establishing an information clearinghouse; and promoting collaborative efforts for the development and implementation of elder justice programs. 42 U.S.C. § 3011(a). Objectives: Support communities in their efforts to develop and strengthen effective law enforcement and prosecution strategies to combat violent crimes against women, including older women. Develop and strengthen victim services in cases involving violent crimes against women, including older women. Examples of elder justice activities: Provide training for law enforcement officers, judges, other court personnel, prosecutors, and domestic violence victim service providers to more effectively identify and respond to violent crimes against women. Develop and implement more effective police, court, and prosecution policies and protocols regarding violent crimes against women. 
Develop data collection and communication systems linking police, prosecutors, and courts to identify and track arrests and violations. Support statewide multidisciplinary efforts to coordinate the response of state law enforcement agencies, prosecutors, courts, victim services agencies, and other state agencies. Provide training for sexual assault forensic medical examiners in collecting evidence. Strengthen programs that address violence against older women and women with disabilities, including investigating and prosecuting instances of violence and targeting outreach, counseling, and other victim assistance services. Objectives: Encourage state, local, and tribal governments and courts to treat domestic violence, dating violence, sexual assault, and stalking, including instances of violence against older individuals, as serious violations of criminal law, and improve safety, access to services, and confidentiality for victims and families.
Interpersonal Violence within Families and Among Acquaintances Prevention
Objective: Prevent interpersonal violence – including domestic violence, sexual assault, spousal and partner abuse, elder abuse, woman battering, and acquaintance rape – within families and among acquaintances. Examples of elder justice activities: Develop uniform definitions and recommend data elements for public health surveillance of elder abuse and neglect. Provide information and education to the public on interpersonal violence to increase awareness of related public health consequences. Provide training to health care providers to identify potential victims of interpersonal violence and refer individuals to entities that provide supportive services. Objectives: Advocate for residents of nursing homes, board and care homes, assisted living facilities, and similar adult care facilities and improve residents’ care and quality of life. Resolve problems of individual residents. 
Examples of elder justice activities: Investigate and resolve complaints made by residents of facilities. Provide training to state and local ombudsmen. Provide consultations to LTC facility managers and staff.
LTC ombudsman programs to enable them to effectively respond to residents’ complaints and represent their interests on an individual and systemic level. Strengthen the LTC Ombudsman program by developing innovative, effective approaches for states to provide services to LTC facility residents. Examples of elder justice activities: Provide technical assistance to state and local ombudsmen. Provide consultation, information, and referral for ombudsmen, residents, and families. Provide training and resources for state and local ombudsman programs. Promote public awareness of the role of ombudsmen. Identify research needs and promote research related to ombudsman programs and services. Promote cooperation between ombudsman programs and advocacy groups. Objectives: Educate consumers and businesses about their rights and responsibilities, including providing consumers with tools needed to make informed decisions and businesses with tools needed to comply with the law. Enforce consumer protection laws in federal court or administrative litigation, especially in cases alleging deceptive practices; coordinate joint law enforcement actions with state and federal partners, including criminal law enforcement; enforce injunctions and administrative orders obtained in consumer protection cases; and develop, review, and enforce consumer protection rules. Examples of activities related to elder justice: Provides free information to consumers of all ages. The FTC identifies older adults as a target population for many of its consumer education efforts, including how to recognize and report identity theft and scams and frauds related to health care and financial exploitation. 
The Bureau of Consumer Protection has investigated frauds affecting seniors and misrepresentations aimed at the “oldest old” and their caretakers, including misrepresentations of services provided when referring seniors to long-term care facilities. Collect consumer complaint data and share information to enable state and local law enforcement to become more effective. The FTC collects and stores information on consumer complaints, including financial exploitation incidents such as investment fraud and identity theft. Complaints may include those reported by older individuals, though the FTC does not require complaints to include the age of the victim. Provide assistance to states, U.S. territories, and tribal governments to provide services and activities to reduce poverty. Grants support activities that address nutrition, health, and emergency services, among others, for low-income individuals, including the elderly. Reduce or eliminate dependency; achieve or maintain self-sufficiency for families; help prevent neglect, abuse, or exploitation of children and adults; prevent or reduce inappropriate institutional care; and secure admission or referral for institutional care when other forms of care are not appropriate. Grants may be used to support activities aimed at preventing neglect, abuse, or exploitation of children or adults, among others. Objectives: Provide formula grants to states to support services that enable seniors to remain in their homes for as long as possible. Examples of activities related to elder justice: Services may include case management and legal services, among others. Each state may allocate funds to area agencies on aging, which have the flexibility to use the funds to provide the services that best meet the needs of seniors in their service areas. In providing community services, providers could observe and report on potential elder abuse activity. 
Promote and enhance state leadership in securing and maintaining the legal rights of older individuals and state capacity to coordinate the provision of legal assistance; provide technical assistance and training; promote financial management services for older individuals; assist older individuals in understanding their rights; and improve the quality and quantity of legal services provided to older adults. The legal assistance developer can play a key role in designing and implementing the elder rights provisions of state plans to ensure older persons have access to their benefits and rights. Provide assistance for older individuals in accessing long-term care options and other community-based services, and protect older individuals against direct challenges to their independence, choice, and financial security. Services are specifically targeted to older individuals with economic or social needs. Services may include assistance to ensure elder rights protections regarding transfers from LTC facilities to home and community-based care and assistance for individuals who have experienced elder abuse, including consumer fraud and financial exploitation. Provide funding to strengthen states’ legal services networks, including the development and implementation of integrated, low-cost service mechanisms. Grants support legal education and assistance services and may include projects that address elder financial exploitation. The program also promotes linkages with service providers in area agencies on aging, Aging and Disability Resource Centers, state long-term care ombudsmen, and APS. Provide grants to states to assist family and informal caregivers to care for loved ones at home for as long as possible. Services include dissemination of information about services and other assistance, including counseling and training. 
Objectives: Support the aging and legal networks to enhance the quality, cost effectiveness, and accessibility of legal assistance and elder rights programs; and support demonstration projects to expand or improve the delivery of legal assistance and elder rights protections to older individuals with social or economic needs. Examples of activities related to elder justice: Supported activities include case consultation for legal professionals regarding legal problems impacting older individuals, training for aging and legal services professionals on a range of legal and elder rights issues, technical assistance to professionals who provide legal assistance to older individuals, and information dissemination regarding legal and elder rights issues.
Nutrition Services (Congregate Nutrition Services, Home-Delivered Nutrition Services, and Nutrition Services Incentive Program)
Objectives: Reduce hunger and food insecurity, promote socialization of older individuals, and promote the health and well-being of older individuals and delay the onset of adverse health conditions through access to nutrition and other disease prevention and health promotion services. Grants to states support nutrition services, including meals and nutrition education. In providing or delivering meals, service providers could observe and report on potential elder abuse activity. Promote the financial security of older individuals and enhance their choice and independence by empowering them to make decisions with respect to pensions and savings plans. Projects include assisting seniors with the administrative appeals process, locating pension plans “lost” as a result of mergers and acquisitions, and other assistance in negotiating with former employers for due compensation. 
Provide culturally competent health care, community-based long-term care, and related services; serve as focal points for developing and sharing technical information and expertise for organizations, communities, educational institutions, and professionals working with older Native Americans. Center activities have included assisting in developing community-based solutions to improve the quality of life and delivery of support services to the Native elderly population and providing a forum for discussions about elder abuse to help communities develop plans to reduce and control occurrences. Empower seniors to protect themselves from the economic and health-related consequences of Medicare and Medicaid fraud, error, and abuse through increased awareness and understanding of health care programs. Activities include training Medicare beneficiaries, retired professionals, and other senior citizens on how to recognize and report instances or patterns of health care fraud and abuse and complaint resolution for beneficiaries. Identify efficient, effective, and economical procedures for LTC facilities and providers to conduct background checks on a statewide basis on all prospective direct patient access employees. States conduct background checks to help meet regulations prohibiting LTC facilities and providers from employing individuals found guilty of abuse, neglect, or misappropriation of patient funds. Objectives: Receive complaints, grievances, and requests for information from, and provide assistance to, Medicare beneficiaries. Examples of activities related to elder justice: The ombudsman may receive inquiries or complaints that mention elder abuse and refer complainants to the appropriate agency or organization to address their concerns. Determine whether service providers and suppliers meet applicable requirements for participation in Medicare and/or Medicaid programs and are in compliance with Medicaid and Medicare conditions of participation and coverage. 
Federal and state surveyors conduct health and safety inspections in a variety of settings, including nursing homes, home health agencies, and hospitals, to determine compliance with CMS regulations, including those that address abuse and neglect of beneficiaries. Train and educate individuals in providing geriatric care for the elderly. Activities include training, development of curricula related to the treatment of health problems of the elderly, continuing education, and establishment of traineeships for advanced education students. Support the career development of physicians, nurses, social workers, psychologists, dentists, pharmacists, and health professionals as academic geriatric specialists by requiring them to provide training in clinical geriatrics, including the training of interdisciplinary teams of health professionals. Faculty teach and develop skills in interdisciplinary geriatric education. Establish or operate Geriatric Education Centers to provide interdisciplinary training of health professional faculty, students, and practitioners in the diagnosis, treatment, and prevention of disease, disability, and other health problems of the elderly. Project activities include training and continuing education of health professionals in geriatrics and developing curricula related to the treatment of health problems of the elderly. Provide support, including fellowships, for geriatric training projects to train physicians, dentists, and behavioral or mental health professionals who plan to teach geriatric medicine, geriatric dentistry, or geriatric behavioral or mental health. Physicians participate in service rotations that include day and home care programs, extended care facilities, and community care programs. Provide eligible homeowners with education and information about the unique features of a reverse mortgage and other alternatives to a reverse mortgage that homeowners may consider given their financial situation. 
HUD offers counseling to elderly clients who may be at risk of delinquency in paying mortgages and may be at risk of becoming victims of financial exploitation. Objectives: Extend the length and improve the quality of independent living and prevent premature and inappropriate institutionalization of elderly and disabled non-elderly residents of federally assisted multifamily housing. Examples of activities related to elder justice: Service coordinators assess resident needs, identify and link residents to appropriate services in the community, and monitor the delivery of services. Service coordinators may also educate residents about other services and help them build informal support networks. Link public housing residents with supportive services, resident empowerment activities, and assistance in becoming economically self-sufficient. For elderly or disabled residents specifically, the objective is to help improve living conditions and enable residents to age in place. Service coordinators assess the needs of residents and coordinate available resources, including supportive services for elderly residents. Address the most serious tribal law enforcement needs; increase the capacity of tribal law enforcement agencies to prevent, solve, and control crime for safer communities; implement or enhance community policing strategies; and engage in strategic planning for law enforcement. Grants support law enforcement training, including community policing and computer and crime reporting training. Encourage and support research, development, and evaluation to further understanding of the causes and correlates of crime and violence, methods of crime prevention and control, and criminal justice system responses to crime and violence; and contribute to the improvement of the criminal justice system and its responses to crime, violence, and delinquency. 
Projects include research related to elder mistreatment as it relates to the objectives of the program, such as identifying the causes and means of preventing crime. Enhance the systemic response to crimes of domestic violence, dating violence, sexual assault, and stalking committed against American Indian and Alaska Native women and girls. Victim services provided under the program, including emergency shelter services, crisis intervention, and information and referrals, may be provided to older individuals who are victims. Provide investor protection through the prosecution of violations of federal securities laws. Prosecuted cases may include those involving older adults as victims; in some instances, the elderly were specifically targeted. Provide information to investors about protecting their finances, funds, and investments, and ways regulators can support their efforts. While most outreach activities are targeted at all investors, several activities throughout the country are specifically focused on senior investors, such as Senior Summits, Senior Days, and Senior Expos. Objective: Engage in outreach activities on an ad hoc basis. Examples of activities related to elder justice: Past activities have included SEC’s Senior Summits, which help older investors make difficult decisions about their finances and learn new ways to protect their assets as they age. Foster compliance with securities laws, detect violations, and correct compliance problems by conducting examinations of registered entities, broker-dealers, and investment advisers and companies, among others. Examinations may identify unsuitable transactions for senior investors. Provide investors with information needed to evaluate current and potential investments, make informed decisions, and avoid fraudulent schemes. In addition, provide agency staff with critical insight about emerging trends and factors shaping investor decision-making. 
OIEA may target certain outreach efforts to specific groups, such as seniors, members of the military, and teachers. Outreach includes providing resources to help individuals become better-educated investors, including understanding how to avoid fraud. For example, OIEA continues to support the Outsmarting Investment Fraud Campaign, designed to educate seniors about identifying potential investment fraud. Communicate money laundering or terrorist financing risks to the financial industry and facilitate the reporting of valuable information to law enforcement. FinCEN issued an advisory to financial institutions in 2011 that provided potential indicators of elder financial exploitation. In addition to the contact listed above, individuals making key contributions to this report, in all aspects of the work, were Bill Keller, Sara Edmondson, Brenna Guarneros, and Rosemary Torres Lerma. Also contributing to the report were James Bennett, Holly Dye, Melissa Jaynes, Jill Lacey, Grant Mallie, Amanda Miller, Andrew Nelson, Heddi Nieuwsma, and Craig Winslow. Elder Justice: National Strategy Needed to Effectively Combat Elder Financial Exploitation. GAO-13-110. Washington, D.C.: November 15, 2012. Elder Justice: Stronger Federal Leadership Could Enhance National Response to Elder Abuse. GAO-11-208. Washington, D.C.: March 2, 2011. Older Americans Act: More Should Be Done to Measure the Extent of Unmet Need for Services. GAO-11-237. Washington, D.C.: February 28, 2011.
As the percentage of older adults in the population increases, the number of older adults at risk of abuse also is growing. At the same time, constraints on public funds may limit assistance to the growing population of older adults in need. GAO was asked to review elder justice program issues. This report addresses: (1) the extent to which there is fragmentation, overlap, or duplication across the federal grant programs that support elder justice; (2) the extent to which federal programs coordinate their efforts and monitor elder justice outcomes; and (3) how state aging agencies, area agencies on aging, and service providers deliver federal elder justice services and what challenges, if any, they face in doing so. GAO reviewed relevant federal laws and regulations, identified federal elder justice programs, surveyed federal officials about program elements, reviewed program documentation, and visited agencies responsible for elder justice in Illinois, Virginia, and Arizona. GAO selected states based on the percentage of the elderly in the state population, geographic dispersion, and the percentage of the state's Older Americans Act funds devoted to elder care. In fiscal year 2011, two agencies--the Departments of Health and Human Services (HHS) and Justice (Justice)--separately administered 12 fragmented but minimally overlapping programs that directed funds toward elder justice, with low risk of duplication. Specifically, because more than one federal agency administers these programs, GAO found that these grant programs are fragmented. Further, GAO found that overlap across the 12 programs was minimal because the programs varied with respect to (1) funding mechanisms and recipients, (2) elder abuse victims targeted, (3) service providers, and (4) activities conducted. For example, a few of these programs provided formula grants to all states and most dispersed discretionary grants to a limited number of recipients. 
Programs that supported victims of elder abuse generally assisted all types of victims, but some also focused on certain subgroups, such as older women. Some programs that assisted service providers also targeted specific subgroups, such as judges and court personnel. In addition, elder justice programs supported a wide range of activities. For example, one HHS program provided public education to help identify and prevent elder abuse, while a Justice program trained law enforcement officers to investigate instances of elder abuse. Considering the variation across funding mechanisms and recipients, the elder abuse victims and service providers targeted by the grants, and the types of activities conducted, overlap across the 12 programs is minimal and the risk of duplication--when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries--is low. We have previously reported that coordination is key to ensuring the efficient use of limited resources to address issues that cut across more than one agency. While federal coordination is in development--for example, HHS, Justice, and other agencies recently formed the Elder Justice Coordinating Council--federal agencies have yet to articulate common objectives and outcomes as precursors to future measures for elder justice programs, which would provide a rationale for coordination. Further, few federal programs tracked elder justice outcomes in 2011 or conducted program evaluations to assess effectiveness, making it difficult to determine what impact, if any, many programs have on victims of elder abuse. Officials representing state aging agencies, area agencies on aging, and service providers in the three states GAO visited identified the increased demand for elder justice services in a constrained fiscal environment as a major challenge in meeting the needs of the growing older adult population. 
Officials also cited the need for greater public awareness of elder abuse and for training of the direct service providers who interact with older adults on a regular basis, to help prevent elder abuse or recognize its symptoms. Five of the nine regional agency officials GAO spoke with said elder justice issues need to be brought to the attention of the general public through a national public awareness campaign. The Elder Justice Coordinating Council is considering a recommendation to sponsor a national campaign but has not yet done so. GAO recommends that HHS take the lead in identifying common objectives and outcomes for the federal elder justice effort and that HHS and Justice develop a national elder justice public awareness campaign. HHS concurred, and Justice did not comment.
Our analysis of initial estimates of Recovery Act spending provided by the Congressional Budget Office (CBO) suggested that about $49 billion would be paid out to states and localities by the federal government in fiscal year 2009, which runs through September 30. However, our analysis of the latest information available on actual federal outlays reported on www.recovery.gov indicates that in the 4 months since enactment, the federal Treasury has paid out approximately $29 billion to states and localities, which is about 60 percent of the payments estimated for fiscal year 2009. Although this pattern may not continue for the remaining 3-1/2 months, at present spending is slightly ahead of estimates. More than 90 percent of the $29 billion in federal outlays has been provided through the increased Federal Medical Assistance Percentage (FMAP) grant awards and the State Fiscal Stabilization Fund administered by the Department of Education. Figure 1 shows the original estimate of federal outlays to states and localities under the Recovery Act compared with actual federal outlays as reported by federal agencies on www.recovery.gov. The 16 states and the District of Columbia covered by our review account for about two-thirds of the Recovery Act funding available to states and localities. According to the Office of Management and Budget (OMB), an estimated $149 billion in Recovery Act funding will be obligated to states and localities in fiscal year 2009. Our work for this bimonthly report focused on nine federal programs, selected primarily because they have begun disbursing funds to states and include programs with significant amounts of Recovery Act funds, programs receiving significant increases in funding, and new programs. Recovery Act funding of some of these programs is intended for further disbursement to localities. 
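The spending-pace comparison above is simple arithmetic; the short sketch below reproduces it. The dollar figures come from the text, and the calculation itself is ours, shown only as an illustration of how the "about 60 percent" figure is derived.

```python
# Spending-pace check for the figures quoted above: actual Recovery Act
# outlays to states and localities as a share of CBO's fiscal year 2009
# estimate. Dollar amounts are from the text; the arithmetic is illustrative.
cbo_fy2009_estimate = 49e9  # CBO-estimated FY2009 outlays to states and localities
actual_outlays = 29e9       # outlays reported on www.recovery.gov after 4 months

share = actual_outlays / cbo_fy2009_estimate
print(f"{share:.0%} of estimated fiscal year 2009 payments made to date")
```

Rounded to the nearest percent this is 59 percent, consistent with the report's "about 60 percent."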
Together, these nine programs are estimated to account for approximately 87 percent of federal Recovery Act outlays to states and localities in fiscal year 2009. Figure 2 shows the distribution by program of anticipated federal Recovery Act spending in fiscal year 2009 to states and localities. Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The rate at which states are reimbursed for Medicaid service expenditures is known as the FMAP, which may range from 50 percent to no more than 83 percent. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs, (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs, and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the funds that states would otherwise have to use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. 
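The three-part FMAP increase described above can be sketched as a small calculation. This is a simplified illustration, not the statutory formula: the hold-harmless provision, the 6.2-point across-the-board increase, and the unemployment-related increase are modeled as straightforward additions, and the example rates and the `unemployment_addon` value are hypothetical.

```python
def increased_fmap(current_fmap, prior_year_fmap, unemployment_addon=0.0):
    """Simplified sketch of a state's quarterly Recovery Act FMAP, in percent.

    current_fmap, prior_year_fmap: the state's regular FMAPs (percent).
    unemployment_addon: extra percentage points for a qualifying rise in
    unemployment (hypothetical value; the actual tiers are more involved).
    """
    # (1) hold harmless: the state keeps at least its prior-year FMAP
    base = max(current_fmap, prior_year_fmap)
    # (2) general across-the-board increase of 6.2 percentage points,
    # (3) plus any qualifying unemployment-related increase
    return base + 6.2 + unemployment_addon

# Hypothetical state whose regular FMAP fell from 61.0 to 60.0 percent and
# that qualifies for a 3.0-point unemployment-related increase:
rate = increased_fmap(60.0, 61.0, unemployment_addon=3.0)  # 70.2 percent
```

The sketch makes the interaction visible: a state whose regular FMAP declined still receives at least its prior-year rate plus 6.2 points, so the total increase over the original fiscal year 2009 level can exceed 6.2 points, as in the state-by-state range reported below.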
For the third quarter of fiscal year 2009, the increases in FMAP for the 16 states and the District of Columbia compared with the original fiscal year 2009 levels are estimated to range from 6.2 percentage points in Iowa to 12.24 percentage points in Florida, with the FMAP increase averaging almost 10 percentage points. When compared with the first two quarters of fiscal year 2009, the FMAP in the third quarter of fiscal year 2009 is estimated to have increased in 12 of the 16 states and the District. From October 2007 to May 2009, overall Medicaid enrollment in the 16 states and the District increased by 7 percent. In addition, each of the states and the District experienced an enrollment increase during this period, with most programs experiencing increases of 5 to 10 percent. However, the percentage increase in enrollment varied widely, ranging from just under 3 percent in California to nearly 20 percent in Colorado. (See fig. 3.) Overall enrollment growth was most rapid in early 2009—generally from January through April 2009—an enrollment trend that was mirrored in several states and the District; however, variation existed. For example, while Colorado and Mississippi experienced a nearly 5 percent increase in Medicaid enrollment during this time, Medicaid enrollment in Illinois remained relatively stable, growing at less than 1 percent. Most of the increase in overall enrollment was attributable to populations that are sensitive to economic downturns—primarily children and families. Nonetheless, enrollment growth in other population groups, such as disabled individuals, also contributed. With regard to the states’ receipt of the increased FMAP, all 16 states and the District had drawn down increased FMAP grant awards totaling just over $15.0 billion for the period of October 1, 2008, through June 29, 2009, which amounted to 86 percent of funds available. (See table 2.)
In addition, except for the initial weeks that increased FMAP funds were available, the weekly rate at which the sample states and the District have drawn down these funds has remained relatively constant. While the increased FMAP available under the Recovery Act is for state expenditures for Medicaid services, the receipt of these funds may reduce the state share for their Medicaid programs. As such, states reported that they are using or planning to use the funds freed up as a result of the increased FMAP for a variety of purposes. Most commonly, states reported that they are using or planning to use freed-up funds to cover their increased Medicaid caseload, to maintain current benefits and eligibility levels, and to help finance their respective state budgets. Several states noted that, given the poor economic climate in their respective states, these funds were critical in their efforts to maintain Medicaid coverage at current levels. For example, officials from Georgia, Michigan, and Pennsylvania reported that the increased FMAP funds have allowed their respective states to maintain their Medicaid programs, which could have been subject to cuts in eligibility or services without the increased funds. Additionally, Medicaid officials in five states and the District indicated that they used the funds made available as a result of the increased FMAP to maintain program expansions or local health care reform initiatives, which in some states would have otherwise been vulnerable to program cuts. Lastly, all but Texas and the District reported they are using or planning to use the freed-up funds to help finance their state budgets. Five states—Arizona, California, Colorado, North Carolina, and Ohio—reported using or planning to use these funds solely for this purpose.
For states to qualify for the increased FMAP available under the Recovery Act, they must meet a number of requirements, including the following:

- States generally may not apply eligibility standards, methodologies, or procedures that are more restrictive than those in effect under their state Medicaid programs on July 1, 2008.
- States must comply with prompt payment requirements.
- States cannot deposit or credit amounts attributable (either directly or indirectly) to certain elements of the increased FMAP into any reserve or rainy-day fund of the state.
- States with political subdivisions—such as cities and counties—that contribute to the nonfederal share of Medicaid spending cannot require the subdivisions to pay a greater percentage of the nonfederal share than would have been required on September 30, 2008.

Medicaid officials from many states and the District raised concerns about their ability to meet these requirements and, thus, maintain eligibility for the increased FMAP. While officials from several states spoke positively about CMS’s guidance related to FMAP requirements, at least nine states and the District reported they wanted CMS to provide additional guidance regarding (1) how they report daily compliance with prompt pay requirements, (2) how they report monthly on increased FMAP spending, and (3) whether certain programmatic changes would affect their eligibility for funds. For example, Medicaid officials from several states told us they were hesitant to implement minor programmatic changes, such as changes to prior authorization requirements, pregnancy verifications, or ongoing rate changes, out of concern that doing so would jeopardize their eligibility for the increased FMAP. In addition, at least three states raised concerns that glitches related to new or updated information systems used to generate provider payments could affect their eligibility for these funds.
Specifically, Massachusetts Medicaid officials said they are implementing a new provider payment system that will generate payments to some providers on a monthly versus daily basis and would like guidance from CMS on the availability of waivers for the prompt payment requirement. A CMS official told us that the agency is in the process of finalizing its guidance to states on reporting compliance with the prompt payment requirement of the Recovery Act, but did not know when this guidance would be publicly available. However, the official noted that, in the near term, the agency intends to issue a new Fact Sheet, which will include questions and answers on a variety of issues related to the increased FMAP. Due to the variability of state operations, funding processes, and political structures, CMS has worked with states on a case-by-case basis to discuss and resolve issues that arise. Specifically, communications between CMS and several states indicate efforts to clarify issues related to the contributions to the state share of Medicaid spending by political subdivisions or to rainy-day funds. For example, in a May 20, 2009, letter, CMS clarified that California would not fail to meet the provision of the Recovery Act related to contributions by political subdivisions if a county voluntarily used its funds to make up for a decrease in the amount the state appropriated for the Medicaid payment of wages of personal care service providers. Similarly, Mississippi clarified with CMS its understanding that it would not be permissible to deposit general fund savings resulting from the increased FMAP into the rainy-day fund in state fiscal year 2010 in order to use those funds in state fiscal year 2011. Regarding the tracking of the increased FMAP, most of the states and the District use existing processes to track the receipt of the increased FMAP separately from regular FMAP, and almost half of the states reported using existing processes to reconcile these expenditures. 
In addition, we reviewed the 2007 Single Audits for the states and the District and identified material weaknesses related to Medicaid, including weaknesses related to provider enrollment processes and subrecipient monitoring, for most of them. The Single Audits indicated that many states and the District planned or implemented actions to correct identified weaknesses. According to CMS officials, CMS regional offices work with states to address single audit findings related to Medicaid. The Recovery Act provides funding to the states for restoration, repair, and construction of highways and other activities allowed under the Federal-Aid Highway Surface Transportation Program and for other eligible surface transportation projects. The act requires that 30 percent of these funds be suballocated for projects in metropolitan and other areas of the state. Highway funds are apportioned to the states through federal-aid highway program mechanisms, and states must follow the requirements of the existing program, which include ensuring the project meets all environmental requirements associated with the National Environmental Policy Act (NEPA), paying a prevailing wage in accordance with federal Davis-Bacon requirements, complying with goals to ensure disadvantaged businesses are not discriminated against in the awarding of construction contracts, and using American-made iron and steel in accordance with Buy America program requirements. However, the maximum federal fund share of highway infrastructure investment projects under the Recovery Act is 100 percent, while the federal share under the existing federal-aid highway program is generally 80 percent. In March 2009, $26.7 billion was apportioned to all 50 states and the District of Columbia (District) for highway infrastructure and other eligible projects. 
As of June 25, 2009, $15.9 billion of the funds had been obligated for over 5,000 projects nationwide, and $9.2 billion had been obligated for nearly 2,600 projects in the 16 states and the District that are the focus of our review. Almost half of Recovery Act highway obligations have been for pavement improvements. Specifically, $7.8 billion of the $15.9 billion obligated nationwide as of June 25, 2009, is being used for projects such as reconstructing or rehabilitating deteriorated roads, including $3.6 billion for road resurfacing projects. Many state officials told us they selected a large percentage of resurfacing and other pavement improvement projects because they did not require extensive environmental clearances, were quick to design, could be quickly obligated and bid, could employ people quickly, and could be completed within 3 years. For example, Michigan began a $22 million project on Interstate 196 in Allegan County that involves resurfacing about seven miles of road. Michigan Department of Transportation officials told us they focused primarily on pavement improvements for Recovery Act projects because they could be obligated quickly and could be under construction quickly, thereby employing people this calendar year. Since many of the environmental clearances had been completed, Michigan could accelerate the construction of these projects when Recovery Act funds became available. Table 4 shows obligations by the types of road and bridge improvements being made. As table 4 shows, in addition to pavement improvements, $2.7 billion, or about 17 percent of Recovery Act funds nationally, has been obligated for pavement-widening projects. These projects provide for reconstructing and improving existing roads as well as increasing the capacity of the road to accommodate traffic, which can reduce congestion. 
In Florida, around 47 percent of Recovery Act funds were obligated for widening projects that increase capacity, while about 9 percent was obligated for pavement improvements such as resurfacing. As of June 25, 2009, around 10 percent of the funds apportioned nationwide had been obligated for the replacement, improvement, or rehabilitation of bridges. Funding for bridge rehabilitation and replacement has been a growing national concern since the I-35 bridge collapse in Minnesota in 2007. Eleven of the states we visited had less than 10 percent of their Recovery Act funds obligated for bridge replacement and rehabilitation, while two states—New York and Pennsylvania—and the District each had more than one-quarter of their funds obligated for bridge replacement and rehabilitation. In the District, about 36 percent of obligations are for rehabilitating bridges, including the District’s largest Recovery Act project—a bridge that has been identified as having potentially significant safety concerns. Around 2.6 percent of apportioned funds have been obligated for construction of new bridges. As of June 25, 2009, $233 million had been reimbursed nationwide by the Federal Highway Administration (FHWA) and $96.4 million had been reimbursed to the 16 states and the District. States are just beginning to get projects awarded so that contractors can begin work, and U.S. Department of Transportation officials told us that although funding has been obligated for more than 5,000 projects, it may be months before states can request reimbursement. Once contractors mobilize and begin work, states make payments to these contractors for completed work and may request reimbursement from FHWA. FHWA told us that once funds are obligated for a project, it may take 2 or more months for a state to bid and award the work to a contractor and have work begin.
According to FHWA, depending on the type of project, it can take days or years from the date of obligation for those funds to be reimbursed. For example, the North Carolina Department of Transportation (as of June 30, 2009) had advertised 65 contracts representing $335 million in Recovery Act funding. Of the 65 contracts, 55, representing $309 million, had been awarded; of these contracts, 33, representing $200 million, are under way. North Carolina had been reimbursed about $4 million of Recovery Act funding for projects as of June 25, 2009. Approximately 27 of the 65 projects, representing $70 million, are anticipated to be complete by December 1, 2009. According to state officials, because an increasing number of contractors are looking for work, bids for Recovery Act contracts have come in under estimates. State officials told us that bids for the first Recovery Act contracts were ranging from around 5 percent to 30 percent below the estimated cost. For example, in California, officials reported they have had 8 to 10 bidders for each contract bid, compared with 2 to 4 bids per contract prior to the economic downturn, and that bids are generally coming in 30 percent below estimates. Arizona officials told us that contractors are willing to bid for contracts with little profit margin in order to cover overhead and put people to work, while Mississippi officials told us that material costs had decreased. Several state officials told us they expect this trend to continue until the economy substantially improves and contractors begin taking on enough other work. Funds appropriated for highway infrastructure spending must be used as required by the Recovery Act. States are required to do the following: Ensure that 50 percent of apportioned Recovery Act funds are obligated within 120 days of apportionment (before June 30, 2009) and that the remaining apportioned funds are obligated within 1 year.
The 50 percent rule applies only to funds apportioned to the state and not to the 30 percent of funds required by the Recovery Act to be suballocated, primarily based on population, for metropolitan, regional, and local use. The Secretary of Transportation is to withdraw and redistribute to other states any amount that is not obligated within these time frames. Give priority to projects that can be completed within 3 years and to projects located in economically distressed areas (EDA). EDAs are defined by the Public Works and Economic Development Act of 1965, as amended. According to this act, to qualify as an EDA, an area must meet one or more of three criteria related to income and unemployment based on the most recent federal or state data. Certify that the state will maintain the level of spending it had planned, as of the day the Recovery Act was enacted, for the types of transportation projects funded by the Recovery Act. As part of this certification, the governor of each state is required to identify the amount of funds the state plans to expend from state sources from February 17, 2009, through September 30, 2010. All states have met the first Recovery Act requirement that 50 percent of their apportioned funds be obligated within 120 days. Of the $18.7 billion nationally that is subject to this provision, 69 percent was obligated as of June 25, 2009. The percentage of funds obligated nationwide and in each of the states included in our review is shown in figure 4. The second Recovery Act requirement is to give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. Officials from almost all of the states said they considered project readiness, including the 3-year completion requirement, when making project selections, and, according to officials from just fewer than half of the states, project readiness was the single most important consideration for selecting projects.
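The statutory EDA test referenced above can be illustrated with a short sketch. The numeric thresholds below (per capita income at or below 80 percent of the national average, or an unemployment rate at least 1 percentage point above the national average) reflect our reading of the Public Works and Economic Development Act's criteria, not figures stated in this report; the third criterion, a "special need" determination, rests with the Secretary of Commerce.

```python
def qualifies_as_eda(per_capita_income, national_per_capita_income,
                     unemployment_rate, national_unemployment_rate,
                     special_need_determination=False):
    """Illustrative sketch of the Public Works Act's EDA test.

    An area qualifies as economically distressed if it meets any one of
    three criteria. The thresholds are our assumptions based on the act,
    not values stated in this report. Rates are in percentage points.
    """
    # Criterion 1: per capita income of 80 percent or less of the national average.
    low_income = per_capita_income <= 0.80 * national_per_capita_income
    # Criterion 2: unemployment at least 1 percentage point above the national rate.
    high_unemployment = unemployment_rate >= national_unemployment_rate + 1.0
    # Criterion 3: a "special need" determined by the Secretary of Commerce.
    return low_income or high_unemployment or special_need_determination
```

Under this reading, a county whose economy has recently deteriorated but whose income and unemployment still sit near the national averages would not qualify absent a special-need determination, which is the gap the alternative state criteria discussed later in this section were intended to address.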
Officials from most states reported they expect all or most projects funded with Recovery Act funds to be completed within 3 years, with the exception of some larger or more complex projects that may take longer to complete. For example, Massachusetts chose to use Recovery Act funds to construct a new highway interchange in Fall River. Although this project will take longer than other projects to complete, Massachusetts officials said they selected it because it was located in the state’s only EDA. We found that, due to the need to select projects and obligate funds quickly, many states first selected projects based on other factors and only later identified to what extent these projects fulfilled the EDA requirement. According to the American Association of State Highway and Transportation Officials, by December 2008, 2 months before enactment of the Recovery Act, states had already identified more than 5,000 “ready-to-go” projects as possible selections for federal stimulus funding. Officials from several states also told us they had selected projects prior to the enactment of the Recovery Act and that they gave consideration to EDAs only after they received EDA guidance from DOT. For instance, officials in New York said that, in anticipation of the Recovery Act being enacted, the state initially selected projects that were ready to go and were distributed throughout the state, without regard to their location in EDAs. Since then, the state has emphasized the need to identify and fund projects in EDAs, pushing such projects to the “head of the line.” Officials in Pennsylvania said they selected projects before federal guidance was available and that, after reviewing project selections for compliance with the EDA requirement, they decided to make no changes because their choices provided the greatest potential to provide jobs in an expeditious manner. States also based project selection on priorities other than EDAs.
State officials we met with said they considered factors based on their own state priorities, such as geographic distribution and a project’s potential for job creation or other economic benefits. The use of state planning criteria or funding formulas to distribute federal and state highway funds was one factor that we found affected states’ implementation of the Recovery Act’s prioritization requirements. According to officials in North Carolina, for instance, the state used its statutory Equity Allocation Formula to determine how highway infrastructure investment funds would be distributed. Similarly, in Texas, state officials said they first selected highway preservation projects by allocating a specific amount of funding to each of the state’s 25 districts, where projects were identified that addressed the most pressing needs. Officials then gave priority for funding to those projects that were in EDAs. In commenting on a draft of this report, DOT agreed that states must give priority to projects located in EDAs but said that states must balance all the Recovery Act project selection criteria when selecting projects, including giving preference to activities that can be started and completed expeditiously, using funds in a manner that maximizes job creation and economic benefit, and other factors. DOT stated that the Recovery Act does not give EDA projects absolute primacy over projects not located in EDAs. However, we would note that the Recovery Act contains both general directives, such as using funds in a manner that maximizes job creation and economic benefit, and specific directives, which we believe must be seen as taking precedence. While we agree with DOT that there is no absolute primacy of EDA projects in the sense that they must always be started first, the specific directives in the act that apply to highway infrastructure are that priority is to be given to projects that can be completed within 3 years and to projects located in EDAs.
We also found some instances of states developing their own eligibility requirements using data or criteria not specified in the Public Works and Economic Development Act, as amended. According to the act, the Secretary of Commerce, not individual states, has the authority to determine the eligibility of an area that does not meet the first two criteria of the act. In each of these cases, FHWA approved the use of the states’ alternative criteria, but it is not clear on what authority FHWA approved these criteria. For example: Arizona based the identification of EDAs on home foreclosure rates and disadvantaged business enterprises—data not specified in the Public Works Act. Arizona officials said they used alternative criteria because the initial determination of economic distress based on the act’s criteria excluded three of Arizona’s largest and most populous counties, which also contain substantial areas that, according to state officials, are clearly economically distressed and include all or substantial portions of major Indian reservations and many towns and cities hit especially hard by the economic downturn. The state of Arizona, in consultation with FHWA, developed additional criteria that resulted in these three counties being classified as economically distressed. Illinois based EDA classification on increases in the number of unemployed persons and the unemployment rate, whereas the act bases this determination on how a county’s unemployment rate compares with the national average unemployment rate. According to FHWA, Illinois opted to explore other means of measuring recent economic distress because the initial determination of economic distress based on the act’s criteria was based on data not as current as information available within the state and did not appear to accurately reflect the recent economic downturn in the state. 
Using the criteria established by the Public Works Act, 30 of the 102 counties in Illinois were identified as not economically distressed. Illinois’s use of alternative criteria resulted in 21 counties being identified as EDAs that would not have been so classified following the act’s criteria. In commenting on a draft of this report, DOT stated that the basic approach used in Arizona and Illinois is consistent with the Public Works Act and its implementing regulations on EDAs because it makes use of flexibilities provided by the act to more accurately reflect changing economic conditions. DOT recognizes that the Public Works Act provides the definition of EDAs that states are to follow. DOT believes, however, that it is appropriate to interpret the requirements of the Public Works Act flexibly by applying the EDA special needs criteria to areas that are experiencing unemployment or economic adjustment problems. We recognize that states may want to reflect their own particular circumstances in defining EDAs. However, the Public Works Act states that to apply the definition to a special needs area, the area must be one “that the Secretary of Commerce determines has experienced or is about to experience a special need arising from actual or threatened severe unemployment or economic adjustment problems . . .” The result of DOT’s interpretation would be to allow states to prioritize projects based on criteria that are not mentioned in the highway infrastructure investment portion of the Recovery Act or in the Public Works Act, without the involvement of the Secretary or Department of Commerce. We plan to continue to monitor states’ implementation of the EDA requirements and interagency coordination at the federal level in future reports. Some states’ circumstances served to largely ensure compliance with the EDA requirement.
For instance, all areas within the District of Columbia, which the Recovery Act treats as a state, are a single EDA, assuring that the selection of any project that can be completed within 3 years satisfies the statutory priority rules. Mississippi has 75 of 82 counties that qualify as EDAs, and Mississippi reported to FHWA that 87 percent of the funds obligated to date had been obligated for projects located in areas classified as economically distressed. Likewise, in Ohio, where 90 percent of all counties qualify as EDAs, a substantial number of Recovery Act highway projects are located in EDAs. DOT and FHWA have yet to provide clear guidance regarding how states are to implement the EDA requirement. In February 2009, FHWA published replies to questions from state transportation departments on its Recovery Act Web site stating that because states have the authority to prioritize and select federal-aid projects, it did not intend to develop or prescribe a uniform procedure for applying the Recovery Act’s priority rules. Nonetheless, FHWA provided a tool to help states identify whether projects were located in EDAs. Further, in March 2009, FHWA provided guidance to its division offices stating that FHWA would support the use of “whatever current, defensible, and reliable information is available to make the case that [the state] has made a good faith effort to consider EDAs” and directed its division offices to take appropriate action to ensure that the states gave adequate consideration to EDAs. FHWA officials we spoke with said they discussed the priority requirements with states and that the requirements were taken into consideration when approving projects. They also stated that whether a state has satisfied the EDA priority requirement will not be finally determined until the funds apportioned to the state under the Recovery Act are all obligated, which may not be completed until 2010.
According to FHWA, the states have until then to address future compliance with the EDA priority requirement. By 2010, however, it will be too late to take corrective action. In each of the cases where a state used its own criteria, state officials told us they did so with the approval of the FHWA division office in that state. Without clearer guidance to the states, it will be difficult to ensure that the act’s priority provision is applied consistently. Finally, the states are required to certify that they will maintain the level of state effort for programs covered by the Recovery Act. With one exception, the states have completed these certifications, but they face challenges. Maintaining a state’s level of effort can be particularly important in the highway program. We have found that the preponderance of evidence suggests that increasing federal highway funds influences states and localities to substitute federal funds for funds they otherwise would have spent on highways. In 2004, we estimated that during the 1983 through 2000 period, states used roughly half of the increases in federal highway funds to substitute for funding they would otherwise have spent from their own resources and that the rate of substitution increased during the 1990s. The federal-aid highway program creates the opportunity for substitution because states typically spend substantially more than the amount required to meet federal matching requirements. As a consequence, when federal funding increases, states are able to reduce their own highway spending and still obtain increased federal funds. As we previously reported, substitution makes it difficult to target an economic stimulus package so that it results in a dollar-for-dollar increase in infrastructure investment. Most states revised the initial certifications they submitted to DOT.
As we reported in April, many states submitted explanatory certifications—such as stating that the certification was based on the “best information available at the time”—or conditional certifications, meaning that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. The legal effect of such qualifications was being examined by DOT when we completed our review. On April 22, 2009, the Secretary of Transportation sent a letter to each of the nation’s governors and provided additional guidance, including that conditional and explanatory certifications were not permitted, and gave states the option of amending their certifications by May 22. Each of the 16 states and the District selected for our review resubmitted their certifications. According to DOT officials, the department has concluded that the form of each certification is consistent with the additional guidance, with the exception of Texas. Texas submitted an amended certification on May 27, 2009, which contained qualifying language explaining that the Governor could not certify any expenditure of funds until the legislature passed an appropriation act. According to DOT officials, as of June 25, 2009, the status of Texas’ revised certification remains unresolved. Texas officials told us the state plans to submit a revised certification letter, removing the qualifying language. For the remaining states, while DOT has concluded that the form of the revised certifications is consistent with the additional guidance, it is currently evaluating whether the states’ method of calculating the amounts they planned to expend for the covered programs is in compliance with DOT guidance. States face severe fiscal challenges, and most states project that their fiscal year 2009 and 2010 revenue collections will fall well below original estimates.
In the face of these challenges, some states told us that meeting the maintenance-of-effort requirements over time poses significant challenges. For example, federal and state transportation officials in Illinois told us that to meet its maintenance-of-effort requirements in the face of lower-than-expected fuel tax receipts, the state would have to use general fund or other revenues to cover any shortfall in the level of effort stated in its certification. Mississippi transportation officials are concerned about the possibility of statewide, across-the-board spending cuts in 2010. According to the Mississippi transportation department’s budget director, the agency will try to absorb any budget reductions in 2010 by reducing administrative expenses to maintain the state’s level of effort. Other states have faced challenges calculating an appropriate level of effort. For example, Georgia officials told us the state does not routinely estimate future expenditures and had to develop an alternative method for its revised certification using past expenditures to extrapolate future expenditures. In Pennsylvania, transportation officials told us that calculating the amounts for the amended certification involved making estimates over three state fiscal years and making assumptions about proposed budgets that are subject to future legislative action. As discussed earlier, states using Recovery Act funds must comply with the requirements of the federal-aid highway program, including environmental requirements, paying a prevailing wage in accordance with federal Davis-Bacon requirements, complying with goals to ensure disadvantaged business enterprises are not discriminated against in awarding construction contracts, and using American-made iron and steel in accordance with Buy America program requirements. We discussed the impact these requirements were having on project costs and time frames with officials in three states. 
Transportation officials in Arizona, Mississippi, and New Jersey each reported that these requirements were not causing increases in project costs and were not delaying projects from moving forward. For example, New Jersey officials stated that since these requirements apply to all highway construction using federal highway funds, not solely to Recovery Act funding, they were accustomed to complying with these requirements and had a process in place for quickly documenting compliance. In addition, these officials stated that to meet the Recovery Act’s requirements to spend the funds quickly, the state selected projects that had already completed the environmental review process or that were relatively simple projects that would have limited environmental impact. The Recovery Act created a State Fiscal Stabilization Fund (SFSF) in part to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. Stabilization funds for education distributed under the Recovery Act must be used to alleviate shortfalls in state support for education to school districts and public institutions of higher education (IHEs). The U.S. Department of Education (Education), the federal agency charged with administration and oversight of the SFSF, distributes the funds on a formula basis, with 81.8 percent of each state’s allocation designated for the education stabilization fund for local educational agencies (LEA) and public IHEs. The remaining 18.2 percent of each state’s allocation is designated for the government services fund for public safety and other government services, which may include education. Consistent with the purposes of the Recovery Act—which include, in addition to stabilizing state and local budgets, promoting economic recovery and preserving and creating jobs—the SFSF can be used by states to restore cuts to state education spending. 
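The 81.8/18.2 percent formula split described above can be sketched in a few lines of Python; the $1 billion award used here is a hypothetical figure for illustration, not any state's actual allocation.

```python
# Sketch of the SFSF formula split described above: 81.8 percent of a state's
# allocation goes to the education stabilization fund (for LEAs and public
# IHEs) and the remaining 18.2 percent to the government services fund.
# The example allocation amount below is hypothetical.

def split_sfsf_allocation(total_allocation):
    """Return (education_stabilization, government_services) dollar shares."""
    education = round(total_allocation * 0.818, 2)
    government_services = round(total_allocation - education, 2)
    return education, government_services

# Hypothetical $1 billion state allocation
edu, services = split_sfsf_allocation(1_000_000_000)
```

Computing the government services share as the remainder, rather than multiplying by 0.182 separately, guarantees the two pieces sum exactly to the original allocation even after rounding.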
In return for SFSF funding, a state must make several assurances, including that it will maintain state support for education at least at fiscal year 2006 levels. In order to receive SFSF funds, each state must also assure it will implement strategies to advance education reform in four specific ways as described by Education:

1. Increase teacher effectiveness and address inequities in the distribution of highly qualified teachers;
2. Establish a pre-K-through-college data system to track student progress and foster improvement;
3. Make progress toward rigorous college- and career-ready standards and high-quality assessments that are valid and reliable for all students, including students with limited English proficiency and students with disabilities; and
4. Provide targeted, intensive support and effective interventions to turn around schools identified for corrective action or restructuring.

Along with these education reform assurances, additional state assurances must address federal requirements concerning accountability, transparency, reporting, and compliance with certain federal laws and regulations. Beginning in March 2009, the Department of Education issued a series of fact sheets, letters, and other guidance to states on the SFSF. 
Specifically, a March fact sheet, the Secretary’s April letter to Governors, and program guidance issued in April and May mention that the purposes of the SFSF include helping stabilize state and local budgets, avoiding reductions in education and other essential services, and ensuring LEAs and public IHEs have resources to “avert cuts and retain teachers and professors.” The documents also link educational progress to economic recovery and growth and identify four principles to guide the distribution and use of Recovery Act funds: (1) spend funds quickly to retain and create jobs; (2) improve student achievement through school improvement and reform; (3) ensure transparency, public reporting, and accountability; and (4) invest one-time Recovery Act funds thoughtfully to avoid unsustainable continuing commitments after the funding expires, known as the “funding cliff.” After meeting assurances to maintain state support for education at least at fiscal year 2006 levels, states are required to use the education stabilization fund to restore state support to the greater of fiscal year 2008 or 2009 levels for elementary and secondary education, public IHEs, and, if applicable, early childhood education programs. States must distribute these funds to school districts using the primary state education formula but maintain discretion in how funds are allocated to public IHEs. If, after restoring state support for education, additional funds remain, the state must allocate those funds to school districts according to the Elementary and Secondary Education Act of 1965 (ESEA), Title I, Part A funding formula. On the other hand, if a state’s education stabilization fund allocation is insufficient to restore state support for education, then a state must allocate funds in proportion to the relative shortfall in state support to public school districts and public IHEs. 
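The restoration rules just described amount to a simple allocation procedure. The following is a minimal sketch under stated assumptions: all dollar figures are hypothetical, and the Title I, Part A distribution of any leftover is not modeled.

```python
# Minimal sketch of the education stabilization allocation rules described
# above. A sector's "shortfall" is the gap between planned state support and
# the greater of the fiscal year 2008 or 2009 level. If the state's
# allocation covers both sectors' shortfalls, the leftover is distributed to
# LEAs under the Title I, Part A formula (not modeled here); otherwise funds
# are divided in proportion to each sector's relative shortfall.

def shortfall(planned_support, fy2008_level, fy2009_level):
    """Amount needed to restore support to the greater of FY2008/FY2009."""
    return max(0.0, max(fy2008_level, fy2009_level) - planned_support)

def allocate_stabilization(allocation, k12_shortfall, ihe_shortfall):
    total_shortfall = k12_shortfall + ihe_shortfall
    if allocation >= total_shortfall:
        # Fully restore both sectors; the remainder flows to LEAs
        # via the Title I, Part A formula.
        return {"k12": k12_shortfall, "ihe": ihe_shortfall,
                "title_i_leftover": allocation - total_shortfall}
    # Insufficient allocation: split in proportion to relative shortfalls.
    k12_share = allocation * k12_shortfall / total_shortfall
    return {"k12": k12_share, "ihe": allocation - k12_share,
            "title_i_leftover": 0.0}

# Hypothetical example: a 90.0 total shortfall but only 60.0 available,
# so funds are split 2:1 between K-12 and IHEs.
result = allocate_stabilization(60.0, k12_shortfall=60.0, ihe_shortfall=30.0)
```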
Education stabilization funds must be allocated to school districts and public IHEs and cannot be retained at the state level. Once education stabilization funds are awarded to school districts and public IHEs, they have considerable flexibility over how they use those funds. School districts are allowed to use education stabilization funds for any allowable purpose under ESEA, the Individuals with Disabilities Education Act (IDEA), the Adult Education and Family Literacy Act, or the Carl D. Perkins Career and Technical Education Act of 2006 (Perkins Act), subject to some prohibitions on using funds for, among other things, sports facilities and vehicles. In particular, Education’s guidance states that because allowable uses under the Impact Aid provisions of ESEA are broad, school districts have discretion to use education stabilization funds for a broad range of things, such as salaries of teachers, administrators, and support staff and purchases of textbooks, computers, and other equipment. The Recovery Act allows public IHEs to use education stabilization funds in such a way as to mitigate the need to raise tuition and fees, as well as for the modernization, renovation, and repair of facilities, subject to certain limitations. However, the Recovery Act prohibits public IHEs from using education stabilization funds for such things as increasing endowments; modernizing, renovating, or repairing sports facilities; or maintaining equipment. Education’s SFSF guidance expressly prohibits states from placing restrictions on LEAs’ use of education stabilization funds, beyond those in the law, but allows states some discretion in placing limits on how IHEs may use these funds. The SFSF provides states and school districts with additional flexibility, subject to certain conditions, to help them address fiscal challenges. 
For example, the Secretary of Education is granted authority to permit waivers of state maintenance-of-effort (MOE) requirements if a state certifies that state education spending will not decrease as a percentage of total state revenues. Education issued guidance on the MOE requirement, including the waiver provision, on May 1, 2009. Also, the Secretary may permit a state or school district to treat education stabilization funds as nonfederal funds for the purpose of meeting MOE requirements for any program administered by Education, subject to certain conditions. Education, as of June 29, 2009, had not provided specific guidance on the process for states and school districts to apply for the Secretary’s approval. States have broad discretion over how the $8.8 billion in the SFSF government services fund is used. The Recovery Act provides that these funds must be used for public safety and other government services and that these services may include assistance for education, as well as modernization, renovation, and repairs of public schools or IHEs. On April 1, 2009, Education made at least 67 percent of each state’s SFSF funds available, subject to the receipt of an application containing state assurances, information on state levels of support for education and estimates of restoration amounts, and baseline data demonstrating state status on each of the four education reform assurances. If a state could not certify that it would meet the MOE requirement, Education required it to certify that it will meet requirements for receiving a waiver—that is, that education spending would not decrease relative to total state revenues. In determining state level of support for elementary and secondary education, Education required states to use their primary formula for distributing funds to school districts but also allowed states some flexibility in broadening this definition. 
For IHEs, states have some discretion in how they establish the state level of support, with the provision that they cannot include support for capital projects, research and development, or amounts paid in tuition and fees by students. In order to meet statutory requirements for states to establish their current status regarding each of the four required programmatic assurances, Education provided each state with the option of using baseline data Education had identified or providing another source of baseline data. Some of the data provided by Education were derived from self-reported data submitted annually by the states to Education as part of their Consolidated State Performance Reports (CSPR), but Education also relied on data from third parties, including the Data Quality Campaign (DQC), the National Center for Educational Achievement (NCEA), and Achieve. Education has reviewed applications for completeness as they arrive and has awarded states their funds once it determined all assurances and required information had been submitted. Education set the application deadline for July 1, 2009. On June 24, 2009, Education issued guidance to states informing them they must amend their applications if there are changes to the reported levels of state support that were used to determine maintenance of effort or to calculate restoration amounts. As of June 30, 2009, of the 16 states and the District of Columbia covered by our review, only Texas had not submitted an SFSF application. Pennsylvania had recently submitted an application but had not yet received funding. The remaining 14 states and the District of Columbia had submitted applications, and Education had made available to them a total of about $17 billion in initial funding. As of June 26, 2009, only 5 of these states had drawn down SFSF Recovery Act funds. In total, about 25 percent of allocated funds had been drawn down by these states. (See table 5.) 
Three of these states—Florida, Massachusetts, and New Jersey—said they would not meet the maintenance-of-effort requirements but would meet the eligibility requirements for a waiver and that they would apply for a waiver. Most of the states’ applications show that they plan to provide the majority of education stabilization funds to LEAs, with the remainder of funds going to IHEs. Several states and the District of Columbia estimated in their application that they would have funds remaining beyond those that would be used to restore education spending in fiscal years 2009 and 2010. These funds can be used to restore education spending in fiscal year 2011, with any amount left over to be distributed to LEAs. Table 6 shows the amount of SFSF funds received by states and how the states indicate they will divide education stabilization funds between LEAs and IHEs, based on the states’ SFSF applications. States have flexibility in how they allocate education stabilization funds among IHEs but, once they establish their state funding formula, not in how they allocate the funds among LEAs. Florida and Mississippi allocated funds among their IHEs, including universities and community colleges, using formulas based on factors such as enrollment levels. Other states allocated SFSF funds taking into consideration the budget conditions of the IHEs. For example, Georgia allocated funds to universities based on the degree to which each institution’s budget had been cut, and Illinois allocated funds among universities to provide each university a share of SFSF funds proportionate to its share of state support in fiscal year 2006. New York provided all SFSF funds slated for IHEs to community colleges to avoid cutting community college budgets. On the other hand, California planned to provide SFSF funds to its state university systems and not to community colleges because the universities had received significant budget cuts. 
However, California may change this plan because budget cuts at community colleges are now likely. Regarding LEAs, most states planned to allocate funds based on states’ primary funding formulae. Many states are using a state formula based on student enrollment weighted by characteristics of students and LEAs. For example, Colorado’s formula accounts for the number of students at risk, while the formula used by the District of Columbia allocates funds to LEAs using weights for each student based on the relative cost of educating students with specific characteristics. For example, an official from Washington, D.C., Public Schools said a student who is an English language learner may cost more to educate than a similar student who is fluent in English. States may use the government services portion of SFSF for education but have discretion to use the funds for a variety of purposes. Officials from Florida, Illinois, New Jersey, and New York reported that their states plan to use some or most of their government services funds for educational purposes. Other states are applying the funds to public safety. For example, according to state officials, California is using the government services fund for its corrections system, and Georgia will use the funds for salaries of state troopers and staff of forensic laboratories and state prisons. Officials in many school districts told us that SFSF funds would help offset state budget cuts and would be used to maintain current levels of education funding. However, many school district officials also reported that using SFSF funds for education reforms was challenging given other, more pressing fiscal needs. Although their plans are generally not finalized, officials in many school districts we visited reported that their districts are preparing to use SFSF funds to prevent teacher layoffs, hire new teachers, and provide professional development programs. 
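A weighted per-pupil formula of the kind described above might be sketched as follows. The base amount, weight values, and student categories here are invented for illustration and do not reflect any state's actual formula.

```python
# Hypothetical sketch of a weighted student funding formula: each student
# generates a base dollar amount multiplied by a weight reflecting the
# relative cost of educating students with specific characteristics.
# All numbers and categories below are invented for illustration.

BASE_PER_PUPIL = 9_000  # hypothetical base dollar amount per student
WEIGHTS = {             # hypothetical additional weights per characteristic
    "english_language_learner": 0.40,
    "at_risk": 0.25,
    "special_education": 0.90,
}

def weighted_allocation(students):
    """students: one set of characteristics per student; returns total dollars."""
    total = 0.0
    for characteristics in students:
        weight = 1.0 + sum(WEIGHTS[c] for c in characteristics)
        total += BASE_PER_PUPIL * weight
    return total

# A district with one general-education student (weight 1.0) and one English
# language learner (weight 1.4) generates about $9,000 + $12,600 = $21,600.
total = weighted_allocation([set(), {"english_language_learner"}])
```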
Most school districts will use the funding to help retain jobs that would have been cut without SFSF funding. For example, Miami-Dade officials estimate that the stabilization funds will help them save nearly 2,000 teaching positions. State and school district officials in eight states we visited (California, Colorado, Florida, Georgia, Massachusetts, Michigan, New York, and North Carolina) also reported that SFSF funding will allow their state to retain positions, including teaching positions that would have been eliminated without the funding. In the Richmond County School System in Georgia, officials noted they plan to retain positions that support the system’s schools, such as teachers, paraprofessionals, nurses, media specialists, and guidance counselors. Local officials in Mississippi reported that budget-related hiring freezes had hindered their ability to hire new staff, but because of SFSF funding, they now plan to hire. In addition, local officials in a few states told us they plan to use the funding to support teachers. For example, officials in Waterloo Community and Ottumwa Community School Districts in Iowa, as well as officials from Miami-Dade County in Florida, cited professional development as a potential use of funding to support teachers. Although school districts are preventing layoffs and continuing to provide educational services with the SFSF funding, most did not indicate they would use these funds to pursue educational reform. School district officials cited a number of barriers, which include budget shortfalls, lack of guidance from states, and insufficient planning time. In addition to retaining and creating jobs, school districts have considerable flexibility to use these resources over the next 2 years to advance reforms that could have long-term impact. However, a few school district officials reported that addressing reform efforts was beyond their capacity when faced with teacher layoffs and deep budget cuts. 
In Flint, Michigan, officials reported that SFSF funds will be used to cope with budget deficits rather than to advance programs such as early childhood education or to repair public school facilities. According to the Superintendent of Flint Community Schools, the infrastructure in Flint is deteriorating, and no new school buildings have been built in over 30 years. Flint officials said they would like to use SFSF funds for renovating buildings and other programs, but the SFSF funds are needed to maintain current education programs. Officials in many school districts we visited reported having inadequate guidance from their state on using SFSF funding, making reform efforts more difficult to pursue. School district officials in most states we visited reported they lacked adequate guidance from their state to plan and report on the use of SFSF funding. Without adequate guidance and time for planning, school district officials told us that preparing for the funds was difficult. At the time of our visits, several school districts were unaware of their funding amounts, which, officials in two school districts said, created additional challenges in planning for the 2009-2010 school year. One charter school we visited in North Carolina reported that layoffs will be required unless the state soon notifies it of how much SFSF funding it will receive. State officials in North Carolina, as well as in several other states, told us they are waiting for the state legislature to pass the state budget before finalizing SFSF funding amounts for school districts. Although many IHEs had not finalized plans for using SFSF funds, the most common expected use for the funds at the IHEs we visited was to pay salaries of IHE faculty and staff. Officials at most of the IHEs we visited told us that, due to budget cuts, their institutions would have faced difficult reductions in faculty and staff if they were not receiving SFSF funds. 
In California and North Carolina, according to the IHE officials, the states instructed their IHEs to use the funds to cover IHE payroll expenses in certain months in spring 2009. Other IHEs expected to use SFSF funds in the future to pay salaries of certain employees during the year. For example, according to an official at Hillsborough Community College in Florida, to avoid using the nonrecurring SFSF money for recurring expenses, the IHE expects to use the funds to pay salaries of about 400 nonpermanent adjunct faculty members. Georgia Perimeter College plans to use its SFSF funds to retain 51 full-time and 17 part-time positions in its science department, and the University of Georgia plans to use the funds to retain approximately 160 full-time positions in various departments. Several IHEs we visited are considering other uses for SFSF funds. Officials at the Borough of Manhattan Community College in New York City want to use some of their SFSF funds to buy energy-saving light bulbs and to make improvements in the college’s very limited space, such as by creating tutoring areas and study lounges. Northwest Mississippi Community College wants to use some of the funds to increase e-learning capacity to serve the institution’s rapidly increasing number of students. Several other IHEs plan to use some of the SFSF funds for student financial aid. For example, Hudson Valley Community College plans to use some SFSF funds to provide financial aid to 500 or more low-income students who do not qualify for federal Pell Grants or New York’s Tuition Assistance Program. Because many IHEs expect to use SFSF funds to pay salaries of current employees that they likely would not have been able to pay without the SFSF funds, IHE officials said that SFSF funds will save jobs. 
Officials at several IHEs noted that this will have a positive impact on the educational environment, such as by preventing increases in class size and enabling the institutions to offer the classes that students need to graduate. In addition to preserving existing jobs, some IHEs anticipate creating jobs with SFSF funds. For example, New York IHEs we spoke with plan to use SFSF funds to hire additional staff and faculty. The University of South Florida is considering using some SFSF money to hire postdoctoral fellows to conduct scientific research, and Florida A&M University plans to use the funds to hire students for assistantships. Besides saving and creating jobs at IHEs, officials noted that SFSF monies will have an indirect impact on jobs in the community. For example, University of Mississippi officials noted that, without the SFSF funds, the university probably would have shut down ongoing capital projects building dormitories and upgrading campus heating and cooling systems, and this would have had a negative impact on construction and engineering jobs in the community. Jackson State University officials said SFSF monies will help local contractors and vendors who conduct business with the university because the funds will enable the university to recover from severe budget cuts and resume normal spending. IHE officials also noted that SFSF funds will indirectly improve employment because some faculty being paid with the funds will help unemployed workers develop new skills, including skills in fields, such as health care, that have a high demand for trained workers. 
Officials estimated that without SFSF funds, the increase in tuition necessary to compensate for decreases in state funding would have been 21 percent for students at community colleges and 35 percent for students at universities. A University of California official stated that, if the university system had not received SFSF funds and had to use fee increases to cover its budget shortfall, system-wide fees would have increased by about 24 percent instead of the approved 9.3 percent increase. U.S. Department of Education officials told us that to benchmark states’ current position on the four education reform assurances and to ease the application process, they had provided baseline data for each state and asked states to certify their acceptance of these data as part of their application for SFSF funding, or provide alternate data. In their applications to Education for SFSF funds, states were required to provide assurances that they were committed to advancing education reform in these four areas. The table below lists the four assurances and the data elements and sources Education chose to set baseline benchmarks for states. Education officials told us that these data, while not perfect, were the best available. Officials also told us that the data in the application package were preliminary, and that they plan to develop a more complete set of performance measures under each assurance for states to use or develop for the final SFSF application. While Education officials told us that the baseline data are preliminary, staff working at Achieve and the Data Quality Campaign—the two educational advocacy groups whose survey data are being used to measure two of the assurances—told us that while they believed their data provided appropriate baselines, they did not believe measuring change against these baselines would be the best accountability mechanism. 
One staff member said that since many states were already poised to make substantial progress in implementing improved data systems in the next two years, it would not be appropriate to automatically attribute state progress in implementing the elements of a longitudinal data system to Recovery Act funds. Staff at the Data Quality Campaign said that they have told Education that it was fine to use their survey as a baseline, but that they were not comfortable with the survey becoming a primary auditing tool; doing so could change the incentives for states to respond to the survey. Moreover, staff at the Data Quality Campaign believe the more appropriate way to monitor progress is to ask states to publicly post information and analyses on a series of metrics, because by posting such information states would be verifying the capacity of their longitudinal data systems. Education officials told us that in making phase II SFSF funding available to states, Education will ask states to report on a series of performance measures for each of the four major themes for reform, which align with the education reform assurances. According to these officials, the performance measures developed for the second and final application will allow Education to fulfill three main purposes: (1) to get a status report on states’ progress in developing performance measures, (2) to put plans in place to gather the relevant information if performance measures are not available, and (3) to be able to track how states are progressing over time with respect to education reform. Education officials also said that they were aware of potential issues regarding data quality and that they plan to conduct an initial staff review and may later conduct an external review of the reliability of data used for its performance measures. 
The Recovery Act provides $10 billion to help local educational agencies educate disadvantaged youth by making additional funds available beyond those regularly allocated through Title I, Part A of the Elementary and Secondary Education Act (ESEA) of 1965. The Recovery Act requires these additional funds to be distributed through states to local educational agencies (LEAs) using existing federal funding formulas, which target funds based on such factors as high concentrations of students from families living in poverty. In using the funds, local educational agencies are required to comply with current statutory and regulatory requirements and must obligate 85 percent of these funds by September 30, 2010. The Department of Education is advising LEAs to use the funds in ways that will build the agencies’ long-term capacity to serve disadvantaged youth, such as through providing professional development to teachers. The Department of Education made the first half of states’ Recovery Act Title I, Part A funding available on April 1, 2009, with the 16 states and the District in our review receiving more than $3 billion of the $5 billion released to all of the states and territories. The initial state allocations and amounts drawn down as of June 26, 2009, are shown in table 8 below. As shown in table 8, as of June 26, education officials in seven states—Arizona, California, Florida, Illinois, Iowa, North Carolina, and Texas—had drawn down a portion of their Title I Recovery Act funds. As of June 26, Arizona had drawn down $16,000 in Title I Recovery Act funds. California authorized the funds to be released to LEAs on May 28, 2009, and has drawn down 80 percent of its available funds. According to local officials, both of the LEAs we visited in California received funds the week of June 1, 2009. According to U.S. Department of Education officials, they monitor state drawdowns of Recovery Act funds and will meet with state officials if they notice anything unusual. 
As a result of California’s large drawdown, Education officials met with California state officials to discuss their justification, especially given recent findings by the department’s Inspector General (IG) that the state lacked adequate oversight over cash management practices of school districts. According to department officials, California officials informed the department that the drawdown of Title I Recovery Act funds was in lieu of its normally scheduled drawdown of school year 2008-2009 Title I funds. As a result, officials told us the school districts were ready to use these funds quickly as they would be used under approved plans for the current school year. However, the department remains concerned over the state’s cash management system. Further, the California State Auditor has cited continued concerns about the California Department of Education’s (CDE) internal controls in both the most recent statewide Single Audit issued on May 27, 2009, and a Recovery Act funding review issued on June 24, 2009. The Single Audit identified a number of significant deficiencies or material weaknesses, including continued problems with CDE ESEA Title I cash management—specifically, that CDE routinely disburses Title I funds to districts without determining whether the LEAs need program cash at the time of the disbursement. According to California officials, the California Department of Education has developed an improvement plan to address cash management concerns. It involves LEAs reporting federal cash balances on a quarterly basis using a Web-based reporting system. According to Education officials, the first phase of this plan will be piloted beginning this summer. CDE officials stated that the pilot project includes cash management fiscal monitoring procedures to verify LEAs’ reported cash balances, ensure compliance with cash management practices, and ensure that interest earned on federal dollars is properly accounted for. 
Education officials told us that, given the cash management concerns, they would work with the California State Auditor and the Education Inspector General to develop a monitoring and assistance plan to ensure that California properly followed cash management requirements. According to state education officials, Illinois allowed districts to complete an application due May 29 to receive funds for summer programming use and has started to draw down funds. State officials told us that on June 2, Iowa made the first of six payments of Title I Recovery funds available to LEAs. Florida allowed LEAs to begin obligating and spending funds in late April or early May, according to a state official. In North Carolina, a state official told us that Recovery Act Title I funds have been available since May 4 for all LEAs with a current Title I application on file and that as of June 19, 31 LEAs had submitted planning budgets to the state’s Department of Public Instruction and the budgets have been approved; these LEAs, in turn, can now obligate and spend funds. As of June 26, Texas had drawn down $58,060 in Title I Recovery Act funds. Officials in Colorado and New Jersey were planning to release some Title I Recovery Act funds to a small number of their districts in June to allow them to fund summer programming and to release the rest of their funds later in the summer. In the remaining states we visited, funds will not be released to LEAs until July, August, or September. Officials in the District of Columbia, Massachusetts, Michigan, and New York said they expected to release funds to LEAs in July. Nearly all of the 16 states and the District of Columbia have required (or will require) LEAs to submit an application, a budget, or a detailed plan as a condition for receiving Recovery Act funding, but the amount of time needed to complete these processes has varied. 
For example, in Florida, the State Educational Agency made available an online, abbreviated application to receive funds on April 9, 2009, according to a state official. The application asked LEAs to describe how they planned to spend the funds, submit a budget, and make assurances specific to Title I. The state sent award notices to LEAs the last week of April and the first week of May 2009, allowing LEAs to begin obligating and expending funds, according to a state official. In contrast, when we spoke with Mississippi educational officials in early June, the state was still in the process of developing a new application for Title I Recovery Act funds. Mississippi planned to release the application within several weeks, provide LEAs with training and a handbook on the application, and hoped to release funds to LEAs by August 2009. Similarly, New York plans to require school districts to agree to a number of assurances regarding the use of the Title I Recovery Act funds before funds are disbursed; however, the application was in draft form as of June 17, 2009, according to a state official. Three of the states we visited (Colorado, Illinois, and New Jersey) issued early applications inviting districts to apply to receive Recovery Act funding for school year 2008-2009, such as to fund summer school programs. Other states have tied the release of funds to their annual application for regular Title I funding. For example, Georgia added seven additional questions to its consolidated application and expects to release funds on a rolling basis once LEA applications and budgets have been approved. According to officials in three of the states we visited, the state budget process is slowing the release of funds and the ability of local and state educational agencies to finalize their plans for using Title I Recovery Act funds. 
For example, in Pennsylvania, funds have been allocated and obligated but cannot be expended until the legislature passes a budget, according to state officials. Similarly, in Ohio, a state official told us that LEAs cannot yet spend their allocated funds because state law requires the state legislature to pass a final budget before federal funds are made available for use by state and local agencies. Education officials in Chicago told us that because the General Assembly had not yet finalized the state budget, they do not know exactly how much state funding they will receive in fiscal year 2010 and have not been able to make final decisions as to how they will spend Recovery Act Title I funds. As shown in figure 5 below, local officials most frequently reported planning to use their Title I Recovery Act funds for professional development or to fund high school programs; officials in nearly half of the districts we visited said they planned to use funds for these purposes. Approximately one-third of these local officials indicated that spending on professional development would allow them to build their long-term capacity and avoid the “funding cliff.” Nearly one-half of the districts we visited plan to use funds to serve high school students, and nearly 40 percent plan to use funds to serve preschool students—purposes that the Department of Education gave as examples of uses that are allowable under Title I and consistent with the goals of the Recovery Act. About one quarter of the districts planned to fund schools that did not previously receive Title I funding, purchase technology or software licenses, or purchase instructional materials. About 20 percent planned to make the school day or year longer, fund programs to increase parent involvement, or create or save jobs. A common theme in our discussions with state education officials was the desire to secure flexibility in using Title I Recovery Act funds. 
For example, of the 16 states and the District in our review, officials from 14 states expressed interest in at least one waiver. Specifically, state officials in 8 states planned to apply for at least one waiver: All of these officials planned to apply for the carryover waiver, and 3 also planned to apply for a maintenance-of-effort waiver. In addition, officials in 6 other states we visited had not yet decided whether to apply for a waiver, but all mentioned considering the carryover waiver and 3 mentioned considering the maintenance-of-effort waiver. Officials in the remaining 3 states did not plan to request a waiver. The most common waivers mentioned were carryover waivers (14 states), maintenance-of-effort waivers (6 states), and waivers for required spending for supplemental educational services or school choice transportation (3 states). Local education officials were similarly interested in securing flexibility in the uses of Title I Recovery Act funds. Of the local officials we interviewed, more than 40 percent said they planned to request at least one waiver and approximately one quarter said they did not plan to request a waiver. The remaining officials were undecided at the time of our interviews. The waivers most frequently mentioned by local officials were carryover waivers, waivers of requirements for supplemental educational services (SES) funding, and maintenance-of-effort waivers. Of those officials planning to request a waiver, nearly 40 percent said they would request a maintenance-of-effort waiver, over half said they would request an SES waiver, and nearly 75 percent said they would request a carryover waiver. Of those officials planning to request a waiver for SES, officials in two school districts mentioned they did not typically need all of the funds they were required to set aside for supplemental services and wanted the flexibility to spend the funds more quickly and on purposes that would most benefit disadvantaged students. 
On April 1, 2009, Education released policy guidance that included principles, goals, and possible uses of funds. This guidance also included information on allocations from Education to state educational agencies and from states and the District of Columbia to their LEAs, addressed fiscal issues such as the carryover limitation, and explained the process for obtaining a waiver. Education officials told us they hosted three conference calls with state Title I directors after releasing the guidance to answer questions from state officials. Education officials also told us they have made a number of presentations around the country on using Recovery Act Title I funds and have planned a meeting for state Title I directors for this July, by which time they hope to have released additional written guidance on waivers and allowable uses of Title I Recovery Act funds. In addition to guidance from Education, LEAs report receiving various forms of guidance from their state agencies on Title I Recovery Act funding. Figure 6 shows the number of states in which local education officials in at least one district we visited told us they had received particular forms of guidance. In particular, local education officials reported participating in webinars hosted by the state educational agency (officials in eight states), participating in meetings (officials in six states), receiving state-specific written guidance (officials in seven states), obtaining information from the state educational agency Web site (officials in six states), calling or e-mailing state officials (officials in four states), participating in training sessions provided by state officials (officials in two states), and participating in conference calls with state officials (officials in four states). In at least one LEA in nine of the states we visited, local officials did not mention receiving any guidance from the state. 
Officials in one state and one district said that local officials are fearful of missteps with the funds. For example, officials in one LEA said they wanted more specific guidance on how the Title I Recovery Act funds can be spent in order to be sure they are doing things correctly. Given these examples and the fact that nearly half of officials in districts we visited reported wanting more guidance on allowable uses of Title I funds that meet the priorities of the Recovery Act, the lack of guidance may be slowing LEAs’ planning processes. When asked about guidance they would particularly like to receive, state education officials most frequently said they wanted more guidance on waivers (nine states), reporting requirements (five states), and how to define jobs created or saved (three states). Local officials most frequently said they wanted guidance on reporting requirements and on allowable uses of Title I funds that would be in accordance with the priorities of the Recovery Act. They also reported wanting more guidance on waivers, flexibility in spending, and the “supplement-supplant” provision. The Recovery Act provided supplemental funding for programs authorized by Parts B and C of the Individuals with Disabilities Education Act (IDEA), the major federal statute that supports the provision of early intervention and special education and related services for infants, toddlers, children, and youth with disabilities. Part B funds programs that ensure preschool and school-aged children with disabilities have access to a free and appropriate public education, and Part C funds programs that provide early intervention and related services for infants and toddlers with disabilities—or at risk of developing a disability—and their families. 
IDEA formula grants and Recovery Act funds are allocated to states through three grants—Part B grants to states (for school-age children), Part B preschool grants (section 619), and Part C grants for infants and families. The U.S. Department of Education made the first half of states’ Recovery Act IDEA allocations to state agencies on April 1, 2009. As of June 26, 2009, of the 16 states and the District of Columbia that we visited, only 7 states had drawn down IDEA Recovery Act funds. In total, just over 8 percent of allocated funds had been drawn down in these states. (See table 9.) Most states that we visited are requiring LEAs to submit an application to receive IDEA Part B Recovery Act funding. State and local officials report receiving general guidance from the U.S. Department of Education (Education), but additional clarifications are needed in key areas. In April 2009, Education released policy guidance describing principles and goals of IDEA Recovery Act funds, and written guidance with information on the timing of allocations of funds to states, indirect costs, waivers, and authorized uses of IDEA Recovery Act funds. According to Education officials, Education has also provided assistance and guidance to states and school districts in a variety of other ways, including conference calls with state agencies administering Parts B and C, presentations at conferences, and webinars on specific issues such as IDEA maintenance-of-effort requirements. While Education officials provided guidance with examples of allowable uses of Recovery Act IDEA funds on April 24, states and LEAs indicate the need for further guidance in this area. For example, several states and LEAs report needing clearer guidance on allowable uses, including construction costs, and Education officials said they have heard questions about allowable uses for buses for students with disabilities. 
Education officials said that they are working on a more detailed document on innovative strategies for increasing student academic achievement and avoiding funding commitments that will be unsustainable after the Recovery Act funding expires. Several states reported offering various forms of guidance to LEAs, including holding webinars, communicating directly, and providing written guidance on potential uses of Recovery Act IDEA funding. At the time of our site visits, neither Education nor the U.S. Office of Management and Budget (OMB) had issued final guidance on Recovery Act reporting. Many state officials told us that it will be difficult to plan how they will report the impact of Recovery Act funding until they receive further guidance from OMB or Education. Education is planning to supplement the guidance OMB provides to help state agencies report the proper data. In particular, Education officials noted that draft OMB guidance on recipient reporting would require some additional Education guidance to clarify issues for recipients of formula grants, such as the IDEA grants. Various state and local officials had concerns about whether their LEAs would be able to exercise the flexibility allowed under IDEA Part B’s maintenance-of-effort requirements. Generally, in any fiscal year in which an LEA’s IDEA Part B, section 611 (grants to states) allocation exceeds the amount the LEA received in the previous year, the LEA may reduce its local spending on students with disabilities by up to 50 percent of the amount of the increase, as long as the LEA (1) uses those freed-up funds for activities that could be supported under the Elementary and Secondary Education Act of 1965, (2) meets the requirements of the IDEA, including the performance targets in its state’s performance plan, and (3) can provide a free appropriate public education. 
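The 50 percent calculation at the heart of this flexibility can be sketched as follows. This is a minimal illustration with hypothetical dollar amounts; the function name and figures are ours and are not drawn from the report or the statute, and the sketch ignores the three eligibility conditions, which must be met separately.

```python
def allowable_moe_reduction(prior_year_allocation, current_year_allocation):
    """Return the maximum amount by which an LEA may reduce its local
    spending on students with disabilities: up to 50 percent of any
    year-over-year increase in its IDEA Part B (section 611) allocation.
    If the allocation did not increase, no reduction is allowed."""
    increase = current_year_allocation - prior_year_allocation
    if increase <= 0:
        return 0.0
    return 0.5 * increase

# Hypothetical example: an allocation rising from $1.0 million to
# $1.4 million yields up to $200,000 of maintenance-of-effort flexibility.
print(allowable_moe_reduction(1_000_000, 1_400_000))  # 200000.0
```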
Pennsylvania officials said that this rule has been a source of confusion for LEAs in their state, and state officials said they have discussed it in great detail in webinars, conferences, and other communication with LEAs. Education officials said that in developing Education’s guidelines, in addition to reviewing and interpreting the statutes, they have met with state and local educational agencies and interest groups that have raised concerns. Education officials told us that some interest groups have asked them to reconsider the requirement that LEAs meet requirements of the IDEA, including performance targets in state performance plans, in order to qualify for the maintenance-of-effort (MOE) flexibility, but agency officials believe this requirement is statutorily mandated. Another concern involves LEAs that have been determined to have significant disproportionality based on race and ethnicity, because these districts are required to set aside 15 percent of their total IDEA Part B funds, including Recovery Act IDEA Part B funds, for comprehensive early intervention services. This limits their ability to exercise MOE flexibility. According to Education officials, interest groups have asked Education to reconsider its interpretation of this IDEA provision. States and LEAs plan to spend IDEA Part B Recovery Act funding on a variety of services and initiatives. Most LEAs planned to offer professional development activities, and several noted that such activities could avoid unsustainable funding commitments after Recovery Act funds expire. LEA officials in the District of Columbia and Philadelphia said that their goal with their IDEA Part B Recovery Act expenditures is to expand their districts’ ability to serve more students with disabilities, which would mean that the LEAs would receive IDEA funds for serving students with disabilities who are currently served at schools outside the LEAs. 
Other examples of areas in which LEAs plan to spend Recovery Act funds include: acquiring and improving the use of assistive technologies; improving transitions for students with disabilities, from preschool to K-12 and from school to jobs; and increasing capacity to collect and use data. States may use IDEA Part C Recovery Act funds for any allowable purpose under IDEA Part C, including the direct provision of early intervention services to infants and toddlers with disabilities and their families, and implementing a statewide, comprehensive, coordinated, multidisciplinary, interagency system to provide early intervention services. At the time of our interview, Illinois Department of Human Services officials said that the department had already received and expended its initial allocation of IDEA Part C Recovery Act funds and that the funds had been used to avert a 7 to 8 percent cut in its caseload. Pennsylvania officials plan to spend most of the state’s IDEA Part C Recovery Act funds on basic services, but they also plan to spend $1 million for an early childhood integrated data system. In Arizona, officials told us that these services are provided by entities that contract with the Arizona Department of Economic Security (DES). DES officials maintain that these IDEA Part C Recovery Act funds will be used to address funding shortfalls created by an increasing caseload without a commensurate increase in base federal or state funding for Part C services. In Colorado, state officials said that the IDEA Part C Recovery Act funds would generally go to contracts with community centered boards and some universities that provide professional and paraprofessional development as well as technology and services. The Recovery Act provides an additional $1.2 billion in funds nationwide for the Workforce Investment Act (WIA) Youth program to facilitate the employment and training of youth. 
The WIA Youth program is designed to provide low-income in-school and out-of-school youth age 14 to 21, who have additional barriers to success, with services that lead to educational achievement and successful employment, among other goals. The Recovery Act extended eligibility through age 24 for youth receiving services funded by the act. In addition, the Recovery Act provided that, of the WIA Youth performance measures, only the work readiness measure is required to assess the effectiveness of summer-only employment for youth served with Recovery Act funds. Within the parameters set forth in federal agency guidance, local areas may determine the methodology for measuring work readiness gains. The WIA Youth program is administered by the Department of Labor (Labor), and funds are distributed to states based on a statutory formula; states, in turn, distribute at least 85 percent of the funds to local areas, reserving up to 15 percent for statewide activities. The local areas, through their local workforce investment boards, have flexibility to decide how they will use these funds to provide required services. In the conference report accompanying the bill that became the Recovery Act, the conferees stated they were particularly interested in states using these funds to create summer employment opportunities for youth. While the WIA Youth program requires a summer employment component to be included in its year-round program, Labor has issued guidance indicating that local areas have the program design flexibility to implement stand-alone summer youth employment activities with Recovery Act funds. Local areas may design summer employment opportunities to include any set of allowable WIA Youth activities—such as tutoring and study skills training, occupational skills training, and supportive services—as long as they also include a work experience component. 
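The statutory split described above (at least 85 percent passed to local areas, up to 15 percent reserved for statewide activities) amounts to simple arithmetic. A minimal sketch, using a hypothetical allotment figure of our own:

```python
def split_allotment(state_allotment, statewide_share=0.15):
    """Split a state's WIA Youth allotment between local workforce areas
    and the statewide reserve. The statewide reserve is capped at 15
    percent; everything else must go to local areas."""
    if not 0 <= statewide_share <= 0.15:
        raise ValueError("statewide reserve may not exceed 15 percent")
    statewide = state_allotment * statewide_share
    return state_allotment - statewide, statewide

# Hypothetical $10 million allotment with the full 15 percent reserve:
local, statewide = split_allotment(10_000_000)
print(local, statewide)  # 8500000.0 1500000.0

# A state passing its entire allotment to local areas (as the report
# notes Florida and New Jersey did) reserves nothing:
print(split_allotment(10_000_000, statewide_share=0.0))
```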
Labor has also encouraged states and local areas to develop work experiences that introduce youth to opportunities in “green” educational and career pathways. Work experience may be provided at public sector, private sector, or nonprofit work sites. The work sites must meet safety guidelines, as well as federal and state wage laws. For this report, we focused on the WIA Youth program in 13 of our 16 states (all except Arizona, Colorado, and Iowa) and the District of Columbia (District). The 13 states and the District received nearly two-thirds of the Recovery Act WIA Youth funds allotted by Labor. In turn, the 13 states have allocated at least 85 percent of these funds to their local workforce areas, as shown in table 10. As allowed, the 13 states generally reserved 15 percent of the Recovery Act WIA Youth funds for statewide uses, although Florida and New Jersey instead allocated their entire allotments to local workforce areas. As of June 25, 2009, about 6 percent of Recovery Act WIA Youth funds had been drawn down nationwide, according to Department of Labor data. Drawdowns represent cash transactions: funds drawn down by states and localities to pay their bills. Among the 13 states and the District of Columbia, the percentage drawn down generally ranged from zero for the District to 10 percent for Ohio. However, one state—Mississippi—had drawn down 39 percent of its funds. Drawdowns do not provide a complete picture of the extent to which states and localities have used Recovery Act WIA Youth funds to provide services, since payment for services can occur after funds are obligated and services are provided. The Department of Labor receives quarterly reports from states on their WIA Youth expenditures for services that have been provided, but there is a time lag before these data become available. 
For example, states’ reports for the quarter ending June 30 are due to Labor 45 days after the end of the quarter, or August 15, and Labor then reviews the data before releasing them. Consistent with congressional intent that a substantial portion of these funds be used for summer youth employment activities, our states generally plan to use these funds to increase the number of youth served through summer activities. For example, Michigan anticipates serving about 25,000 youth in the summer of 2009, compared with about 4,000 youth served with WIA funds in the summer of 2008. Illinois plans to spend about $50 million of its $62 million Recovery Act Youth allotment on youth employment activities in the summer of 2009 and has set a target of serving about 15,000 youth through these activities. Texas set a target of spending 60 percent of Recovery Act WIA Youth funds allocated to local areas on summer employment activities and serving about 14,400 youth in the summer of 2009 (compared with 918 youth actually served in the summer of 2008 with WIA funds). In contrast to these states, the District plans to use its Recovery Act WIA Youth funds on its year-round WIA Youth program. District officials told us that, before receiving the Recovery Act funds, they had already allocated $45 million for the district’s locally funded 2009 summer youth employment program, which they said is the second-largest summer youth employment program in the nation, serving about 23,000 youth. Several states, including Massachusetts, Ohio, Pennsylvania, and Texas, have required their local workforce areas to spend from 50 percent to 70 percent of their Recovery Act WIA Youth funds by September or October 2009. For example, Ohio requires local areas to spend at least 70 percent of these funds by October 31, 2009, and 90 percent of funds by January 31, 2010, or risk having funds recaptured by the state. 
Massachusetts requires local areas to spend at least 60 percent of their funds by September 30, 2009. States and local areas we visited varied in the approaches they planned to use in providing summer youth employment activities. While public sector work sites were frequently mentioned, so, too, were private sector and nonprofit organizations. Across the spectrum of work sites, work activities ranged widely. Local areas varied in the role that academic and occupational skills training plays in the summer activities and in the extent to which contracted providers will administer them. Type of work experience. Planned work sites for the Recovery Act-funded summer youth activities varied widely across the local areas we visited and included public sector, private sector, and nonprofit organizations. Most local areas expected at least some public sector jobs, and in some areas the majority of the work sites are expected to be in the public sector. These sites often included local government offices; public parks, recreation centers, and camps; and public schools and community colleges, public libraries, and animal shelters. Local areas in several states were planning to place youth in private sector work sites as well, including supermarkets, pharmacies, health care institutions, and private learning centers. Officials in two local areas we visited expected the majority of their work sites to be in the private sector. In addition, at least one local area in nearly all of the states we visited expects to make use of nonprofit work sites, including community action agencies, boys and girls clubs, and the YMCA. Across the different types of work sites, the specific work activities planned for the youth ranged from clerical work, groundskeeping, animal care, and kitchen support to customer service and serving as camp counselors or radiology technicians’ assistants. 
Labor encouraged states to develop work experiences in “green” jobs, and officials reported that green jobs were available in nearly all local areas we visited. The jobs they cited included landscape maintenance, recycling, green construction, and an automotive fuel technology project at a university, as well as jobs in energy efficiency and weatherization. However, officials told us they were not always clear about what constituted a green job. For example, officials in Pennsylvania’s South Central local area questioned whether a youth working in a plastics factory that makes parts for a windmill is working in a “green” job. Labor has provided some discussion of green jobs in its guidance letters to states on Recovery Act funds. For example, Labor’s March 18, 2009, guidance letter highlights areas within the energy efficiency and renewable energy industries that will receive large Recovery Act investments, such as energy-efficiency home retrofitting and biofuel development, and also provides examples of occupations that could be affected by “green” technologies, including power plant operators, electrical engineers, roofers, and construction managers. Labor officials told us that their reporting requirements for Recovery Act funds do not include any tracking of green jobs. Role of academic and occupational skills training. While not all local areas had completed their plans for the summer activities at the time of our review, in about half of the states at least some local areas were planning to provide academic or occupational skills training along with work experience. For example, Buffalo, New York, plans several projects that will combine green jobs with academic training, as well as weatherization and construction skills. In one such program, youth will work to earn their General Equivalency Diplomas (GED) while also learning “green” construction skills. 
Participants will earn $7.25 an hour for their work experience and $3 an hour while working on their GEDs. Another of Buffalo’s projects will help youth who are at risk of dropping out of school by providing them with an opportunity to recover the high school credits they need for graduation while also taking part in work experience. Even when local areas are focusing most of their efforts on work experience, many are also planning to provide work readiness training as part of an initial orientation to the summer activities, but the nature of the work readiness training varied widely. In Mercer County, New Jersey, for example, youth will be given a short workshop on interviewing skills prior to a job fair. In addition to employment, youth age 14 to 17 will receive 21 hours of job readiness training, and those age 18 to 24 will receive 28 hours of job readiness training. In another New Jersey example, youth in Camden County will receive 8 hours of life skills training using a standard curriculum, followed by financial literacy training based on curricula developed for youth by the Federal Deposit Insurance Corporation. Other local areas we visited also plan to provide financial literacy training as part of their orientation. Ohio’s Franklin and Montgomery Counties, for example, have arranged for a local bank to help participating youth set up bank accounts, into which their paychecks will be automatically deposited. Youth will receive debit cards to access the account and will receive basic financial counseling. Administration of summer employment activities. Many local areas are using contracted providers to operate key aspects of the WIA summer youth employment activities, such as recruiting youth and work sites and administering payroll. In some cases, officials report they have been able to extend existing contracts with their WIA year-round program service providers to cover the stand-alone summer employment activities. 
In other cases, they have conducted new competitions, in part because they needed additional contractors to cover the expansion of services. All 13 of our states applied for and received a waiver from Labor relating to procurement requirements for youth summer employment providers. The waivers allow local areas to expand existing competitively procured contracts or conduct an expedited, limited competition to select service providers. Labor approved 10 of these waivers in April or May 2009 and the other 3 in June 2009. While using contracted providers to operate the program was more common, in a few states at least some local areas were operating the entire program in-house. In New Jersey, for example, the local areas we visited are relying mostly on internal staff to carry out program responsibilities; however, one area plans to use contracted providers for some specific roles. In Ohio, two of the four local areas we visited had decided to operate the program in-house. Officials in one of the local areas in Ohio told us they made the decision for two reasons—they wanted to be able to exercise greater control over the program and they were seeking to avert staff layoffs due to funding cuts in other programs. State and local officials reported challenges in implementing their stand-alone summer youth employment activities that generally reflected three key themes—tight time frames for implementing the program, lack of staffing capacity to meet the expanding needs, and difficulty in determining and documenting youth eligibility. Tight time frames. Many state and local officials commented that the biggest challenge in implementing the program was the limited time frame they had for making the program operational. Once the Recovery Act was passed, states and local areas had only about 4 months to get their new summer youth employment activities up and running—a process that officials told us would normally begin many months earlier. 
In addition, local areas often lacked recent experience in operating such a stand-alone program. In implementing the year-round service requirements of WIA (in which summer employment is a component rather than a stand-alone program), many states and local areas had greatly reduced their summer youth employment programs and no longer offered a stand-alone summer program—or they had found other funding sources, such as state, local, or foundation funds, to cover it. WIA’s predecessor, the Job Training Partnership Act, had required local areas to provide a stand-alone summer youth employment program; WIA does not. The local areas we reviewed represented a mix of experiences. Those without recent experience had to build the program from the ground up. These areas had to quickly confront many basic decisions—how to structure the program, how to recruit work sites and participants, and whether to use contracted providers (and for what functions) or administer the program in-house. Other areas, however, had well-developed summer youth employment programs. These areas already had some of these basic structures in place but often still found it challenging to quickly expand their existing programs. Staffing capacity. Across the local areas we visited, many officials told us staff were challenged to address the needs of the growing number of youth they needed to serve. In some cases, states had been downsizing or did not have the flexibility to hire additional staff due to hiring freezes and budget cuts. For example, Essex County, New Jersey, operating with two full-time staff, said the inability to hire additional staff posed challenges for recruiting youth and monitoring the program. In the local areas we visited in Ohio, the expected increases in enrollments were leaving local areas’ staff stretched thin. To address this challenge, some counties were reassigning employees from other programs to work on WIA summer youth employment activities. 
One county had arranged for additional staff to monitor the summer program by using a temporary placement agency. Similarly, Chicago officials said that, despite having had experience in implementing a stand-alone summer program, they found implementing the WIA summer youth employment activities challenging because, in order to adequately ramp up their programs and prepare for implementation, they had to borrow staff from other sections who do not typically work on the WIA Youth program. Determining and documenting youth eligibility. Several states and local areas commented that it was challenging to determine youth applicants’ eligibility and to obtain supporting documentation, especially for the increased number of youth they are planning to serve. New Jersey officials told us that the youth targeted for the program generally have difficulty providing the kinds of documents required to prove WIA Youth program eligibility. For example, to determine that youth meet the eligibility requirements, local officials in New Jersey require documentation that includes public assistance identification cards to support total household income, birth certificates for proof of citizenship, Social Security numbers, and documentation of Selective Service registration for males age 18 and over. Officials in a few states also expressed concern that the income eligibility standards were more restrictive than for other programs, particularly those operated using state funds, and that the standards may be excluding a significant number of youth who need the services. For example, officials in Philadelphia reported that some of their youth applicants whose parents had recently lost their jobs were not eligible for the program because eligibility was based on income earned during the period just prior to dislocation. 
With regard to program oversight, all 13 of our states and the District reported they had the capacity to track and report on Recovery Act funded WIA Youth expenditures separately from those not funded by the Recovery Act. The states also reported plans to use a variety of procedures to monitor local areas’ summer youth employment activities, such as risk assessments, on-site monitoring, and periodic meetings with local program directors. For example, Ohio state officials sent a survey to the local workforce areas in May 2009 to help identify local areas with greater risk due to factors such as critical timing issues, larger program scope, or substantial changes from past programs, and the state planned to initially focus its attention on these local areas. Massachusetts state officials said they planned to conduct on-site visits to each local workforce area at least twice during the summer and that the state’s monitoring efforts would include file reviews of information pertaining to topics such as eligibility, standard operating procedures, contracts, statements of work, and subrecipient monitoring. Michigan state officials said they planned to hold monthly meetings with all local program directors to encourage the reporting of consistent information and that their on-site monitoring would focus especially on private sector components of the program, such as private sector worksites. Department of Labor officials said that they have efforts underway to understand the experiences of those operating summer youth activities through regular interaction with state and local service providers, monitoring, identifying any issues, and providing assistance to address the issues. For example, they said that Labor’s regional offices have begun visiting local areas to monitor and gather information and will be visiting about two areas in each of their states this summer. 
Beginning the week of June 29, each of the six regional offices will begin providing weekly narrative reports on the Recovery Act summer youth employment activities from at least two local areas each week. To assess the effects of the summer youth employment activities, states will be required to report a work readiness attainment rate—defined as the percentage of participants in summer employment who attain a work readiness skill goal. Under Department of Labor guidelines, states and local areas are permitted to determine the specific assessment tools and the methodology they use to determine improvements in work readiness, but it must be measured at the beginning and completion of the summer experience. Not all areas had finalized their plans for assessing work readiness at the time of our visits but were considering various pre- and post-test options. For example, officials in Mississippi plan to do a written pre- and post-test but will also assess youth at the midpoint through an interview with an employment adviser. All three areas we visited in Florida plan to supplement the pre- and post-tests with feedback from businesses and work site supervisors. To monitor and report on progress made in implementing the program, Labor has instituted new reporting requirements on youth participating in Recovery Act-funded activities. Under WIA, states have been required to report quarterly to Labor on aggregate counts of youth participants, activities, and outcomes. However, since these reports are not submitted in time for Labor to comply with Recovery Act requirements to make information readily available to the public, states will be required, beginning on July 15, to submit a supplemental monthly report on youth. 
In this supplemental report, states will submit aggregate counts of all Recovery Act youth participants, including the characteristics of participants, the number of participants in summer employment, services received, attainment of a work readiness skill, and completion of summer youth employment. In addition to Labor's reporting requirements, a few states were developing plans for additional assessments of the program. Georgia officials, for example, reported that they are considering tracking whether youth return to school or obtain full-time employment after the summer program is over. Similarly, officials in Illinois are currently designing a tracking system that will allow them to assess the long-term impacts of the program, including job placement and job retention of participants. The Recovery Act requires the U.S. Department of Housing and Urban Development (HUD) to allocate $3 billion through the Public Housing Capital Fund to public housing agencies using the same formula used for amounts made available in fiscal year 2008. HUD allocated Capital Fund formula dollars to public housing agencies shortly after passage of the Recovery Act and, after entering into agreements with over 3,100 public housing agencies, obligated these funds to public housing agencies on March 18, 2009. Although HUD has allocated and obligated almost $3 billion in formula capital grants to 3,123 public housing agencies, and 1,483 agencies have begun obligating these funds, relatively little funding has been drawn down by housing agencies. Specifically, as of June 20, 2009, $466 million, or 16 percent, of the funds allocated by HUD to the housing agencies had actually been obligated by the housing agencies, and $32 million, or 1.1 percent, had been drawn down (see figure 7). For this report, we visited 47 public housing agencies in the 16 states and the District of Columbia, which had received formula grant awards totaling $531 million.
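The obligation and drawdown rates cited above follow from straightforward division. As a quick arithmetic check, the sketch below uses the report's rounded totals, with the allocation taken as the cited $3 billion:

```python
# Quick arithmetic check of the national Capital Fund figures cited
# above (as of June 20, 2009). Dollar amounts are as reported; the
# allocation total is the report's rounded "almost $3 billion".

allocated = 3_000_000_000   # total allocated by HUD (rounded)
obligated = 466_000_000     # obligated by housing agencies
drawn_down = 32_000_000     # drawn down by housing agencies

obligation_rate = 100 * obligated / allocated
drawdown_rate = 100 * drawn_down / allocated

print(f"Obligated:  {obligation_rate:.0f}% of allocation")   # ~16%
print(f"Drawn down: {drawdown_rate:.1f}% of allocation")     # ~1.1%
```

Both computed rates match the figures reported above: about 16 percent of allocated funds obligated and 1.1 percent drawn down.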
These housing agencies have identified projects and are just beginning to obligate and draw down Recovery Act funds for project expenses. As of June 20, 2009, these public housing agencies had obligated almost $66 million, or about 12 percent of their $531 million allocation, and had drawn down $2.6 million, or 0.5 percent of the $531 million. Thirty of the 47 agencies had obligated funds (including 3 small agencies and 1 medium agency that had obligated 100 percent of their funds), indicating that contracts had been awarded and signed and that work was beginning; 20 of these 30 agencies had also drawn down funds (see figure 8). Several of the 17 public housing agencies we spoke to that had neither obligated nor drawn down any funds stated that they had not done so because they were awaiting approval from HUD on their plans for using Recovery Act funds or were still soliciting bids and finalizing contracts. Others were developing project plans or completing environmental reviews. In addition, some public housing agency officials stated that their status as a "troubled performer"—based on HUD's Public Housing Assessment System (PHAS)—meant they faced more oversight and monitoring from HUD, which was preventing them from obligating the Recovery Act funds as quickly as they would like. However, many of these 17 agencies expected to begin awarding contracts, obligating funds, and working on projects by July 2009. This timeline is consistent with HUD headquarters officials' expectations that activity involving obligating Recovery Act funds will increase substantially during the next quarter (July to September 2009). For the 47 housing agencies that we visited, officials indicated that they were planning to use their Recovery Act funds for various types of activities, ranging from parking lot repaving to complete rehabilitation of multi-unit structures.
Among the most common Recovery Act project types mentioned by public housing agency officials were roof and window replacements; heating, ventilation, and air conditioning (HVAC) system upgrades or replacements; and interior rehabilitation work, such as kitchen or bathroom renovations and flooring or carpet replacements. For example, Athens Housing Authority in Georgia plans to replace water heaters and kitchen cabinets at 23 scattered sites (see figure 9). According to the public housing agencies we visited, more than 15,000 units will be rehabilitated, including more than 1,500 vacant units. Relatively small-scale projects were already underway or had been completed, such as the 10 bathroom remodels and 105 window replacements that Ferris Housing Authority in Texas had finished. In contrast, some major projects requiring planning and design work had yet to begin. In fact, some public housing agencies avoided large, complex projects because they believed the projects would take too long. However, some of the large public housing agencies are funding major activities, such as demolishing a public housing structure, constructing new structures, or completely renovating hundreds of units across many properties. For example, Philadelphia Housing Authority plans to spend over $29 million to rehabilitate 300 vacant units at various sites—one of which is shown in figure 10—and another $14.6 million to completely reconfigure a 71-unit mid-rise building into a 53-unit building with new community spaces, elevators, and energy-efficient electrical and mechanical systems. Cuyahoga Metropolitan Housing Authority in Ohio is using $12 million of Recovery Act funds to pay for part of a $65 million redevelopment initiative that involves demolishing existing structures and building new structures. 
HUD has informed housing agencies that they may use the Recovery Act funds for demolition and construction of new units, provided that they can meet the Act’s obligation and expenditure deadlines. Prioritization: The Recovery Act requires public housing agencies to give priority to projects involving the rehabilitation of vacant units, projects already underway or on the agency’s latest 5-year plan, and projects that could be awarded based on bids within 120 days of the Recovery Act funds becoming available. Public housing agency officials we spoke to generally prioritized projects that were on their 5-year plan, that could be initiated quickly, and that were, in their judgment, the most critical projects to be completed. Only a few of the largest public housing agencies we visited stated that they had relatively large numbers of vacant units they were going to rehabilitate. More than 1,200 of the over 1,500 vacant units that agencies we visited had slated for rehabilitation using Recovery Act funds were identified by just five public housing agencies: Chicago Housing Authority, Philadelphia Housing Authority, San Francisco Housing Authority, Cuyahoga Metropolitan Housing Authority in Ohio, and Newark Housing Authority. However, for some agencies facing relatively few vacancies, rehabilitating vacant units was not the highest priority in selecting projects. Instead, they focused on meeting other Recovery Act priorities, such as selecting projects already underway or selecting projects for which contracts could be awarded within 120 days. An additional priority for public housing agencies in selecting projects was finding ways to improve energy efficiency in their buildings. Some are seeking to accomplish this by making exterior improvements, such as replacing roofs, siding, or windows, while others will be replacing appliances or HVAC equipment with more energy-efficient models. 
For example, Rahway Housing Authority in New Jersey is in the process of replacing siding on some of its buildings to increase energy efficiency (see figure 11). Another example of an exterior improvement comes from the District of Columbia Housing Authority, whose officials told us they used Recovery Act funds to install solar panels on top of one of the residential buildings as part of an effort to "green retrofit" all the housing units in the complex. These panels will help heat water for the building. Barriers and Challenges: Public housing agency officials noted a few barriers and challenges they had confronted or anticipated related to Recovery Act funds and projects, but in most cases no single concern was widely shared among the officials with whom we spoke. In a few cases, public housing agencies mentioned that they had experienced delays in accessing their funds in HUD's Electronic Line of Credit Control System (ELOCCS) because of problems with or confusion about the requirement to obtain a Data Universal Numbering System (DUNS) number and to register in the Central Contractor Registration (CCR) system. For example, two housing agencies had trouble registering because their actual location (city or county) differed from the information associated with their DUNS number in the system. However, once agencies were properly registered, they did not anticipate any problems using the system. According to HUD officials, registering in the CCR has been a substantial problem nationwide, despite efforts by HUD to communicate these requirements to public housing agencies. HUD officials estimated that about 380 public housing agencies (out of approximately 3,100) had not properly registered in CCR and were therefore unable to obligate or draw down Recovery Act funds as of June 15, 2009. HUD officials are working with these agencies to resolve the problems as quickly as possible.
Another challenge raised by public housing agency officials and HUD officials was the “Buy American” provision of the Recovery Act. Several officials noted that depending on how this provision was interpreted, it could pose a barrier to getting contracts in place and completing projects. For example, HUD officials noted that agencies may have difficulty in finding an adequate selection of goods and materials for improving energy efficiency that meet the “Buy American” requirement and are competitively priced. For other public housing agencies, however, this provision was not a concern. For example, two agencies stated they had revised their procurement policy to include “Buy American” requirements, while another agency required its contractors to certify the materials they use are American-made. An additional potential challenge that some officials had identified involved the requirements HUD had placed on agencies in order to use Recovery Act funds for administration. HUD’s guidance states that public housing agencies may use 10 percent of their grant funds for administration but that agencies can only draw down 10 percent of each invoice submitted for administration. In addition, one public housing agency official stated that he expected the documentation requirements for drawing down these funds would require so much extra work that he believed it would be better to use non-Recovery Act funds to cover all administration expenses and devote his agency’s entire Recovery Act award to the identified projects. HUD officials stated that these requirements were intended to provide public housing agencies with an incentive to use Recovery Act funds immediately on projects that would create jobs. Troubled housing agencies may also experience delays in obligating and expending Recovery Act funds. 
Some officials from public housing agencies that HUD has identified as troubled performers in PHAS stated that additional requirements placed on them by HUD had hindered these agencies' ability to obligate and expend funds as quickly as they believed necessary. At one public housing agency, officials stated that they had been designated as troubled because of the physical condition of their housing units and that they needed the Recovery Act funding to address these deficiencies. HUD has identified 172 housing agencies as troubled under PHAS that will be subject to increased monitoring for the Recovery Act. These 172 troubled housing agencies have obligated and expended Recovery Act funds at a slower rate than the overall group of housing agencies receiving Recovery Act funding. Specifically, troubled public housing agencies were allocated nearly $186 million of Recovery Act funding, and as of June 20, 2009, 61 (35.5 percent) of these housing agencies had obligated $15.1 million (8 percent) and 22 (13 percent) of these housing agencies had drawn down almost $926,000 (0.5 percent). Overall, housing agencies have obligated and expended funds at about double this rate. One reason for these delays is the additional monitoring required by HUD for housing agencies that are designated as troubled performers under PHAS. HUD has informed these troubled public housing agencies that for Recovery Act purposes they will receive increased monitoring, and it has placed them in a high-, medium-, or low-risk category. Of these 172 troubled housing agencies, 106 (61.6 percent) were considered low-risk troubled, 53 (30.8 percent) were considered medium-risk troubled, and the remaining 13 (7.6 percent) were considered high-risk troubled. HUD has established and is implementing a strategy for monitoring these troubled housing agencies that have received Recovery Act funds.
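The "about double" comparison above can be checked from the figures cited; the sketch below uses the report's amounts as of June 20, 2009, with the overall allocation rounded to $3 billion:

```python
# Comparison of obligation and drawdown rates for the 172 troubled
# agencies versus all agencies receiving Recovery Act Capital Fund
# grants, using the report's figures (June 20, 2009; overall
# allocation rounded).

def pct(part, whole):
    """Percentage of `whole` represented by `part`."""
    return 100 * part / whole

# All ~3,100 housing agencies
overall = {"allocated": 3_000_000_000, "obligated": 466_000_000, "drawn": 32_000_000}
# The 172 agencies designated troubled under PHAS
troubled = {"allocated": 186_000_000, "obligated": 15_100_000, "drawn": 926_000}

for name, grp in [("Overall", overall), ("Troubled", troubled)]:
    print(f"{name}: {pct(grp['obligated'], grp['allocated']):.1f}% obligated, "
          f"{pct(grp['drawn'], grp['allocated']):.1f}% drawn down")
```

The obligation rates work out to roughly 15.5 percent overall versus 8.1 percent for troubled agencies, and the drawdown rates to roughly 1.1 percent versus 0.5 percent, consistent with the "about double" characterization above.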
HUD told us that it has disseminated this strategy to its field offices and that the strategy is currently being applied to the 172 troubled housing agencies. For example, according to HUD, all 172 troubled public housing agencies—regardless of risk category—have been placed on a "zero threshold" status and therefore cannot draw down Recovery Act funds without HUD Field Office approval. HUD told us that the ability to place housing agencies on "zero threshold" has always been available and has been used for housing agencies that have had problems obligating and expending their Capital Fund grants appropriately. However, HUD has stated that housing agencies that are troubled will be subject to additional monitoring and oversight as deemed necessary to ensure proper uses of Recovery Act funds. Specifically, HUD Field Offices notified troubled housing agencies that prior to obligation of Recovery Act funding, all award documents (i.e., solicitations, contracts, or board resolutions, where applicable) must be submitted to their respective Field Office for review. Further, housing agencies that HUD considers to be high-risk troubled are to be assigned to a HUD-designated team that will provide additional monitoring, oversight, and technical assistance. HUD further stated that the effect of any increased requirements on obligating Recovery Act funds should be short-lived, since Recovery Act funds must be obligated within one year and much of the funds should be obligated in the next few months. The officials with whom we spoke generally did not anticipate that they would face internal challenges in meeting the accelerated obligation and expenditure requirements under the Recovery Act. Several welcomed the additional funds, citing large backlogs of projects that were ready to begin. In Ohio, Columbus Metropolitan Housing Authority officials stated that they began preparing projects in December 2008 in anticipation of the Recovery Act's passage.
Two of the larger agencies, Tampa Housing Authority and the District of Columbia Housing Authority, stated that they had in place “job order contracting,” which establishes long-term contracts with several contractors for a variety of routine construction projects, and they had found that this strategy aided in their ability to award contracts quickly and begin projects. Similarly, housing agency officials we interviewed generally did not expect to encounter any challenges meeting the Davis-Bacon local prevailing wage requirements because they were used to complying with Davis-Bacon. For public housing agencies, the responsibility for establishing and maintaining internal controls rests with each housing agency and is typically not part of the state’s overall system of internal controls that is discussed in other parts of the report. GAO visited 47 housing agencies in the 16 states plus the District of Columbia to discuss what internal controls were in place to track the appropriate use of Recovery Act funds. The housing agencies stated that they did not anticipate internal control problems as a result of receiving Recovery Act funds because they would use their existing accounting systems to track the use of these funds. They noted that they have experience with tracking funding—including Capital Fund grants awarded prior to the Recovery Act—and would simply add specific funding codes to their system to track the use of the Recovery Act funds. Many housing agencies are subject to the Single Audit requirements that have been discussed in this report. Single Audits provide federal agencies with information on the use of federal funds, internal control deficiencies, and compliance with federal program requirements. In addition, the HUD Inspector General (HUD OIG) conducts audits of individual housing agencies. 
Although we did not systematically review audit reports for the housing agencies we visited, and the agencies did not anticipate problems with monitoring Recovery Act funds, it is important to note that both Single Audits and HUD OIG audits have identified instances of internal control deficiencies and noncompliance with HUD programs—including the Capital Fund grants provided prior to the Recovery Act. In our June 2009 report, we found that housing agency audits do report findings of inappropriate use and mismanagement of public housing funds, including problems with accounting, documentation, and internal controls. That report recommended that HUD better leverage the information in housing agency audits to identify emerging issues and evaluate its overall monitoring and oversight processes. In addition to implementing its strategy for monitoring troubled housing agencies, HUD told us that it is developing a strategy to monitor non-troubled housing agencies' use of Recovery Act funds. Preserving existing jobs, stimulating job creation, and promoting economic recovery are among the Recovery Act's key objectives. Public housing agencies are taking steps to measure the extent to which Recovery Act funds are achieving these objectives, though agencies are waiting for guidance from HUD. As recipients of Recovery Act funds, public housing agencies are expected to track and report on jobs created and jobs retained through projects funded by the agency. Most public housing agencies told us they plan to collect payroll data from contractors, existing project management systems, or Davis-Bacon wage reports to calculate the number of jobs created and retained. Some of the public housing agencies told us they would include job-measurement requirements in bid specifications, so that prospective contractors would be aware that they would have to measure jobs if they won the bid.
Other agencies said they plan to employ agency employees or public housing residents—whose jobs could be easily counted—on projects funded by the Recovery Act. While some public housing agencies viewed calculating jobs created or retained as straightforward, others expressed concerns about how many work hours should define a job created or retained. One agency reported that it had hired a third-party firm to provide tracking and reporting services related to the Recovery Act. The firm will provide analyses of construction-related items and contractor payroll records to satisfy the requirement to report on jobs created and retained with Recovery Act funding. Three public housing agencies reported they have not yet made plans to track the effects of Recovery Act funds. Public housing agencies are also taking steps to report on another Recovery Act objective—promoting energy conservation measures. Public housing agencies are selecting projects that they expect will reduce energy costs, support energy efficiency, and decrease usage of electricity and water. For example, one public housing agency plans to replace older appliances with newer, more energy-efficient models. Another agency plans to replace all light bulbs with energy-efficient bulbs. To measure the impact of these projects, several public housing agencies plan to compare utility bills over time to assess the amount of dollar savings realized. One public housing agency official told us that she plans to read the electric meters in the public housing development to determine the change in energy usage. Public housing agencies also plan to track a number of other performance measures. Many public housing agencies told us they regularly track budget control, timeliness, and quality of work for the projects they fund and that they plan to continue tracking these measures with Recovery Act-funded projects.
In addition, some public housing agencies monitor the number of contracts they have with minority- and women-owned businesses, and they expect to be able to use Recovery Act-funded projects to continue to meet their goals of contracting with such entities. Lastly, public housing agencies anticipate that they may see improvement in other measures—such as tenant satisfaction, occupancy rates, crime rates, and employment among residents—as a result of the projects funded through Recovery Act funds. For example, one public housing agency official hoped that a new community center in one development would lead to less apartment turnover, less maintenance expense, lower crime, more efficient use of utilities, and more cooperation with residents. Public housing agencies reported that they had not received guidance from HUD on how to measure jobs created and retained, and most public housing agency officials told us they would like guidance on how to accomplish this objective. In the absence of centralized guidance, public housing agencies had been following individual strategies to track and report on jobs; OMB's June 2009 guidance has since provided this centralized direction. Quarterly reporting to HUD is another requirement of the Recovery Act. A number of public housing agencies thought that meeting the quarterly reporting requirement would be manageable because they are already reporting to HUD on a quarterly basis for other programs, such as HOPE VI. Some agencies, however, told us they had neither heard of the quarterly reporting requirement nor received guidance about what was to be included. However, since OMB issued new guidance in June 2009, HUD officials said they are finalizing work on designing and developing a Recovery Act Management and Performance System for reporting jobs created and other effects of the Recovery Act. OMB is also working on a system it plans to have available by October 10, 2009.
OMB’s June 2009 guidance clarified the reporting requirements for recipients and sub- recipients. The Edward Byrne Memorial Justice Assistance Grant (JAG) Program within the Department of Justice’s Bureau of Justice Assistance (BJA) provides federal grants to state and local governments for law enforcement and other criminal justice activities, such as corrections and domestic violence programs. The JAG program was established in law in 2006 to, among other things, provide state and local agencies with the flexibility to prioritize and place justice funds where they are most needed to prevent and control crime based on local needs and conditions. JAG funds can be used to support a range of activities in seven broad program areas, including law enforcement; prosecution and courts; crime prevention and education; corrections; drug treatment and enforcement; program planning, evaluation, and technology improvement; and crime victim and witness programs. Within these areas, JAG funds can be used for state and local initiatives, training, personnel, equipment, supplies, contractual support, research, and information systems for criminal justice. The procedure for allocating JAG funds is based on a statutory formula of population and violent crime statistics, in combination with a minimum allocation to ensure that each state and territory receives some funding. Using this formula, 60 percent of a state’s JAG allocation is awarded by BJA directly to the state, which must in turn allocate a formula-based share of those funds—a variable pass-through requirement—to local governments within the state. For Recovery Act JAG funds, the percentage share that states are required to pass through to local governments varies across the 16 states and the District of Columbia (District) in our review, ranging from 36.52 percent (Massachusetts) to 100 percent (District). Further, states may use up to 10 percent of their state award to cover costs associated with administering JAG funds. 
The remaining 40 percent of funds is awarded directly by BJA to eligible units of local government within the state. Although allocations for JAG funding are determined by formula, state and local governments must apply to BJA to receive JAG funding. Table 11 shows BJA’s Recovery Act JAG state allocations and variable pass-through percentages for the 16 states and the District, as well as BJA’s Recovery Act JAG allocations to localities within the 16 states and the District and total Recovery Act JAG allocations. Federal funding for JAG has fluctuated significantly in recent years. From fiscal years 2007 through 2008, federal JAG appropriations were reduced by about 68 percent, from about $525 million to about $170 million. The Recovery Act provides $2 billion in JAG funds nationwide for state and local governments (see table 12). Using many of its existing grant award and oversight processes and procedures, BJA and the Office of Justice Programs (OJP)—which oversees BJA and establishes minimum standards for grant monitoring— have reported plans and taken steps to oversee, measure, and monitor Recovery Act JAG funds. For example, as part of BJA’s review of applications for JAG funding, BJA reviewed states’ grant funding history with OJP to identify any outstanding audit deficiencies, such as delinquent financial and programmatic reports regarding OJP funding. If any such deficiencies were identified, they were highlighted in the states’ award letter from BJA for Recovery Act JAG funding as special conditions requiring resolution by the state. According to BJA, 4 of the 16 states and the District in our review had at least one special condition requiring resolution that prohibited them from obligating or expending funds until the specific issues were resolved. As of June 30, 2009, 3 of these states and the District had resolved the issues and had received written approval from BJA releasing the funds. 
OJP is working with the remaining state to resolve the issues and release the special conditions. With respect to monitoring grants once they are awarded, OJP's plans include, among others, taking steps to track Recovery Act funds and assessing the performance of projects funded by these grants. For example, OJP's financial system allows it to track grantees' use of funds by program and project code, where project codes align with a grantee's program areas. As of June 30, 2009, project codes have been developed for the JAG program for Recovery Act funds. In addition, OJP plans to conduct programmatic, administrative, and financial monitoring of its Recovery Act grantees. This monitoring, among other activities, includes ongoing reviews of grantee compliance with program guidelines, as well as on-site monitoring of grantee performance. OJP has reported plans to conduct on-site monitoring of no less than 30 percent of open, active Recovery Act grant funding. Further, the Office of Audit, Assessment, and Management, within OJP, plans to collaborate with BJA to update monitoring procedures. For example, the office plans to develop guidance that focuses on monitoring Recovery Act grants by July 31, 2009. In addition, this office plans to complete quarterly reports on grantee data, such as compliance with requirements to submit performance measure data and how grantees are obligating funds, to identify grantees not complying with reporting requirements or program guidelines and enable timely follow-up to correct such deficiencies. In addition to other available courses, OJP plans to develop Web-accessible training for grantees, which is to cover topics such as Recovery Act reporting requirements, writing grant applications, and orientation for new grantees.
OJP also reported that it facilitated training sessions in the spring of 2009 for its employees on topics such as grant fraud detection and how to create grant award packages, and it has plans to facilitate training on monitoring Recovery Act grantees during fiscal year 2009. In addition to the two performance measures on the number of jobs created and preserved that are to be collected under the Recovery Act, BJA requires JAG grantees to report on additional performance measures for the specific activities that apply to the programs being funded through the Recovery Act. As of June 30, 2009, OJP has updated JAG program performance measures for grants awarded with Recovery Act funds. For example, if JAG Recovery Act funds are used to support a drug treatment program, the grantee would be required to report on the number of participants who completed the program, among other measures. BJA requires that these reports be submitted by grant recipients within 30 days after the end of each quarter. OJP has also developed an online performance measurement tool for JAG grantees to use to report these data, which it anticipates JAG fund recipients can begin using to report on the updated measures in July 2009. As of June 30, 2009, all 16 states and the District in our review have received their state award letters from BJA. 
Further, as of that date, 8 states reported having obligated a share of these funds:

Arizona (about $23.1 million obligated, or about 91 percent of its state award),
Colorado (about $13,700 obligated, or about 0.08 percent of its state award),
Florida (about $8,300 obligated, or about 0.01 percent of its state award),
Illinois (about $12.4 million obligated, or about 25 percent of its state award),
Massachusetts (about $12.7 million obligated, or about 51 percent of its state award),
Michigan (about $41.2 million obligated, or 100 percent of its state award),
Mississippi (about $57,000 obligated, or about 0.5 percent of its state award), and
Texas (about $4.6 million obligated, or about 5 percent of its state award).

The remaining 8 states and the District reported that no state Recovery Act JAG funds had yet been obligated. According to officials from the states’ administering agencies (SAA), who are responsible for, among other things, administering and setting priorities for the use of JAG funds for the state, they are in various stages of finalizing how these funds will be used—primarily the portion that is to be passed through to local entities, or subrecipients. Specifically: Four states are early in the request for proposal (RFP) process for local entities to apply for state pass-through funds. For example, Mississippi and New Jersey are developing their RFPs, while Pennsylvania and Illinois are beginning to collect proposals. New Jersey officials stated they are in the process of developing RFPs for local jurisdictions, while Mississippi officials similarly stated they plan to have a final RFP done in time to make awards by August 1, 2009. Pennsylvania issued its RFP on June 18, 2009, and plans to collect proposals from local entities until July 24, 2009, while Illinois officials stated they plan to begin soliciting applications from local law enforcement agencies in the first part of July 2009 and plan to notify applicants of funding recommendations in early August 2009. 
Eight states—Colorado, Florida, Georgia, Iowa, Massachusetts, New York, Ohio, and Texas—and the District of Columbia have received applications or letters of intent submitted by local entities for pass-through funding and are in the process of reviewing and in some cases also approving them. For example, according to Colorado officials, the state received 193 applications and is reviewing them for allowable costs, budgets, and a description of how the funds are to help create or retain jobs, among other items. Staff are also ranking the applications in preparation for their presentation and scoring by the state’s JAG board in early July 2009. In Massachusetts, an official noted the state is in different stages of reviewing and finalizing agreements with state agencies that are to receive a share of JAG funds and, for some funds, are awaiting final processing through the state comptroller. In Ohio, officials stated they are performing compliance reviews on the more than 500 applications received for JAG funding and plan to notify subrecipients of their awards by July 31, 2009. Three states have selected potential projects for funding and are awaiting final governing body approval. For example, according to state officials in California, the Legislature must approve the planning document for how JAG funds are to be used in the state in order for funds to be allocated to local agencies, and this approval has not yet occurred as of June 30, 2009. In North Carolina, the SAA has selected 85 eligible projects for JAG funding and is awaiting approval by the governor to proceed with allocating those funds. Similarly, in Michigan, an official stated that recommendations for grant awards have been sent to the Governor’s office for final approval and that contracts are to have a July 1, 2009, start date. One state has finalized and approved a list of projects to receive the state’s JAG award. 
Specifically, Arizona has selected and approved 36 projects that are to receive state Recovery Act JAG funds, and subrecipients are to have those funds available on July 1, 2009. Whatever their stage of progress, all 16 states and the District of Columbia have reported uses for their state Recovery Act JAG awards that are consistent with their states’ priorities and allowable uses of those funds, as determined by BJA. Table 13 shows planned uses of these funds for the 16 states and the District. BJA is in the process of reviewing and processing applications from local governments for Recovery Act JAG funding. The solicitation for this funding closed on June 17, 2009. As of June 30, 2009, BJA has awarded about 44 percent of allocated funds to local governments within the 16 states (see table 14). BJA officials stated they intend to award all of these local JAG funds by September 30, 2009. While Recovery Act JAG funds are calculated and administered under the same rules and structure as the existing JAG program, the Recovery Act introduces some new requirements for recipients. For example, recipients are required to track performance measures on the number of jobs created and preserved as a result of Recovery Act funds and must report certain financial and programmatic information—such as the amount of Recovery Act funds expended or obligated and an evaluation of the project’s completion status—to the Recovery Act central reporting Web site within 10 days after the end of each quarter. Officials from several of the 17 state administering agencies we visited noted concerns about subrecipients’ ability to meet the act’s reporting requirements for determining the number of jobs created and preserved, and the majority noted challenges to meeting the 10-day deadline for submitting quarterly reports on Recovery Act data. For example, state officials noted the need for additional guidance on how to determine whether JAG funds are contributing to job creation or job preservation. 
Specifically, officials in three states raised questions about how, if at all, grantees were to measure jobs that may be indirectly related to JAG fund expenditures. For example, if a grantee purchased three new police cruisers, how would it determine how many secondary jobs were retained or created at the car manufacturer? On June 22, 2009, the Office of Management and Budget (OMB) issued guidance on, among other things, how to report on job creation performance measures, including clarification that recipients should not attempt to report on the employment impact on material suppliers and central service providers (i.e., indirect jobs) that may be related to Recovery Act supported activities. Further, officials from the majority of states shared concerns over the Recovery Act requirement that recipients submit reports within 10 days of the end of each quarter. In previous years, JAG award recipients were required to provide programmatic reports to BJA on an annual basis—rather than on a quarterly basis, as required by the Recovery Act. Specifically, state officials were concerned that subrecipients would not be able to meet that deadline or that they might do so at the risk of the quality and accuracy of reporting. For example, officials in North Carolina stated they were concerned about some programs, specifically first-time subrecipients from nonprofit and faith-based organizations, not being prepared for compliance responsibilities, due to limitations in the numbers and experience of the staff who are to complete the reports. Officials stated that many of the subrecipients’ offices do not have the resources to prepare detailed reporting documents. 
Officials in Iowa expressed similar concerns about the 10-day reporting requirement and noted that some potential recipients—small law enforcement agencies with five or fewer officers or staff—may not apply for Recovery Act funds if they believe the reporting requirements are burdensome relative to the amount of JAG funds they might receive. Alternatively, officials noted that some recipients may choose to apply for funds and then spend them quickly, because the reporting requirement ends once the funds have been expended and reported on and the grant has been closed. Officials stated they are concerned about the accuracy of the information the administering agencies are to receive if the data are reported so quickly. For example, officials in Michigan noted that to meet performance measurement reporting deadlines, subrecipients are to submit reports within 5 days of the end of the quarter to allow time for the state administering agency to prepare and submit these reports. Officials in North Carolina noted that with an increased number of localities receiving awards compared with previous years, compliance with tracking and consolidating reporting requirements is expected to be more difficult. BJA officials stated they recognized these concerns and agreed that states may face challenges should they have hundreds of subrecipients for pass-through funds. To help subrecipients meet the reporting requirements, officials in many of the states and the District described plans to prepare entities for reporting, such as conducting training, implementing Web-based reporting, and clarifying the requirements with potential subrecipients. For instance, officials in North Carolina stated they plan to sponsor workshops to provide additional information about the Recovery Act reporting requirements to potential subrecipients. 
Officials in Illinois stated that while they had some concerns about timely reporting, they plan to require subrecipients to report on a monthly basis to the SAA, conduct training for subrecipients, and transition to an electronic system to facilitate tracking and reporting of funds. The District of Columbia and the states in our review reported they plan to use existing grants management processes to ensure that subrecipients are using JAG funds in accordance with BJA and Recovery Act requirements, as can be seen in the following examples: Arizona SAA officials reported that, as an established process, they used a peer-reviewed, risk-based scoring matrix to select subrecipients that considered, among other things, each applicant’s most recent Single Audit results, plans for evaluating the impact resulting from the use of such funds, and funding history with the SAA, including any past compliance issues. Once grants are awarded, SAA officials stated that they have a compliance team of six staff who are to perform ongoing financial and programmatic compliance reviews to ensure that subrecipients comply with grant guidance. For example, program compliance staff are to review subrecipients’ monthly and quarterly financial reports and identify any areas of concern, such as funds being expended too slowly or too quickly, questionable expenses, or monthly and quarterly reports that do not agree. Financial compliance staff are also to perform annual on-site visits that include financial audits as well as internal control inspections of, among other things, the accounting system and key financial documentation. Officials estimated that their workload is likely to double as a result of receiving additional funds through the Recovery Act and plan to use some of the state’s administrative JAG funds to hire additional staff to help manage the heightened Recovery Act requirements and increased number of subrecipients. 
District of Columbia SAA officials reported that they have established programmatic and financial procedures for separately tracking and reporting on all federal grant funding programs. The SAA requires subrecipients to provide detailed, separate monthly or quarterly financial reports on their federal funding that include supporting documentation on all expenses. These financial reports and reimbursement requests are tracked separately by the SAA in a grants management database as well as through the District’s financial system; additionally, the Office of the Chief Financial Officer is responsible for completing separate financial reports on each federal grant and for drawing down funds in line with grant expenditures. New Jersey SAA officials reported that they plan to monitor the use of JAG funds in several ways. First, the SAA plans to track expenditures through a separate code in its accounting system for Recovery Act funds, as required by the state and federal government. Second, the SAA plans to educate subrecipients on how to comply with funding rules by holding postaward conferences with subrecipients prior to the receipt of funds. Subsequently, subrecipients are to be required to submit monthly financial and programmatic reports to the SAA. Internally, the SAA plans to use existing program and fiscal analysts to track spending and compliance with financial and programmatic requirements. Officials said that they are exploring ways to increase the number of staff monitoring subrecipients, but because New Jersey is under a hiring freeze, any increase in staff to conduct this monitoring would likely come as a result of reassignments from other agencies or offices. Finally, SAA officials noted that an audit by the Office of the State Auditor should provide another layer of review regarding the use of JAG Recovery Act funds. 
Texas SAA officials reported that they plan to monitor performance and financial aspects of awarded funds to ensure that funds are used for authorized purposes. Also, the SAA, in coordination with the Office of the Governor’s Financial Services Division, plans to be able to account for, track, and report on federal funds resulting from the Recovery Act separately from other fund sources. According to the SAA officials, this will allow each award to be directly tied to accounting codes, giving the Governor’s office the ability to account for, track, and report separately on these funds. Texas also contracts with the Public Policy Research Institute at Texas A&M University to maintain a Web-based data collection system that can retrieve and analyze program performance data, and the state plans to continue to do so to support Recovery Act reporting requirements. The Recovery Act appropriated $5 billion over a 3-year period for the Weatherization Assistance Program, which the U.S. Department of Energy (DOE) administers through each of the states, the District of Columbia (District), and seven territories and Indian tribes. According to DOE, during the past 32 years, the program has assisted more than 6.2 million low-income families in reducing their utility bills by making long-term energy-efficiency improvements to their homes. For example, by installing insulation, sealing leaks around doors and windows, or modernizing heating equipment, the weatherization program allows these households to spend their money on more pressing family needs. The Recovery Act appropriation represents a significant increase for a program that has received about $225 million per year in recent years. In response to the Recovery Act, DOE announced on March 12, 2009, that the 50 states, the District, and seven U.S. territories and Indian tribes are eligible to receive weatherization formula grants. Each of the 16 states and the District in our review submitted an initial grant application. 
As shown in table 15, DOE then provided each with an initial 10 percent of its formula funds with the stipulation that the funds could be used only for such start-up activities as preparing a state weatherization plan, hiring and training staff, and purchasing needed equipment but could not be used for the production of weatherized homes. Subsequently, on June 9, 2009, DOE lifted this prohibition for local agencies that have previously provided services and are included in a state’s plan, in response to states’ concern that their local agencies were ready to begin weatherization activities but lacked funding. Most of the states reported that they have used little if any of the initial 10 percent allocation of Recovery Act funds. In fact, some state weatherization agencies have not received any of their DOE allocation because the funds are being held at the state level. For example, Georgia has not spent the 10 percent allocation because the action plan required by the governor is still under review. In Pennsylvania, the funds must be appropriated through the state budget process, and the budget has not yet been approved. Other states decided not to use the funds until July 1, 2009, for a variety of reasons. Illinois waited until July 1 to begin spending the weatherization funds because of DOE’s initial guidance that funds could not be used for weatherization production activities. Massachusetts did not spend any of the initial allocation until the beginning of the state’s fiscal year on July 1. Meanwhile, as of June 30, 2009, Florida reported obligating $113,000 of its $17.6 million initial allocation for start-up activities, such as hiring and training staff. All of the states in our review submitted state weatherization plans to DOE by May 12, 2009. 
State officials told us that DOE’s funding announcement and e-mail messages had provided them with the guidance needed to complete their weatherization plans, which outline the states’ plans for using the weatherization funds and for monitoring and measuring performance, among other things. DOE’s goal is to approve 80 percent of all state weatherization plans by the end of July 2009. DOE is providing the next 40 percent of weatherization funds to a state once the weatherization plan is approved. DOE plans to release the final 50 percent of the funding to each state based on the department’s progress reviews examining each state’s performance in spending its first 50 percent of the funds and the state’s compliance with the Recovery Act’s reporting and other requirements. As shown in table 16, as of June 30, 2009, DOE had approved the state weatherization plans for Arizona, California, Florida, Georgia, Illinois, Mississippi, New York, North Carolina, Ohio, and the District, enabling them to receive the next 40 percent of their funds. Most states expect DOE approval of their plans by mid-July. However, the timing of DOE’s approval could be an issue for some states. For example, Colorado officials in the Governor’s Energy Office expressed concern about the timing of DOE’s approval because their plan is designed to begin on July 1, the beginning of the state’s fiscal year. DOE’s June 9 revised guidance provided the states with some additional flexibility for using the initial 10 percent of funds. DOE’s continued communication with the states on the timing of the approval of state plans will be important in minimizing possible disruptions of states’ efforts to implement their weatherization programs. In addition, officials in nine of the states in our review expressed concern that the Recovery Act requires that weatherization contractors and subcontractors pay their laborers and mechanics at the locally prevailing wage rates, as determined by the U.S. 
Secretary of Labor. Because prior DOE weatherization funding did not have this requirement, questions have been raised about how the requirement should be implemented. For example, it creates the possibility that workers could be paid at different wage rates for the same work, depending on the source of funds. Pennsylvania officials noted that local community action agencies may have difficulty tracking the number of hours worked by employees who perform tasks at both prevailing and nonprevailing wage rates. We will continue to monitor the implementation of this requirement. As shown in table 17, each of the states in our review has provided its plans for using its Recovery Act weatherization allocation by breaking expenditures into program operations, administration, training and technical assistance, and other activities. All of the states propose to spend at least 50 percent of their allocation on program operations, ranging from 53 percent in California to 90 percent in Massachusetts. According to DOE, variances among the states in the percentage of funds devoted to program operations reflect different levels of maturity in, for example, providing the infrastructure needed to achieve the administration’s overall goal of weatherizing 1 million houses per year. DOE’s funding announcement directs the states to report on the number of housing units weatherized, the resulting energy savings, and the number of jobs created. Table 18 shows the number of housing units that states expect to weatherize using Recovery Act funds, according to states’ weatherization plans. While many of the weatherization plans estimate expected energy savings, they do not use a consistent unit of measurement or time frame. Few of the states’ weatherization plans present an estimate of the expected jobs created. DOE officials told us that OMB will issue additional guidance to the states regarding a consistent methodology for making this calculation. 
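The phased release of weatherization funds described earlier (an initial 10 percent for start-up activities, the next 40 percent once DOE approves the state plan, and the final 50 percent after DOE’s progress reviews) amounts to simple proportional tranches. The sketch below illustrates this using Florida’s figures: a $17.6 million initial allocation implies a total allocation of roughly $176 million.

```python
def weatherization_tranches(total_allocation):
    """Tranche amounts under the phased release described in the text:
    10 percent up front for start-up, 40 percent on state plan approval,
    and the final 50 percent after DOE's progress reviews."""
    return {
        "initial_10_percent": 0.10 * total_allocation,
        "on_plan_approval_40_percent": 0.40 * total_allocation,
        "final_50_percent": 0.50 * total_allocation,
    }

# Florida's initial 10 percent allocation was about $17.6 million,
# implying a total allocation of roughly $176 million:
tranches = weatherization_tranches(176e6)
print(tranches["initial_10_percent"])  # 17600000.0
```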
The Office of Management and Budget estimates that, in addition to the existing federal grants to states and territories, federal obligations of Recovery Act funds for states and territories will be about $149 billion in federal fiscal year 2009. Federal grants represented the second-largest share of funding for state and local governments in 2008 (about 20 percent, or $388 billion). As shown in figure 12, state and local tax receipts constituted the largest share of funding for state and local governments in 2008 (about 68 percent, or $1.3 trillion). State revenue continued to decline, and states used Recovery Act funding to reduce some of their planned budget cuts and tax increases to close current and anticipated budget shortfalls for fiscal years 2009 and 2010. Of the 16 states and the District, 15 estimate that fiscal year 2009 general fund revenue collections will be less than in the previous fiscal year. For example, in Georgia, the state’s net revenue collections for May 2009 were 14.4 percent less than they were in May 2008, representing a decrease of approximately $212 million in total tax and other collections. On May 28, 2009, the lower-than-expected revenue projections led the Governor to instruct the Office of Planning and Budget to reduce available funds by 25 percent for the month of June (the last month of fiscal year 2009). In Michigan, fiscal year 2008-2009 revenue collections are estimated to be $1.9 billion—or 20.6 percent—less than fiscal year 2007-2008 collections, putting current revenue estimates below 1971 levels when adjusted for inflation. The 2 remaining states—Iowa and North Carolina—had revenues that were lower than projected. As shown in figure 13, data from the Bureau of Economic Analysis (BEA) also indicate that the rate of state and local revenue growth has generally declined since the second quarter of 2005 and was negative in the fourth quarter of 2008 and the first quarter of 2009. 
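The Georgia figures above are internally consistent: a 14.4 percent decline equal to about $212 million implies May 2008 collections of roughly $1.47 billion. A minimal sketch of that back-of-the-envelope check:

```python
def implied_base(decrease_amount, decline_rate):
    """Back out the prior-period collections implied by a dollar decrease
    and a fractional decline rate (base * rate = decrease)."""
    return decrease_amount / decline_rate

# Georgia: May 2009 collections were 14.4 percent (about $212 million)
# below May 2008, implying May 2008 collections of roughly $1.47 billion.
base = implied_base(212e6, 0.144)
print(round(base / 1e9, 2))  # 1.47
```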
Officials in most of the selected states and the District expect these revenue trends to contribute to budget gaps (estimated revenues less than estimated disbursements) anticipated for future fiscal years. All of the 16 states and the District forecasted budget gaps in state fiscal year 2009-2010 before budget actions were taken. New York’s enacted budget for fiscal year 2009-2010 closed what state officials described as the largest budget gap ever faced by the state. The combined New York current services budget gaps totaled $2.2 billion in fiscal year 2008-2009 and $17.9 billion in 2009-2010 before the state instituted corrective budget actions and received Recovery Act funding. In California, the governor projects a $24.3 billion budget gap in fiscal years 2008-2009 and 2009-2010, created in large part by lower revenue estimates. Florida, which recently passed a $66.5 billion budget for the state’s 2009-2010 fiscal year, faced what officials estimated as a $4.8 billion gap in general funds before corrective budget actions were taken. Consistent with one of the purposes of the act, states’ use of Recovery Act funds to stabilize their budgets helped them minimize or avoid reductions in services as well as tax increases. States took a number of actions to balance their budgets in fiscal year 2009-2010, including staff layoffs, furloughs, and program cuts. The use of Recovery Act funds affected the size and scope of some states’ budgeting decisions, and many of the selected states reported they would have had to make further cuts to services and programs without the receipt of Recovery Act funds. For example, California, Colorado, Georgia, Illinois, Massachusetts, Michigan, New York, and Pennsylvania budget officials all stated that current or future budget cuts would have been deeper without the receipt of Recovery Act funds. 
Recovery Act funds helped cushion the impact of states’ planned budget actions, but officials also cautioned that current revenue estimates indicate that additional state actions will be needed to balance future-year budgets. Future actions to stabilize state budgets will require continued awareness of the maintenance-of-effort (MOE) requirements for some federal programs funded by the Recovery Act. For example, Massachusetts officials expressed concerns regarding MOE requirements attached to federal programs, including those funded through the Recovery Act, as future across-the-board spending reductions could pose challenges for maintaining spending levels in these programs. State officials said that MOE requirements that call for maintaining spending levels based upon prior-year fixed dollar amounts will pose more of a challenge than upholding spending levels based upon a percentage of program spending relative to total state budget expenditures. States’ current uses of Recovery Act funds helped fund and maintain staffing for existing programs. In Arizona, state budget officials said that Recovery Act funding enabled the state to, among other things, reduce the number of furloughs and layoffs, avoid some service reductions, maintain state employee benefit levels, and prevent some contract delays and reductions that otherwise would have occurred. Similarly, officials in Mississippi plan to use Recovery Act funds to help stabilize the state’s budget and support local governments, particularly school districts. For example, officials at the two local education agencies and three institutions of higher education we visited told us that they plan to use Recovery Act funds to avoid layoffs and hire new staff. 
Officials in the District told us that because they knew the Recovery Act funds were coming while they were developing the fiscal year 2010 budget, they did not have to create a budget scenario in which additional actions, such as furloughs, were necessary to fill the anticipated revenue gap. Similarly, Colorado officials also knew early on that Recovery Act funds were coming—particularly the increased federal share of Medicaid—thereby making state funds that would have been used to pay the state share of Medicaid available for avoiding certain budget actions including additional furloughs. In New Jersey, although budget officials anticipated receiving Recovery Act funds before the state finalized its 2010 budget, this did not preclude the state from including personnel actions such as furloughs and wage freezes to aid in closing the projected budget gap. In Iowa, for the fiscal year 2009 budget, Recovery Act funding allowed state agencies to avoid program cuts as well as mandatory layoffs and furloughs. In addition to these budget actions, some states also reported accelerating their use of Recovery Act funds to stabilize deteriorating budgets. For example, in Georgia, lower-than-expected revenue numbers caused the state to use more Recovery Act funds in state fiscal year 2009 than it had anticipated using. In Massachusetts, state officials said that accelerating their use of Recovery Act and state rainy-day funds was the most viable solution to balance their budget. Massachusetts officials reported that the state had hoped to leave a sizable amount of its State Fiscal Stabilization Fund (SFSF) allocation available for 2011 but changed its planned approach because of its deteriorating fiscal condition. Using more of these funds in the 2008-2009 state fiscal year may make it more difficult for the state to balance its budget after Recovery Act funds are no longer available. 
California’s dire fiscal condition prompted the state to accelerate the use of its Recovery Act funds, along with a number of additional measures to reduce the state’s 2008-2009 budget gap. Many states, such as Colorado, Florida, Georgia, Iowa, New Jersey, and North Carolina, also reported tapping into their reserve or rainy-day funds to balance their budgets. In most cases, the receipt of Recovery Act funds did not prevent the selected states from tapping into their reserve funds, but a few states reported that without the receipt of Recovery Act funds, withdrawals from reserve funds would have been greater. Officials from Georgia stated that although they have already used reserve funds to balance their fiscal year 2009 and 2010 budgets, they may use additional reserve funds if, at the end of fiscal year 2009, revenues are lower than the most recent projections. In contrast, New York officials stated they were able to avoid tapping into the state’s reserve funds because of the increased Medicaid FMAP funds provided by the Recovery Act. States’ approaches to developing exit strategies for the use of Recovery Act funds reflect the balanced-budget requirements in place for all of our selected states and the District. Budget officials referred to the temporary nature of the funds and fiscal challenges expected to extend beyond the timing of funds provided by the Recovery Act. Officials discussed a desire to avoid what they referred to as the “cliff effect” associated with the dates when Recovery Act funding ends for various federal programs. Budget officials in some of the selected states are preparing for the end of Recovery Act funding by using funds for nonrecurring expenditures and hiring limited-term positions to avoid creating long-term liabilities. Representatives of the Texas Governor’s office also told us that their office has advised state agencies that much of the funding is temporary. 
The Texas Legislature provided similar guidance in the conference committee report for the appropriations bill, directing state agencies to “give priority to expenditures that do not recur beyond the 2010-2011 biennium.” In Ohio, budget officials remain focused on budgeting for the coming biennium (2010-2011), but key legislators have queried state officials during budget deliberations about their plans for the next biennium (2012-2013), when federal Recovery Act funding is no longer available. A few states reported that although they are developing preliminary plans for phasing out Recovery Act funds, further planning has been delayed until revenue and expenditure projections are finalized. For example, while Georgia’s Governor has encouraged state agencies to spend funds judiciously and take into consideration that the funding is temporary, the state is still in the process of developing a strategy for winding down its use of Recovery Act funds. In part, such a strategy is dependent on revenue and expenditure projections, which will be updated as part of the fiscal year 2011 budget planning process. In addition, risk mitigation plans currently being developed by state agencies may affect the state’s exit strategy. Some states are in the process of developing exit strategies aligned with planning for broader fiscal challenges. In North Carolina, the state’s recovery office hired a temporary staff person to look at some of the factors that may have caused the state’s economic slowdown, as well as to help plan for an exit strategy after Recovery Act funds end. Officials in Illinois also said that they plan to convene a working group to assess state agencies’ level of preparedness for the end of Recovery Act funding. They have issued guidance to state agencies regarding the use of the funds and have directed agencies to submit hiring plans containing provisions that mitigate the risk of layoffs, such as hiring temporary employees and contractors. 
Given that Recovery Act funds are to be distributed quickly, effective internal controls over the use of funds are critical to help ensure effective and efficient use of resources, compliance with laws and regulations, and accountability over Recovery Act programs. Internal controls include management and program policies, procedures, and guidance that help ensure effective and efficient use of resources; compliance with laws and regulations; prevention and detection of fraud, waste, and abuse; and the reliability of financial reporting. Management is responsible for the design and implementation of internal controls, and the states in our review have a range of approaches for implementing them. Some states have internal control requirements in their state statutes, while others have undertaken internal control programs as management initiatives. In our sample, seven states—California, Colorado, Florida, Michigan, Mississippi, New York, and North Carolina—noted they have statutory requirements for internal control programs and activities. The other nine states—Arizona, Georgia, Illinois, Iowa, Massachusetts, New Jersey, Ohio, Pennsylvania, and Texas—noted they have undertaken various internal control programs. In addition, the District of Columbia has taken limited actions related to its internal control program. An effective internal control program helps an organization manage change and cope with shifting environments and evolving demands and priorities, such as those the Recovery Act entails. Internal controls need to be continually assessed and evaluated by management as programs change and entities strive to improve operational processes. Risk assessment and monitoring are key elements of internal control, and the states and the District in our review have undertaken a variety of actions in the area of risk assessment.
Risk assessment involves performing comprehensive reviews and analyses of program operations to determine if internal and external risks exist and to evaluate the nature and extent of any risks identified. Approaches to risk analysis can vary across organizations because of differences in missions and in the methodologies used to qualitatively and quantitatively assign risk levels. Monitoring activities include the systematic process of reviewing the effectiveness of the operation of the internal control system. These activities are conducted by management, oversight entities, and internal and external auditors. Monitoring enables stakeholders to determine whether the internal control system continues to operate effectively over time. It also improves the organization’s overall effectiveness and efficiency by providing timely evidence of changes that have occurred, or might need to occur, in the way the internal control system addresses evolving or changing risks. Monitoring also provides information and feedback to the risk assessment process. In California, the Office of State Audits and Evaluations (OSAE) has primary responsibility for reviewing whether state agencies receiving Recovery Act funds have established adequate systems of internal control to maintain accountability over those funds. According to state officials, OSAE is using two primary approaches to assess internal controls at agencies receiving Recovery Act funds—Financial Integrity and State Manager’s Accountability Act of 1983 (FISMA) reviews (an existing internal control assessment tool) and readiness reviews (a new internal control assessment tool). Both the FISMA reviews and the readiness reviews rely primarily on information that is self-certified by agency officials.
FISMA requires each state agency to maintain effective systems of internal accounting and administrative control, to evaluate the effectiveness of these controls on an ongoing basis, and to biennially review and prepare a report on the adequacy of the agency’s systems of internal accounting and administrative control. The state of Colorado enacted the State Department Financial Responsibility and Accountability Act in 1988, which requires each principal department of the state’s executive branch to institute and maintain systems of internal accounting and administrative control—including an effective process of internal review and adjustment for changing conditions. The act also requires the head of each principal department to annually state in writing whether the department’s systems of internal accounting and control either do or do not fully comply with the act’s requirements. While the Controller’s office ensures that these statements are filed every year, historically, the Controller has not had the resources to ensure that proper internal controls are in place. The Controller’s office is developing an internal control toolkit that will provide state departments with information on internal control systems and checklists to formalize and improve their existing processes and identify potential weaknesses. In addition, the Controller’s office is in the process of filling its internal auditor position, which has been vacant for over 2 years. According to the Controller, the auditor will work with state departments to promote and monitor internal controls, as well as monitor proper tracking and reporting of Recovery Act funds. Florida law also places the responsibility for internal controls on state agencies.
A Florida statute requires the agencies to establish and maintain management systems and controls that promote and encourage compliance; economic, efficient, and effective operations; reliability of records and reports; and safeguarding of assets. While Florida law requires state agencies to have such internal controls, state oversight agencies are also preparing for the infusion of Recovery Act funds into the state. Annually, the Florida Department of Financial Services obtains representation letters from agency heads stating that they are responsible for establishing and maintaining effective controls over financial reporting and for preventing and detecting fraud for all funds administered by their agency. Department of Financial Services officials stated that, this year, they will also ask the agency heads to sign a separate representation letter for Recovery Act funds stating that internal controls are in place for Recovery Act funds and that these funds will be tracked separately from other funds. New York State has also enacted internal control requirements into law. The law requires, among other things, that each agency establish and maintain a system of internal controls and a review program, designate an internal control officer, and periodically evaluate the need for an internal audit function. In addition, to fulfill the requirements of the New York State Governmental Accountability, Audit and Internal Control Act (New York Internal Control Act), the Office of the State Comptroller is responsible for developing the Standards for Internal Control in New York State Government. The Internal Control Act requires that the State Division of the Budget (DOB) periodically (1) issue a list of agencies covered by the act and (2) issue a list of agencies required to have an internal audit function.
Beyond these two statutory requirements, DOB has also taken administrative steps to facilitate and support the goals of the Internal Control Act through the issuance of additional guidance and an annual internal control certification requirement. Based on DOB’s Governmental Internal Control and Internal Audit Requirements manual, the system of internal control should be developed using the Committee of Sponsoring Organizations of the Treadway Commission (COSO) conceptual framework and should incorporate COSO’s five basic components of internal control. North Carolina has enacted the State Governmental Accountability and Internal Control Act, requiring the Office of the State Controller to establish statewide internal control standards. The Office of the State Controller is implementing a statewide internal control program called EAGLE (Enhancing Accountability in Government through Leadership and Education). The purpose is not only to establish adequate internal control, but also to increase fiscal accountability within state government. North Carolina is using a phased approach to implement the EAGLE program. In Phase I, state agencies and state universities are required to perform an annual assessment of internal control over financial reporting. This risk assessment benefits the agencies by identifying risks and compensating controls that reduce the possibility of material misstatements of financial reports and misappropriation of assets, as well as opportunities to increase efficiency and control effectiveness in business processes and operations. In January 2008, the State Controller requested that each agency appoint an Internal Control Officer to lead the agency’s risk assessment team and monitor the agency’s compliance with EAGLE requirements.
Phase II of the program will address “efficiency of operations” and Phase III will address “compliance with laws and regulations.” In accordance with Mississippi’s statutory requirement to maintain continuous internal audit over the activities of each state agency, Mississippi has implemented a program of internal control. First, Mississippi has required each state agency to certify in writing that it has conducted an evaluation of internal controls and that the findings of the evaluation provide reasonable assurance that the assets of the agency have been preserved, duties have been segregated by function, and transactions are executed in accordance with the laws of the state of Mississippi. As part of maintaining appropriate controls, the Department of Finance and Administration directed all state agency executive and finance directors to conduct a comprehensive review of their agency’s internal control structure to determine if it is functioning properly and in accordance with the agency’s internal control plan; determine whether the internal control structure has been updated to address operational or procedural changes made during the period under review to processes, program areas, or functions; identify internal control weaknesses; initiate actions to ensure that control weaknesses discovered during the period under review, and in prior periods, have been adequately addressed; and give immediate attention to all internal control-related findings and recommendations reported by auditors during the year. Second, in addition to the certification required of all state agencies, the Department of Finance and Administration is requiring another certification of agencies receiving Recovery Act funds. Agencies must certify that they accept responsibility for spending the funds as responsibly and effectively as possible while maintaining the appropriate controls and reporting mechanisms to ensure accountability and transparency in compliance with the Recovery Act.
The certifications also include an agency’s guarantee that program risks are, or will be, identified and that the agency has implemented, or will implement, internal controls sufficient to mitigate the risk of waste, fraud, and abuse. Finally, the Department of Finance and Administration established an internal control unit that is reviewing agency letters of certification and expects to weigh all agencies’ internal control assessments, as well as the findings and corrective action plans noted by the State Auditor in the 2007 and 2008 Mississippi Single Audit Reports, to decide which agencies receiving Recovery Act funds should initially be the focus of the unit’s monitoring activities. Although not acting under a statutory requirement, Georgia is taking steps to monitor and safeguard Recovery Act funds at the state and program levels. Georgia has established a Recovery Act Accountability and Transparency Support Team comprising representatives from the Office of Planning and Budget, the State Accounting Office, and the Department of Administrative Services (the department responsible for procurement). In May 2009, the Georgia Office of Planning and Budget issued a risk management handbook to all state agencies. Its purpose is to provide a process that allows agencies to identify potential Recovery Act risk areas and develop risk mitigation strategies for each individual funding source. The State Accounting Office developed the agency self-assessment questionnaire that accompanied the risk management handbook. This survey included questions about compiling Recovery Act data for reporting purposes, the specific contracting requirements in the Recovery Act that are not current agency practices, and agency internal controls. The State Accounting Office plans to use the results to target its audit efforts. Ohio has made strides in refining its internal control process to accommodate Recovery Act funds.
The state Office of Budget and Management issued guidance on risk assessment in March 2009, highlighting the risk mitigation strategies that all state agencies should have in place to ensure that management controls are operating effectively to identify and prevent wasteful spending and to minimize waste, fraud, and abuse. The new Office of Internal Audit is working with state agencies to develop and evaluate these risk assessments. Based on these agency risk assessments, the Office of Internal Audit is developing an oversight strategy that it will present to the Audit Committee on June 30, 2009. Although the District of Columbia (District) government and agencies have internal controls, the controls are not consolidated into a citywide internal control program, and past reports have identified numerous weaknesses in the District’s internal controls. The District’s Office of Inspector General (OIG) has issued reports that identified weaknesses in the District’s internal controls and made several recommendations for improvement. One report recommends that the Chief Financial Officer (CFO), in conjunction with the City Administrator, issue citywide guidance requiring managers to establish, assess, correct, and report on internal controls and that these requirements be reflected in personnel performance plans. In addition, the fiscal year 2007 Single Audit report for the District of Columbia identified 89 material weaknesses in internal controls over both financial reporting and compliance with requirements applicable to major federal programs, including Medicaid’s FMAP, ESEA Title I Education grants, and Workforce Investment Act programs, all of which are receiving Recovery Act funds. The findings were significant enough to result in a qualified opinion for that section of the report.
In September 2008, the Office of the Chief Financial Officer (OCFO) contracted with an independent accounting firm to identify areas with internal control problems and deficiencies in the office. The review may help direct OCFO in developing an internal control program. The assessments will not be available until the end of 2009. When the firm has completed its OCFO assessment, it will expand its review to District agencies. States and localities receiving Recovery Act funds directly from federal agencies are responsible for tracking and reporting on those Recovery Act funds. An effective internal control program is critical to preparing reliable financial statements and other financial reports. OMB has issued guidance to the states and localities that provides for separate identification, or “tagging,” of Recovery Act funds, so that specific reports can be created and transactions can be specifically identified as Recovery Act funds. The flow of federal funds to the states varies by program. As we have previously reported, grant programs generally have different objectives and strategies that are reflected in their application, selection, monitoring, and reporting processes. Multiple federal entities are involved in grants administration; the grantor agencies have varied grants management processes; the grantee groups are diverse; and grants themselves vary substantially in their types, purposes, and administrative requirements. The federal grant system is highly fragmented. Several states and the District of Columbia have created unique codes for their financial systems in order to tag Recovery Act funds. The District of Columbia’s Office of Finance and Treasury (OFT) has established a bank account exclusively for depositing Recovery Act funds. Most states plan to use their current financial systems to track and report on Recovery Act funds, but various challenges exist.
For instance, because the state of Arizona is decentralized, recording and tracking responsibility lies with the state agencies, which have different accounting systems. The state agencies will need to periodically transfer accounting data from their systems to the state’s system. Georgia is segregating funds through a set of Recovery Act fund sources in the state’s financial accounting system. Georgia’s State Accounting Office issued guidance on Recovery Act accounting stating that state agencies that do not use the state’s financial accounting system, such as the Georgia Department of Labor, must ensure that the data are maintained in accordance with all Recovery Act financial reporting requirements. California’s Recovery Task Force (Task Force), which has overarching responsibility for ensuring that California’s Recovery Act funds are spent efficiently and effectively, intends to use its existing internal control and oversight structure, with some enhancements, to maintain accountability for Recovery Act funds. State agencies, housing agencies, and other local Recovery Act funding recipients we interviewed all told us that using separate accounting codes within their existing accounting systems will enable them to effectively track Recovery Act funds. However, officials told us that accumulating this information at the statewide level will be difficult using existing mechanisms. The state, which is currently relying on lengthy, manually updated spreadsheets, is awaiting additional federal OMB guidance to design and implement a new system to effectively track and report statewide Recovery Act funds. Most state and local program officials told us they will apply the existing controls and oversight processes that they currently apply to other program funds to oversee Recovery Act funds.
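The “tagging” approach states describe, in which separate accounting codes let Recovery Act transactions be reported apart from other federal funds, can be sketched in a few lines of Python. The fund codes and record layout below are illustrative assumptions, not any state’s actual chart of accounts:

```python
from collections import defaultdict

# Hypothetical fund-source codes for Recovery Act money; in practice each
# state defines its own chart-of-accounts values (these are illustrative).
RECOVERY_ACT_CODES = {"ARRA-ED", "ARRA-MED"}

def summarize_by_fund(transactions):
    """Roll up expenditures by fund-source code so Recovery Act dollars
    can be reported separately from other federal funds."""
    totals = defaultdict(float)
    for txn in transactions:
        totals[txn["fund_code"]] += txn["amount"]
    return dict(totals)

def recovery_act_total(transactions):
    """Sum only the transactions tagged with a Recovery Act fund code."""
    return sum(t["amount"] for t in transactions
               if t["fund_code"] in RECOVERY_ACT_CODES)

# Illustrative ledger entries mixing Recovery Act and other federal funds.
ledger = [
    {"fund_code": "ARRA-ED", "amount": 1_000_000.00},
    {"fund_code": "FED-TITLE1", "amount": 250_000.00},
    {"fund_code": "ARRA-MED", "amount": 500_000.00},
]
assert recovery_act_total(ledger) == 1_500_000.00
```

Because every transaction carries its fund-source code from the moment it is recorded, a Recovery Act-only report is a simple filter rather than an after-the-fact reconstruction, which is the point of the OMB tagging guidance.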
Officials from the Texas State Comptroller’s Office repeated their concern in May 2009 that the federal government was not identifying Recovery Act funds separately from other federal funds disbursed to the state. Absent separate coding from the U.S. Department of the Treasury, the Comptroller relies on state agencies to inform the Comptroller’s office what portion of federal funds are Recovery Act funds. Texas officials cited federal fund transfers to the Texas Workforce Commission and the Texas Health and Human Services Commission as examples of this fund identification problem. The Texas officials commented that it would be helpful if the federal government put in place a coding structure to identify Recovery Act funds separately from other federal funds—as they believe the Recovery Act requires—before Recovery Act funds are disbursed to Texas. State agency officials told us they do not share the Comptroller’s concern because they are able to distinguish between their normal federal funds and Recovery Act funds when initiating fund transfers. The District of Columbia has also experienced a challenge. Its Office of Finance and Treasury (OFT) has established a bank account exclusively for depositing Recovery Act funds. Agencies are notified by OFT when Recovery Act funds are received in the bank account. All Recovery Act revenue received will be tracked by OFT in a separate database. When Recovery Act funds are ready to be distributed from federal agencies to District agencies, Recovery Act grant funding notifications are sent directly to the District agencies. When an agency receives a grant funding notification, it is the agency’s responsibility to report the receipt to the Office of Budget and Planning (OBP).
OBP provides weekly reports of grant funding notifications, which the agencies reconcile. OBP stated that this process has produced discrepancies in grant information and that it is working on a solution. Mississippi is revising most of the state’s central accounting and reporting systems. The Department of Finance and Administration (DFA) is making changes to the Statewide Automated Accounting System (SAAS), which tracks purchasing, accounts payable, revenues, and accounts receivable and includes Mississippi’s general ledger. The current use of reporting categories does not allow DFA to tie individual obligations or expenditures to the contract for which they were incurred. However, DFA is in the process of making modifications to the state central accounting system that will allow it to do so. Once completed, these changes will provide greater transparency of Recovery Act fund usage. For example, the changes will allow the public to view online Recovery Act contracts and expenditures for specific contracts. In addition, the changes will add further system controls, such as the ability to deny the obligation of funds until state agencies have posted the contract that supports the obligation. In addition to being an important accountability mechanism, the results of audits can provide valuable information for management’s risk assessment and monitoring processes. The Single Audit report, prepared to meet the requirements of the Single Audit Act, as amended (Single Audit Act), is a source of information on internal control and compliance findings and the underlying causes and risks. The report is prepared in accordance with OMB’s implementing guidance in OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, which provides guidance to auditors on selecting federal programs for audit and the related internal control and compliance audit procedures to be performed.
A Single Audit report includes the auditor’s schedule of findings and questioned costs, internal control and compliance deficiencies, and the auditee’s corrective action plans, as well as a summary of prior audit findings that includes planned and completed corrective actions. The Single Audit Act requires that a nonfederal entity subject to the act transmit its reporting package to a federal clearinghouse designated by OMB within 9 months after the end of the period audited. In our April 2009 report, we reported that the guidance and criteria in OMB Circular No. A-133 do not adequately address the substantial added risks posed by the new Recovery Act funding. Such risks may result from (1) new government programs, (2) the sudden increase in funds or programs that are new to the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. With some adjustment, the Single Audit could be an effective oversight tool for Recovery Act programs, addressing risks associated with all three of these factors. Our April report included recommendations that OMB adjust the current audit process to focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures in 2010; and evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. Since April, OMB has taken several steps in response to our recommendations. However, those actions do not sufficiently address the risks that led to our recommendations. In OMB’s view, it is limited in its options to address our concerns due to specific requirements set forth in the Single Audit Act.
The Single Audit Act charges OMB with, among other things, prescribing the risk-based criteria auditors use to select federal programs for Single Audit compliance and internal control testing. To focus auditor risk assessments on Recovery Act-funded programs and to provide guidance on internal control reviews for Recovery Act programs, OMB is working within the framework defined by existing mechanisms—Circular No. A-133 and the Compliance Supplement. In this context, OMB has made limited adjustments to its Single Audit guidance and is planning to issue additional guidance. Following is the status of OMB’s actions related to our April recommendations. In our April report, we recommended that OMB focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding. On May 26, OMB made available the 2009 edition of the Circular A-133 Compliance Supplement. The new Compliance Supplement includes two items intended to focus auditor risk assessment on Recovery Act funding: a requirement that auditors specifically ask auditees about, and be alert to, expenditure of funds provided by the Recovery Act, and an appendix that highlights some areas of the Recovery Act affecting single audits. The appendix adds a requirement that large programs and program clusters with Recovery Act funding cannot be assessed as low risk for the purposes of program selection without clear documentation of the reasons that the expenditures of Recovery Act awards are low risk for the program. The appendix also states that recipients are to separately identify expenditures for Recovery Act programs on the Schedule of Expenditures of Federal Awards.
It also notes that compliance requirements unique to Recovery Act-funded programs are not included in the Compliance Supplement and advises auditors to review award documents, check the OMB Web site for addenda to the supplement, and use the existing Compliance Supplement framework as guidance to identify material Recovery Act compliance requirements. OMB has not yet identified the program groupings critical to auditors’ selection of programs to be audited for compliance with program requirements. As we reported in April 2009, the current approach prescribed by OMB Circular No. A-133 relies heavily on the amount of federal expenditures in a program during a fiscal year and on whether findings were reported in the previous period to determine whether detailed compliance testing is required for that year. In some cases, OMB requires that auditors group closely related programs that share common compliance requirements and consider them as one program when selecting programs for testing. OMB specifically identifies these groups of programs, called “clusters,” in the Compliance Supplement. OMB has noted that many of the Recovery Act awards will share common compliance requirements with existing programs and that the Compliance Supplement cluster list will be updated to include Recovery Act programs. OMB is currently considering ways to cluster programs for Single Audit selection that would make it more likely that Recovery Act programs would be selected and, therefore, subjected to internal control and compliance testing; the dollar formulas, however, would not change under this plan. This approach may not provide sufficient assurance that smaller, but nonetheless significant, Recovery Act-funded programs would be selected for audit. OMB plans to issue the new cluster information by mid-July 2009. In addition, the 2009 Compliance Supplement to OMB’s Circular No.
A-133 does not yet provide specific auditor guidance for new programs funded by the Recovery Act or for new compliance requirements specific to Recovery Act funding within existing programs that may be selected as major programs for audit. For instance, there is currently no program-specific audit guidance included in the Compliance Supplement on the new State Fiscal Stabilization Fund programs, which are significant programs administered by the Department of Education to support education and other government services and for which federal funds are already flowing to the states. OMB acknowledges that additional guidance is called for and is in the process of drafting such guidance. OMB plans to issue an addendum to the Compliance Supplement that would address some Recovery Act-related compliance requirements by mid-July 2009. In our April 2009 report, we recommended that OMB adjust the current Single Audit process to provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures occur in 2010. To provide additional focus on internal control reviews, OMB has drafted guidance that indicates the importance of such reviews and encourages auditors to communicate weaknesses to management early in the audit process but does not add requirements for auditors to take these steps. Because OMB is choosing to address this recommendation through the existing audit framework, it has not changed the reporting time frames and therefore does not address our concern that internal controls over Recovery Act programs should be reviewed before significant funding is expended. OMB plans to finalize and issue the guidance by mid-July 2009. In addition, the guidance to be provided by OMB will be limited to those programs selected by the auditor as “major programs” under the current approach for selecting programs for audit, which may not adequately consider Recovery Act program risks.
Finally, if this internal control work is done within the current Single Audit framework and reporting timelines, the auditor evaluation of internal control and related reporting will occur too late—after significant levels of federal expenditures have already occurred. In our April 2009 report, we recommended that the Director of OMB evaluate options for providing relief related to audit requirements for low-risk programs to balance the new audit responsibilities associated with the Recovery Act. While OMB has noted the increased responsibilities falling on those responsible for performing Single Audits, to date it has not issued any proposals and does not have plans to address this recommendation. A recent survey conducted by the staff of the National State Auditors Association (NSAA) highlighted the need for relief to overburdened state audit organizations. Survey participants were asked whether they were experiencing cuts in staffing and to comment on the effects of these cuts on their ability to perform effective audits. As of June 24, 32 state audit organizations that had indicated in an earlier survey that their responsibilities included the Single Audit had responded. Of the 32 respondents, 17 indicated that staff had been cut by 5 percent or more. Eight respondents anticipate that staff will be required to take unpaid leave in fiscal year 2010. OMB officials told us they are considering reducing auditor workload by decreasing the number of risk assessments of smaller federal programs. Auditors conduct these risk assessments as part of the planning process to identify which federal programs will be subject to detailed internal control and compliance testing. GAO believes that this step in itself will not provide sufficient relief to balance the additional audit requirements for Recovery Act programs. OMB officials have expressed reluctance to revise OMB Circular No.
A-133 or to propose revisions to the Single Audit Act to provide auditor relief or additional flexibility that would allow auditors more control over the selection of programs tested for internal control and compliance. They stated that to do so would take considerable time and could not be accomplished in time to have adequate coverage of Recovery Act funds. In addition, federal inspectors general have expressed concern about reducing audit coverage of existing programs. However, without action now, audit coverage of Recovery Act programs will not be sufficient to address Recovery Act risks, and the audit reporting that does occur will come after significant Recovery Act funds have already been expended. Congress is currently considering a bill, H.R. 2182, that could provide some financial relief to auditors lacking the staff capacity necessary to handle the increased audit responsibilities associated with the Recovery Act. H.R. 2182 would amend the Recovery Act to provide for enhanced state and local oversight of activities conducted pursuant to the act. As passed by the House, H.R. 2182 would allow state and local governments to set aside 0.5 percent of Recovery Act funds, in addition to funds already allocated to administrative expenditures, to conduct planning and oversight to prevent and detect waste, fraud, and abuse. The Single Audit reporting deadline is too late to provide audit results in time for the audited entity to take action on deficiencies noted in Recovery Act programs. The Single Audit Act requires that recipients submit their Single Audit reports to the federal government no later than 9 months after the end of the period being audited. As a result, an audited entity may not receive the feedback needed to correct an identified internal control or compliance weakness until the latter part of the subsequent fiscal year.
For example, states with a June 30 fiscal year end have a reporting deadline of March 31, which leaves program management only 3 months to take corrective action on any audit findings before the end of the subsequent fiscal year. For Recovery Act programs, significant expenditures could occur before the audit report is issued. The timing problem is exacerbated by the extensions to the 9-month deadline that are routinely granted by the awarding agencies, consistent with OMB guidance. For example, 13 of the 17 states in our review have a June 30 fiscal year end. We found that 7 of these 13 states requested and received extensions of the March 31, 2009, submission deadline for their fiscal year 2008 reporting packages. Three of the requests for extensions came from auditors, and the remaining requests came from the audited entities. Table 19 below lists the seven states, the extension date requested, and the reason for the extension. The Department of Health and Human Services Office of Inspector General (HHS OIG) is the cognizant agency for most of the states, including all of the states selected for our review under the Recovery Act. According to an HHS OIG official, states contact HHS to request, and provide a reason for, an extension of their report submission; the HHS OIG has had a practice of routinely granting the requested extensions. The HHS OIG noted that it has no means to enforce compliance with the reporting time frames. The program office of the federal agency issuing the federal awards, not the cognizant OIG, has the authority at the federal level to impose sanctions when a state or local government has not complied with the audit requirement. According to an HHS OIG official, beginning in May 2009, the HHS OIG adopted a policy of no longer approving requests for extensions of the due dates for Single Audit reporting package submissions.
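The 9-month reporting window discussed above can be computed directly. The sketch below assumes a month-end fiscal close, so the deadline falls on the last day of the month nine months later; the dates are illustrative:

```python
import calendar
from datetime import date

def single_audit_deadline(fiscal_year_end: date) -> date:
    """Single Audit reports are due no later than 9 months after the
    end of the audited period; for a month-end fiscal close, that is
    the last day of the month nine months later."""
    month = fiscal_year_end.month + 9
    year = fiscal_year_end.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    return date(year, month, calendar.monthrange(year, month)[1])

# A June 30, 2008, fiscal year end yields a March 31, 2009, deadline,
# leaving only 3 months before the June 30, 2009, year end.
deadline = single_audit_deadline(date(2008, 6, 30))
```

With routinely granted extensions on top of this window, the report can arrive even later relative to the subsequent fiscal year.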
OMB officials have stated they plan to eliminate extensions of the reporting package deadline but have not issued any official guidance or memorandums to the agencies, OIGs, or federal award recipients. In order to realize the Single Audit’s full potential as an effective Recovery Act oversight tool, OMB needs to take additional action to focus auditors’ efforts on areas that can provide the most efficient and most timely results. OMB has taken some first steps and plans to issue additional guidance. As federal funding of Recovery Act programs accelerates in the next few months, we are particularly concerned that the Single Audit process may not provide the timely accountability and focus needed to assist recipients in making necessary adjustments to internal controls, so that those controls achieve sufficient strength and capacity to provide assurance that the money is being spent as effectively as possible to meet program objectives. If OMB concludes that it is unable, under the current framework, to adequately address accountability for Recovery Act programs and related risks and to provide for more timely reporting, especially in the area of internal controls, legislative changes to the Single Audit requirements may be necessary. Given that the scope of Single Audit workloads will increase, consideration should be given to determining what funds can be used to support Single Audit efforts related to Recovery Act programs, including whether legislative changes are needed to specifically direct resources to cover incremental audit costs related to Recovery Act programs. Under the Recovery Act, direct recipients of Recovery Act funds, including states and localities, are expected to report quarterly on a number of measures, including the use of funds and an estimate of the number of jobs created and the number of jobs retained.
Estimates of jobs created and jobs retained are part of the “recipient reports” required under section 1512(c) of the Recovery Act and will be submitted by recipients starting in October 2009. In addition to this statutory requirement to report on jobs, the Office of Management and Budget (OMB) has directed federal agencies to collect other performance information from entities that receive funding. To the extent possible, OMB’s guidance also requires agencies to instruct recipients to collect and report performance information that is consistent with the agency’s program performance measures. This is intended to allow an assessment of what OMB describes as the marginal performance impact of Recovery Act requirements. In general, states are adapting information systems, issuing guidance, and beginning to collect data on jobs created and jobs retained, but questions remain about how to count jobs and measure performance under Recovery Act-funded programs. For example, many state education officials told us it has been difficult to plan how they will report the impact of Recovery Act funding until they receive further guidance from OMB or the Department of Education. Education is planning to supplement the guidance OMB provided to help state agencies report the proper data. In particular, Education officials noted that draft OMB guidance on recipient reporting would require some additional Education guidance to clarify issues for recipients of formula grants, such as the IDEA grants. OMB’s latest guidance on recipient reporting addresses some of these concerns. In response to requests for more guidance on the recipient reporting process and required data, OMB, after soliciting responses from an array of stakeholders, issued additional implementing guidance for recipient reporting on June 22, 2009.
As discussed in our April 2009 report and echoed in this report, state and local officials had questions and concerns about the recipient reporting requirements contained in the Recovery Act. For example, officials had expressed some confusion about how to count less-than-full-time jobs and indirect jobs. Over the last several months, OMB has met regularly with state and local officials, federal agencies, and others to gather input on the reporting requirements and implementation guidance. OMB also worked with the Recovery Accountability and Transparency Board to design a nationwide data collection system that will reduce information reporting burdens on recipients by simplifying reporting instructions and providing a user-friendly mechanism for submitting required data; OMB will be testing this system in July. The latest guidance attempts to address these concerns through additional details and clarification of previous guidance. In its June 22 guidance, OMB endeavors to (1) dispel some of the confusion related to reporting on jobs created and retained versus the macroeconomic impact of the Recovery Act, (2) clarify which recipients of Recovery Act funds are required to report under the act, and (3) provide additional detail on how to calculate jobs created and retained. The new guidance reiterates that under the Recovery Act, there are two distinct types of job reports. First, the Council of Economic Advisers (CEA), in consultation with OMB and the Department of the Treasury, is required to submit quarterly reports to Congress that detail the impact of programs funded through the Recovery Act on employment, economic growth, and other key economic indicators. To fulfill this mandate, CEA has developed macroeconomic methodologies to estimate employment effects at both the national and state levels. These macro-level employment estimates will attempt to capture the full employment impact of the Recovery Act.
OMB and federal agencies will coordinate with CEA on these quarterly reports and other questions regarding macro-level jobs estimates. The second type of job report is part of the “recipient reports” required under section 1512 of the Recovery Act. Specifically, section 1512(c)(3)(D) requires recipients of Recovery Act funds to report an estimate of the direct jobs created or retained by the Recovery Act project or activity. These reporting requirements apply only to nonfederal recipients of funding, including all entities receiving Recovery Act funds directly from the federal government, such as state and local governments, private companies, educational institutions, nonprofits, and other private organizations. However, the recipient reporting requirement covers only a defined subset of the Recovery Act’s funding. OMB’s guidance, consistent with the statutory language in the Recovery Act, states that these reporting requirements apply to recipients who receive funding through discretionary appropriations, not to recipients receiving funds through entitlement programs, such as Medicaid, or tax programs. Recipient reporting also does not apply to individuals. These reports are to provide information on direct job creation and retention, and OMB expects they will be useful in the overall estimation and evaluation of the employment effects of the Recovery Act, such as the employment reporting undertaken by CEA. The OMB guidance also clarifies that recipients of Recovery Act funds are required to report only on jobs directly created or retained by Recovery Act-funded projects, activities, and contracts. Recipients are not expected to report on the employment impact on materials suppliers (“indirect” jobs) or on the local community (“induced” jobs). According to OMB, recipients are to report only direct jobs because they may not have sufficient insight into, or consistent methodologies for, reporting indirect or induced jobs.
OMB notes that this broader view of the overall employment impact of the Recovery Act will be covered in the estimates generated by CEA using a macroeconomic approach. According to CEA, it will consider the direct jobs created and retained reported by recipients to supplement its analysis. The new OMB guidance also provides additional instruction on how to estimate the number of jobs created and retained by Recovery Act funding. The guidance explains that the number of jobs created or retained should be expressed as “full-time equivalents” (FTE), calculated as the total hours worked in jobs funded by the Recovery Act divided by the number of hours in a full-time schedule, as defined by the recipient. This calculation is designed to increase consistency in reported data by converting part-time and temporary jobs into FTEs, avoiding the overstatement that could result if each part-time or nonpermanent position were counted as a full job. The new guidance restates the definitions of jobs created and jobs retained from earlier guidance. According to OMB guidance, a job created is a new position created and filled, or an existing unfilled position that is filled, as a result of the Recovery Act; a job retained is an existing position that would have been eliminated were it not for Recovery Act funding. A job cannot be counted as both created and retained, and only compensated employment in the United States should be counted. OMB’s guidance for reporting on job creation aims to shed light on the immediate uses of Recovery Act funding; however, reports from recipients of Recovery Act funds must be interpreted with care. For example, even accurate, consistent reports will reflect only a portion of the likely impact of the Recovery Act on national employment, since Recovery Act resources are also made available directly through tax cuts and benefit payments.
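The FTE calculation OMB describes (total hours worked in Recovery Act-funded jobs divided by the hours in a full-time schedule, as defined by the recipient) can be sketched as follows; the job records and the 520-hour quarterly schedule are illustrative assumptions, since recipients define their own full-time schedules:

```python
def recovery_act_fte(hours_worked, full_time_hours):
    """Convert hours worked in Recovery Act-funded jobs into
    full-time equivalents (FTE), per the June 22, 2009, guidance:
    FTE = total hours worked / hours in a full-time schedule."""
    return sum(hours_worked) / full_time_hours

# Illustrative quarter: one full-time job (520 hours) and two
# half-time jobs (260 hours each) against a 520-hour schedule.
fte = recovery_act_fte([520, 260, 260], full_time_hours=520)
# fte == 2.0: the two half-time positions count as one FTE,
# rather than being overstated as two whole jobs.
```

This is the consistency mechanism the guidance is after: part-time and temporary positions are prorated instead of each counting as a full job.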
Some of the questions and concerns raised by state and local officials about the recipient reporting requirements centered on the reporting logistics and information technology requirements. For example, California and District of Columbia officials said they were awaiting final guidance on the data standards before finalizing their reporting approaches. Officials from several states said they modified, or are assessing whether they can modify, existing reporting systems for Recovery Act reporting. The new OMB guidance should answer many of these questions as it describes in detail the reporting model to be used for recipient reporting. The information reported by all prime recipients (and subrecipients to which the prime recipient has delegated reporting responsibility) will be submitted through www.federalreporting.gov, an online Web portal designed to collect all Recovery Act recipient reports. All recipient reports will be made available on www.recovery.gov and, as appropriate, on individual federal agency recovery Web sites. The guidance also provides documentation of the data model for recipient reporting that includes a reporting template, a data dictionary, and an Extensible Markup Language (XML) schema for electronic data submissions. The reporting template is a simple spreadsheet table that enables manual data entry and collection of recipient reporting information in a familiar spreadsheet format. The data dictionary describes the data elements specifically required for recipient reporting under the Recovery Act. Our initial assessment of the technical specifications and framework of the recipient reporting model suggests that this is a reasonable approach. The pilot testing scheduled for July will provide additional information about potential technical and reporting challenges. 
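As one illustration of the kind of electronic submission the data model supports, the sketch below builds a minimal XML record using Python's standard library. The element names and values here are hypothetical placeholders for illustration only; the actual element names and structure are defined by the data dictionary and XML schema published with OMB's guidance:

```python
import xml.etree.ElementTree as ET

# Hypothetical recipient-report record; real field names come from
# the data dictionary and XML schema in OMB's June 22, 2009, guidance.
report = ET.Element("RecipientReport")
ET.SubElement(report, "AwardNumber").text = "EXAMPLE-0001"
ET.SubElement(report, "TotalFederalAmountReceived").text = "250000.00"
ET.SubElement(report, "NumberOfJobs").text = "3.5"  # reported as FTEs

# Serialize for electronic submission (e.g., to www.federalreporting.gov).
xml_bytes = ET.tostring(report, encoding="utf-8")
```

The spreadsheet template described in the guidance carries the same data elements in tabular form for recipients who enter data manually rather than submitting XML.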
It is likely that there will be challenges associated with data quality, including timeliness and completeness of submissions as well as the technical ability of some recipients to develop solutions (including processes and procedures) for capturing, tracking, and submitting the required data on a consistent basis. We will continue to monitor and assess OMB’s recipient reporting model and July pilot test. OMB guidance described recipient reporting requirements under the Recovery Act’s section 1512 as the minimum that must be collected, leaving it to federal agencies to determine additional information that would be required for oversight of individual programs. OMB has instructed federal agencies to develop formal documented plans for how Recovery Act funds will be used and managed that are consistent with sound program management principles. According to the guidance, agencies must describe how they are coordinating broad Recovery Act efforts toward successful implementation and monitoring of performance and results in a comprehensive “agency plan.” Officials from some states indicated they would use existing program indicators and, in some cases, build on these indicators to measure performance. Officials also expressed a desire for additional guidance from federal agencies on what performance measures to use. As instructed by OMB, each Recovery Act federal agency plan must describe the program’s Recovery Act objectives and relationships with corresponding goals and objectives through ongoing agency programs and activities. OMB states that expected public benefits should demonstrate cost-effectiveness and be clearly stated in concise, clear, and plain language targeted to an audience with no in-depth knowledge of the program. 
Furthermore, OMB guidance states that, to the extent possible, Recovery Act goals should be expressed in the same terms as programs’ goals in federal departmental strategic plans, and agencies should instruct recipients to collect and report performance information to the extent possible as part of their quarterly submissions. The objective is to use existing measures to allow the public to see the marginal performance impact of Recovery Act investments. Some state program officials have said that they do plan to use existing program performance measures. For example, public housing agencies told us they regularly track the budget control, timeliness, and quality of work of projects they fund and that they plan to continue using these measures with Recovery Act-funded projects. Some other performance measures public housing agencies typically track include tenant satisfaction, occupancy rates, crime rates, and employment among residents. Some states are issuing guidance and modifying their information systems to capture and report on jobs created and retained, but many state and local officials expressed concern about the lack of clear guidance on what other program or impact measures are required for evaluating the impact of Recovery Act funding. Some federal agencies have issued such additional guidance, but others have not. For example, the Department of Transportation (DOT) through the Federal Highway Administration (FHWA) has provided guidance specifying the data to be reported when complying with the requirements in section 1201(c) of division A of the Recovery Act, which stipulates, among other requirements, that each highway infrastructure grant recipient submit periodic reports on the use of the funds. For example, California state transportation officials said that contractors will be required to report on the number of workers and payroll amounts on a monthly basis to the California Department of Transportation. 
The state office will then provide the data to the FHWA California division office, which will forward it to FHWA headquarters. DOT said that grantees will not be expected to estimate employment data other than the direct on-site jobs and noted that the reporting to FHWA is in addition to the recipient reporting to OMB. DOT economists, in coordination with CEA, plan to compute the number of indirect and induced jobs using the direct on-site job data provided by the state transportation departments. OMB guidance also states that federal agencies must have a process in place for senior managers to regularly review the progress and performance of major programs. To achieve this objective, OMB has encouraged federal agencies to leverage their existing Senior Management Councils to oversee Recovery Act performance. OMB states that the councils should review Recovery Act reporting and performance across each agency; establish and oversee development and implementation of agency guidance to identify and mitigate risk; and ensure the correction of weaknesses relating to the Recovery Act. According to OMB, the councils should analyze findings and results from quarterly or monthly performance reviews, coordinated by the agency’s Performance Improvement Officer, to help determine the highest-risk program areas and ensure corrective actions are implemented. OMB’s new guidance on the implementation of recipient reporting should help answer many of the questions and concerns raised by state and local program officials. However, a number of the issues were covered in previous guidance, and some concerns remain. For example, the counting of part-time employment was covered to some extent in previous guidance but continued to be raised by officials in some states. Overall, state and local officials were clearly aware of the requirements to report on jobs created and jobs retained, but questions persist on how to do so.
For example, public housing agencies reported they have not received guidance from HUD on how to measure jobs created and retained or other performance measures. Most public housing agency officials told us they would like guidance on how to accomplish these objectives. Similarly, Education officials noted that draft OMB guidance on recipient reporting would require some additional Education guidance to clarify issues for recipients of formula grants, such as special education IDEA grants. In sum, federal agencies may need to do a better job of communicating the OMB guidance in a timely manner to their state counterparts and, as appropriate, issue clarifying guidance on required performance measurement. In particular, while any additional guidance for recipients must accord with OMB guidance, OMB could work with federal agencies to provide program-specific examples of how to count jobs created and jobs retained. This would be especially helpful for programs, such as public housing and special education grants, that have not previously tracked and reported such metrics. Other channels to educate state and local program officials on the reporting requirements could be considered, including Web- or telephone-based information sessions or other forums. Since enactment of the Recovery Act in February 2009, OMB has issued three sets of guidance—on February 18, April 3, and, most recently, June 22, 2009—to announce spending and performance reporting requirements and to assist prime recipients and subrecipients of federal Recovery Act funds in complying with these requirements. OMB has reached out to Congress; federal, state, and local government officials; grant and contract recipients; and the accountability community to get a broad perspective on what is needed to meet the high expectations set by Congress and the administration.
Further, according to OMB’s June guidance, OMB has worked with the Recovery Accountability and Transparency Board to deploy a nationwide data collection system at www.federalreporting.gov. As work proceeds on the implementation of the Recovery Act, OMB and the cognizant federal agencies have opportunities to build on these early efforts by continuing to address several important issues. These issues fall broadly into three categories, which have been revised from our last report to better reflect events since April: (1) accountability and transparency requirements, (2) reporting on impact, and (3) communications and guidance. Recipients of Recovery Act funding face a number of implementation challenges in this area. The act includes new programs and significant increases in funds outside normal cycles and processes. There is an expectation that many programs and projects will be delivered faster so as to inject funds into the economy, and the administration has indicated its intent to assure transparency and accountability over the use of Recovery Act funds. Issues regarding the Single Audit process and administrative support and oversight are important. Single Audit: The Single Audit process needs adjustments to provide appropriate risk-based focus and the necessary level of accountability over Recovery Act programs in a timely manner. In our April 2009 report, we reported that the guidance and criteria in OMB Circular No. A-133 do not adequately address the substantial added risks posed by the new Recovery Act funding. Such risks may result from (1) new government programs, (2) the sudden increase in funds or programs that are new to the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. With some adjustment, the Single Audit could be an effective oversight tool for Recovery Act programs because it can address risks associated with all three of these factors.
April report recommendations: Our April report included recommendations that OMB adjust the current audit process to focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures in 2010; and evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. Status of April report recommendations: OMB has taken some actions and planned others to help focus the program selection risk assessment on Recovery Act programs and to provide guidance on auditors’ reviews of internal controls for those programs. However, we remain concerned that OMB’s planned actions would not achieve the level of accountability needed to effectively respond to Recovery Act risks and would not provide for timely reporting on internal controls for Recovery Act programs. Therefore, we are re-emphasizing our previous recommendations in this area. To help auditors with Single Audit responsibilities meet the increased demands imposed on them by Recovery Act funding, we recommend that the Director of OMB take the following four actions: Develop requirements for reporting on internal controls during 2009, before significant Recovery Act expenditures occur, as well as for ongoing reporting after the initial report. Provide more focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with high risk have audit coverage in the area of internal controls and compliance. Evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act.
To the extent that options for auditor relief are not provided, develop mechanisms to help fund the additional Single Audit costs and efforts for auditing Recovery Act programs. Administrative Support and Oversight: States have been concerned about the burden imposed by new requirements, increased accounting and management workloads, and strains on information systems and staff capacity at a time when they are under severe budgetary stress. April report recommendation: In our April report, we recommended that the Director of OMB clarify what Recovery Act funds can be used to support state efforts to ensure accountability and oversight, especially in light of enhanced oversight and coordination requirements. Status of April report recommendation: On May 11, 2009, OMB released a memorandum clarifying how state grantees could recover administrative costs of Recovery Act activities. Because a significant portion of Recovery Act expenditures will be in the form of federal grants and awards, the Single Audit process could be used as a key accountability tool over these funds. However, the Single Audit Act, enacted in 1984 and most recently amended in 1996, did not contemplate the risks associated with the current environment, in which large amounts of federal awards are being expended quickly through new, greatly expanded, and existing programs. The current Single Audit process is largely driven by the amount of federal funds expended by a recipient in order to determine which federal programs are subject to compliance and internal control testing. Not only does this model potentially miss smaller programs with high risk, but it also relies on audit reporting 9 months after the end of a grantee’s fiscal year—far too late to preemptively correct deficiencies and weaknesses before significant expenditures of federal funds occur.
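The expenditure-driven program selection described above can be illustrated with a simplified sketch. The $300,000 / 3 percent threshold below reflects the historical Circular A-133 "Type A" test for recipients expending up to $100 million in federal awards; it is a simplification for illustration and omits the circular's full risk-based criteria, and the program names and amounts are hypothetical:

```python
def type_a_threshold(total_federal_expenditures):
    # Simplified historical A-133 rule for recipients expending up to
    # $100 million: Type A programs are those at or above the larger
    # of $300,000 or 3 percent of total federal awards expended.
    return max(300_000.0, 0.03 * total_federal_expenditures)

def programs_subject_to_testing(programs):
    """Select programs whose expenditures meet the Type A threshold.
    A purely dollar-driven test like this can miss smaller,
    higher-risk Recovery Act programs."""
    threshold = type_a_threshold(sum(programs.values()))
    return [name for name, spent in programs.items() if spent >= threshold]

programs = {"Highway": 12_000_000, "IDEA": 900_000, "Small/high-risk": 250_000}
major = programs_subject_to_testing(programs)
# The "Small/high-risk" program falls below the dollar threshold
# and escapes detailed testing despite its risk.
```

This is the mechanical reason a dollar-driven model can leave small, high-risk Recovery Act programs without internal control and compliance coverage.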
Congress is considering a legislative proposal in this area and could address the following issues: To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. To the extent that additional audit coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. Under the Recovery Act, responsibility for reporting on jobs created and retained falls to nonfederal recipients of Recovery Act funds. As such, states and localities have a critical role in identifying the degree to which Recovery Act goals are achieved. Performance reporting is broader than the jobs reporting required under section 1512 of the Recovery Act. OMB guidance requires that agencies collect and report performance information consistent with the agency’s program performance measures. As described earlier in this report, some agencies have imposed additional performance measures on projects or activities funded through the Recovery Act. 
April report recommendation: In our April report, we recommended that, given questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, the Director of OMB continue OMB’s efforts to identify appropriate methodologies that can be used to (1) assess jobs created and retained from projects funded by the Recovery Act; (2) determine the impact of Recovery Act spending when job creation is indirect; and (3) identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future, and consider whether the approaches taken to estimate jobs created and jobs retained in these cases can be replicated or adapted to other programs. Status of April report recommendation: OMB has been meeting on a regular basis with state and local officials, federal agencies, and others to gather input on reporting requirements and implementation guidance and has worked with the Recovery Accountability and Transparency Board on a nationwide data collection system. On June 22, OMB issued additional implementation guidance on recipient reporting of jobs created and retained. This guidance is responsive to much of what we said in our April report. It states that there are two different types of jobs reports under the Recovery Act and clarifies that recipient reports are to cover only direct jobs created or retained. “Indirect” jobs (employment impact on suppliers) and “induced” jobs (employment impact on communities) will be covered in the Council of Economic Advisers (CEA) quarterly reports on employment, economic growth, and other key economic indicators.
Consistent with the statutory language of the act, OMB’s guidance states that these recipient reporting requirements apply to recipients who receive funding through discretionary appropriations, not to those receiving funds through either entitlement or tax programs. These reporting requirements also do not apply to individuals. The guidance clarifies that the prime recipient, not the subrecipient, is responsible for reporting section 1512 information on jobs created or retained to the federal government. The June 2009 guidance also provides detailed instructions on how to calculate and report jobs as full-time equivalents (FTE). It also describes in detail the data model and reporting system to be used for the required recipient reporting on jobs. The guidance provided for reporting job creation aims to shed light on the immediate uses of Recovery Act funding and is reasonable in that context. It will be important, however, to interpret the recipient reports with care. As noted in the guidance, these reports are only one of the two distinct types of reports seeking to describe the jobs impact of the Recovery Act. CEA’s quarterly reports will cover the impact on employment, economic growth, and other key economic indicators. Further, the recipient reports will not reflect the impact of resources made available through tax provisions or entitlement programs. Recipients are required to report no later than 10 days after the end of the calendar quarter. The first of these reports is due on October 10, 2009. After prime recipients and federal agencies perform data quality checks, detailed recipient reports are to be made available to the public no later than 30 days after the end of the quarter. Initial summary statistics will be available on www.recovery.gov. The guidance explicitly does not mandate a specific methodology for conducting quality reviews.
Rather, federal agencies are directed to coordinate the application of definitions of material omission and significant reporting error to “ensure consistency” in the conduct of data quality reviews. Although recipients and federal agency reviewers are required to perform data quality checks, none are required to certify or approve data for publication. It is unclear how issues identified during data quality reviews will be resolved and how frequently such reviews will identify data quality problems. GAO will continue to monitor data quality and the recipient reporting requirements. Our recommendations: To increase consistency in recipient reporting of jobs created and retained, the Director of OMB should work with federal agencies to have them provide program-specific examples of the application of OMB’s guidance on recipient reporting of jobs created and retained. This would be especially helpful for programs that have not previously tracked and reported such metrics. Because performance reporting is broader than the jobs reporting required by section 1512, the Director of OMB should also work with federal agencies—perhaps through the Senior Management Councils—to clarify which new or existing program performance measures, in addition to jobs created and retained, recipients should collect and report in order to demonstrate the impact of Recovery Act funding. In addition to providing these program-specific examples of guidance, the Director of OMB should work with federal agencies to use other channels to educate state and local program officials on reporting requirements, such as Web- or telephone-based information sessions or other forums. Financial funding and program guidance: State officials expressed concerns regarding communication on the release of Recovery Act funds and their inability to determine when to expect federal agency program guidance.
Once funds are released there is no easily accessible, real-time procedure for ensuring that appropriate officials in states and localities are notified. Because half of the estimated spending programs in the Recovery Act will be administered by nonfederal entities, states wish to be notified when funds are made available to them for their use as well as when funding is received by other recipients within their state that are not state agencies. OMB does not have a master timeline for issuing federal agency guidance. OMB’s preferred approach is to issue guidance incrementally. This approach potentially produces a more timely response and allows for mid-course corrections; however, this approach also creates uncertainty among state and local recipients responsible for implementing programs. We continue to believe that OMB can strike a better balance between developing timely and responsive guidance and providing a long range time line that gives some structure to states’ and localities’ planning efforts. We appreciate that the timeline will almost certainly be modified over time as new issues emerge that require additional guidance and clarification. April report recommendation: In our April report, we recommended that to foster timely and efficient communications, the Director of OMB should develop an approach that provides dependable notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information—and (3) all nonfederal recipients on planned releases of federal agency guidance and, if known, whether additional guidance or modifications are recommended. Status of April recommendation: OMB has made important progress in the type and level of information provided in its reports on Recovery.gov. Nonetheless, OMB has additional opportunities to more fully address the recommendations we made in April.
By providing a standard format across disparate programs, OMB has improved its Funding Notification reports, making it easier for the public to track when funds become available. Agencies update their Funding Notification reports for each program individually whenever they make funds available. Both reports are available on www.recovery.gov. OMB has taken the additional step of disaggregating financial information, i.e., federal obligations and outlays by Recovery Act programs and by state, in its Weekly Financial Activity Report. Our recommendation: The Director of OMB should continue to develop and implement an approach that provides easily accessible, real-time notification to (1) prime recipients in states and localities when funds are made available for their use, and (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information. In addition, OMB should provide a long range time line for the release of federal guidance for the benefit of nonfederal recipients responsible for implementing Recovery Act programs. Recipient financial tracking and reporting guidance: In addition to employment-related reporting, OMB’s guidance calls for the tracking of funds by the prime recipient, recipient vendors, and subrecipients receiving payments. OMB’s guidance also allows that “prime recipients may delegate certain reporting requirements to subrecipients.” Either the prime recipient or subrecipient must report the DUNS number (or an acceptable alternative) for any vendor or subrecipient receiving payments greater than $25,000. In addition, the prime recipient must report what was purchased and the amount, and the total number and amount of sub-awards of less than $25,000. By reporting the DUNS number, OMB guidance provides a way to identify subrecipients by project, but this alone does not ensure data quality.
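The reporting split described in the guidance can be sketched in a few lines. This is a hypothetical illustration, not OMB's reporting system: payments over $25,000 are itemized with a DUNS number, while smaller sub-awards are rolled up into an aggregate count and amount. The field names, function name, and the handling of payments of exactly $25,000 are assumptions made for this example.

```python
# Hypothetical sketch of the recipient-reporting split described in OMB's
# guidance: payments over $25,000 to a vendor or subrecipient are itemized
# with a DUNS number (or acceptable alternative), while sub-awards under
# $25,000 are reported only as an aggregate count and total amount.
# Field names and the treatment of exactly $25,000 are assumptions.

THRESHOLD = 25_000

def split_for_reporting(payments):
    """payments: list of dicts with 'duns' and 'amount' keys."""
    itemized = [p for p in payments if p["amount"] > THRESHOLD]
    small = [p for p in payments if p["amount"] <= THRESHOLD]
    aggregate = {"count": len(small), "total": sum(p["amount"] for p in small)}
    return itemized, aggregate

payments = [
    {"duns": "123456789", "amount": 40_000},  # itemized with DUNS number
    {"duns": "987654321", "amount": 10_000},  # rolled into the aggregate
    {"duns": "555555555", "amount": 5_000},   # rolled into the aggregate
]
itemized, aggregate = split_for_reporting(payments)
print(len(itemized), aggregate["count"], aggregate["total"])  # prints 1 2 15000
```

As the surrounding text notes, an identifier-based split of this kind makes subrecipients traceable by project but says nothing about whether the amounts themselves are complete or accurate, which is why quality control and reconciliation remain open questions.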
The approach to tracking funds is generally consistent with the Federal Funding Accountability and Transparency Act (FFATA). Like the Recovery Act, FFATA requires a publicly available Web site—www.USAspending.gov—to report financial information about entities awarded federal funds. Yet significant questions have been raised about the reliability of the data on USAspending.gov, primarily because what is reported by the prime recipients depends on the unknown data quality and reporting capabilities of their subrecipients. For example, earlier this year, more than 2 years after passage of FFATA, the Congressional Research Service (CRS) questioned the reliability of the data on USAspending.gov. We share CRS’s concerns about the data on USAspending.gov, including incompleteness, inaccuracy, and other data quality problems. More broadly, these concerns also pertain to recipient financial reporting in accordance with the Recovery Act and its federal reporting vehicle, www.FederalReporting.gov, currently under development. Our recommendation: To strengthen the effort to track the use of funds, the Director of OMB should (1) clarify what constitutes appropriate quality control and reconciliation by prime recipients, especially for subrecipient data, and (2) specify who should best provide formal certification and approval of the data reported. Agency-specific guidance: DOT and FHWA have yet to provide clear guidance regarding how states are to implement the Recovery Act requirement that economically distressed areas (EDA) are to receive priority in the selection of highway projects for funding. We found substantial variation both in how states identified areas in economic distress and in how they prioritized project selection for these areas. As a result, it is not clear whether areas most in need are receiving priority in the selection of highway infrastructure projects, as Congress intended.
While it is true that states have discretion in selecting and prioritizing projects, it is also important that this goal of the Recovery Act be met. Our recommendation: To ensure states meet Congress’s direction to give areas with the greatest need priority in project selection, the Secretary of Transportation should develop clear guidance on identifying and giving priority to economically distressed areas that is in accordance with the requirements of the Recovery Act and the Public Works and Economic Development Act of 1965, as amended, and more consistent procedures for the Federal Highway Administration to use in reviewing and approving states’ criteria. We received comments on the recommendations in a draft of this report from the Office of Management and Budget (OMB) and the U.S. Department of Transportation (DOT). OMB concurred with the overall objectives of GAO’s recommendations made to OMB in this report, offered clarifications regarding the area of Single Audits, and did not concur with some of GAO’s conclusions related to communications. What follows summarizes OMB’s comments and our responses. OMB also noted that it believes the new requirements for more rigorous internal control reviews will yield important short-term benefits and that the steps taken by state and local recipients to immediately initiate controls will withstand increased scrutiny later in the process. OMB commented that it has already taken and is planning actions to focus program selection risk assessment on Recovery Act programs and to increase the rigor of state/local internal controls on Recovery Act activities. However, our report points out that OMB has not yet completed critical guidance in these areas.
Unless OMB plans to change the risk assessment process conducted for federal programs under Circular A-133, smaller but significantly risky programs under the Recovery Act may not receive adequate attention and scrutiny under the Single Audit process. OMB acknowledged that acceleration of internal control reviews could cause more work for state auditors, for which OMB and Congress should explore potential options for relief. This is consistent with the recommendations we make in this report. OMB also noted that our draft report did not offer a specific recommendation for achieving acceleration of internal control reporting. Because there are various ways to achieve the objective of early reporting on internal controls, we initially chose not to prescribe a specific method. For instance, OMB could require specific internal control certifications from federal award recipients meeting certain criteria as of a specified date, such as December 31, 2009, before significant Recovery Act expenditures occur. Those certifications could then be reviewed by the auditor as part of the regular single audit process. Alternatively, or in addition, OMB could require that the internal control portion of the single audit be completed early, with a report submitted 60 days after the recipient’s year end. We look forward to continuing our dialogue with OMB on the various options available to achieve the objective of early reporting on internal controls. We will also continue to review OMB’s guidance in the area of single audits as such guidance is being developed. OMB has made important progress relative to some communications. In particular, we agree with OMB’s statements that it requires agencies to post guidance and funding information to agency Recovery Act websites, disseminates guidance broadly, and seeks out and responds to stakeholder input.
In addition, OMB is planning a series of interactive forums to offer training and information to Recovery Act recipients on the process and mechanics of recipient reporting; these forums could also serve as a vehicle for additional communication. Moving forward and building on the progress it has made, OMB can take the following additional steps related to funding notification and guidance. First, OMB should require direct notification to key state officials when funds become available within a state. OMB has improved Funding Notification reports by providing a standard format across disparate programs, making it easier for the public to track when funds become available. However, this does not provide an easily accessible, real-time notification of when funds are available. OMB recognized the shared responsibilities of federal agencies and states in its April 3, 2009, guidance when it noted that federal agencies should expect states to assign a responsible office to oversee data collection to ensure quality, completeness, and timeliness of data submissions for recipient reporting. In return, states have expressed a need to know when funds flow into the state regardless of which level of government or governmental entity within the state receives the funding so that they can meet the accountability objectives of the Recovery Act. We continue to recommend more direct notification to (1) prime recipients in states and localities when funds are made available for their use, and (2) states, where the state is not the primary recipient of funds but has a statewide interest in this information. Second, OMB should provide a long range time line for the release of federal guidance. In an attempt to be responsive to emerging issues and questions from the recipient community, OMB’s preferred approach is to issue guidance incrementally.
This approach potentially produces a more timely response and allows for mid-course corrections; however, this approach also creates uncertainty among state and local recipients. State and local officials expressed concerns that this incremental approach hinders their efforts to plan and administer Recovery Act programs. As a result, we continue to believe OMB can strike a better balance between developing timely and responsive guidance and providing some degree of a longer range time line so that states and localities can better anticipate which programs will be affected and when new guidance is likely to be issued. OMB’s consideration of a master schedule and its acknowledgement of the extraordinary proliferation of program guidance in response to Recovery Act requirements seem to support a more structured approach. We appreciate that a longer range time line would need to be flexible so that OMB could also continue to issue guidance and clarifications in a timely manner as new issues and questions emerge. OMB also offered suggestions for several technical clarifications which we have made when appropriate. DOT generally agreed to consider the recommendation that it develop clear guidance on identifying and giving priority to economically distressed areas and more consistent procedures for reviewing and approving states’ criteria. As discussed in the highways section of this report, in commenting on a draft of this report, DOT agreed that states must give priority to projects located in economically distressed areas (EDAs), but said that states must balance all the Recovery Act project selection criteria when selecting projects including giving preference to activities that can be started and completed expeditiously, using funds in a manner that maximizes job creation and economic benefit, and other factors. 
While we agree with DOT that there is no absolute primacy of EDA projects in the sense that they must always be started first, the specific directives in the act that apply to highway infrastructure are that priority is to be given to projects that can be completed within 3 years and that are located in EDAs. DOT also stated that the basic approach used by selected states to apply alternative criteria is consistent with the Public Works and Economic Development Act and its implementing regulations on EDAs because it makes use of flexibilities provided by the Public Works Act to more accurately reflect changing economic conditions. However, the result of DOT’s interpretation would be to allow states to prioritize projects based on criteria that are not mentioned in the highway infrastructure investment portion of the Recovery Act or the Public Works Act without the involvement of the Secretary or Department of Commerce. We plan to continue to monitor states’ implementation of the EDA requirements and interagency coordination at the federal level in future reports. We are sending copies of this report to the Office of Management and Budget and the Department of Transportation, as well as sections of the report to officials of the 16 states and the District covered in our review. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-5500. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix describes our objectives, scope, and methodology (OSM) for this second report on our bimonthly reviews on the Recovery Act.
A detailed description of the criteria used to select the core group of 16 states and the District of Columbia (District) and programs we reviewed is found in appendix I of our April 2009 Recovery Act bimonthly report. The Recovery Act specifies several roles for GAO, including conducting bimonthly reviews of selected states’ and localities’ use of funds made available under the act. As a result, our objectives for this report were to assess (1) selected states’ and localities’ uses of and planning for Recovery Act funds, (2) the approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) states’ plans to evaluate the impact of the Recovery Act funds they have received to date. Our teams visited the 16 selected states, the District, and a non-probability sample of 178 localities during May and June 2009. As described in our previous Recovery Act report’s OSM, our teams met again with a variety of state and local officials from executive-level and program offices. During discussions with state and local officials, teams used a series of program review and semistructured interview guides that addressed state plans for management, tracking, and reporting of Recovery Act funds and activities. We also reviewed state constitutions, statutes, legislative proposals, and other state legal materials for this report. Where attributed, we relied on state officials and other state sources for description and interpretation of state legal materials. Appendix II details the states and localities visited by GAO. Criteria used to select localities within our selected states follow. 
Using criteria described in our last bimonthly report, we selected the following streams of Recovery Act funding flowing to states and localities for review during this report: increased Medicaid Federal Medical Assistance Percentage (FMAP) grant awards; the Federal-Aid Highway Surface Transportation Program; the State Fiscal Stabilization Fund (SFSF); Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA); Parts B and C of the Individuals with Disabilities Education Act (IDEA); the Workforce Investment Act (WIA) Youth program; the Public Housing Capital Fund; the Edward Byrne Memorial Justice Assistance Grant (JAG) Program; and the Weatherization Assistance Program. Together, these nine programs are estimated to account for approximately 87 percent of federal Recovery Act outlays to states and localities in fiscal year 2009. We also reviewed how Recovery Act funds are being used by states to stabilize their budgets. In addition, we analyzed www.recovery.gov data on federal spending. For the FMAP grant awards, we again relied on our web-based inquiry, asking the 16 states and the District to update information they had previously provided to us on enrollment, expenditures, and changes to their Medicaid programs and to report their plans to use state funds made available as a result of the increased FMAP. We also reviewed states’ Single Audit results for 2007 and, where available, for 2008, to identify material weaknesses related to the Medicaid programs in the 16 states and the District. In interviews with Medicaid officials from all the sample states and the District, we obtained additional information regarding states’ efforts to comply with the provisions of the Recovery Act, as well as states’ responses to material weaknesses identified in their Single Audits.
We spoke with individuals from the Centers for Medicare & Medicaid Services (CMS) regarding their oversight and guidance to states, their FMAP grant awards, and funds drawn down by states. For highway infrastructure investment, we reviewed status reports and guidance to the states and discussed these with the U.S. Department of Transportation (DOT) and Federal Highway Administration (FHWA) officials. We obtained data from FHWA on obligations and reimbursements for the Recovery Act’s highway infrastructure funds nationally and for each of our selected states. We selected two projects in every state that were furthest along—for example, projects under construction or out for bid. In selecting projects, we attempted to select a mix of projects, including projects that were in and outside of economically distressed areas; projects administered by the state and locally administered projects; projects in urban and rural areas; and projects requiring various amounts of funding—both large and small. To obtain information on the impact certain requirements associated with highway funding were having on states, we selected three states—New Jersey, Arizona, and Mississippi—because we did not include these states in the scope of our previous report on this issue and because they have varying environmental planning and labor environments. For example, according to the Council on Environmental Quality, New Jersey has a state environmental planning law similar to the National Environmental Policy Act (NEPA), while Arizona and Mississippi do not, and, according to the Bureau of Labor Statistics, in 2008, union membership in New Jersey was 18.3 percent, while 8.8 percent of Arizona and 5.3 percent of Mississippi workers were union members. To understand how the U.S. Department of Education is implementing the SFSF, ESEA Title I, and IDEA under the Recovery Act, we reviewed relevant laws, guidance, and communications to the states and interviewed Education officials. 
Our review of related documents and interviews with federal agency officials focused on determining and clarifying how states, school districts, and public institutions of higher education would be expected to implement various provisions of the SFSF. Also, to understand the baseline data being used to demonstrate states’ status related to the assurances states must make about education reform in their SFSF applications, we interviewed Education officials about the data being used and officials at organizations such as Achieve and the Data Quality Campaign, which develop and assess the data. We visited at least two school districts in each state covered by our review to learn the districts’ plans for Recovery Act funds received for SFSF, ESEA Title I, and IDEA. For our visits to school districts, in each state, we selected from school districts that were among the top 10 recipients of Recovery Act funds for the ESEA Title I program and, when possible, included school districts with a high number of schools in improvement and school districts in locales other than large cities. For our visits to public institutions of higher education (IHE), we visited IHEs in Ohio and North Carolina and the six states—California, Florida, Georgia, Illinois, Mississippi, and New York—that had received approval of their applications for State Fiscal Stabilization Funds from Education by the time of our initial site visits in May 2009. For each state, we selected among the largest, in terms of enrollment, public IHEs in the state that would be receiving SFSF funds, including universities and community colleges. In 3 states, we also visited some public historically black colleges and universities. We reviewed the Recovery Act-funded WIA Youth program in 13 of our 16 states (all except Arizona, Colorado, and Iowa) and the District. We focused on state and local efforts to provide summer youth employment activities. 
To learn about the status of implementation, the use and oversight of funds, and the challenges faced, we interviewed workforce development officials in all 13 states and at least two local areas in each state—for a total of 34 local areas—and the District. We also reviewed relevant documents obtained from state and local officials. In addition, we obtained and analyzed data from the Department of Labor on the amount of Recovery Act WIA Youth funds allotted to, and drawn down by, states, and reviewed Labor’s guidance to states and local areas on Recovery Act funds. For Public Housing, we obtained data from HUD’s Electronic Line of Credit and Control System on the amount of Recovery Act funds that have been obligated and/or drawn down by each housing agency in the country, as of June 20, 2009. To determine how housing agencies were using or planning to use these funds, we selected a sample of 47 agencies in our sample of 16 states and the District. At the selected agencies, we interviewed housing agency officials and conducted site visits of ongoing or planned Recovery Act projects. We selected these locations to obtain a mix of large, medium, and small housing agencies; housing agencies designated as troubled performers by HUD; those to which HUD allocated significant amounts of Recovery Act funding; and housing agencies that had drawn down funds at the time of our selection. We also interviewed HUD headquarters officials in the District to understand their procedures for monitoring housing agency use of Recovery Act funds. For our review of the JAG program, we reviewed relevant laws, federal guidance, and states’ grant applications and award letters; interviewed officials with the Department of Justice’s Office of Justice Programs and Bureau of Justice Assistance; and interviewed officials from state administering agencies that oversee the JAG program in their state.
We spoke with and reviewed documentation from Department of Justice officials on the agency’s coordination with, guidance to, and oversight of grant recipients. We interviewed state officials and reviewed relevant state documentation to determine and clarify states’ plans for using JAG funds awarded to the state and their progress in using and overseeing those funds. We did not review JAG grants awarded directly to local governments in this report because the Bureau of Justice Assistance’s (BJA) solicitation for local governments closed on June 17; therefore, not all of these funds had been awarded. For the Weatherization Assistance Program, we reviewed relevant laws, regulations, and federal guidance and interviewed Department of Energy (DOE) officials who administer the program at the federal level. We also coordinated activities with officials from the department’s Office of Inspector General. In addition, we conducted semistructured interviews with officials in the states’ energy agencies that administer the weatherization program. We collected data about each state’s total allocation for weatherization through the Recovery Act, as well as the initial allocation already sent to the states. We asked DOE officials about their timetable for reviewing state energy plans and when they would provide the next allocation to the states. Finally, we reviewed the state weatherization plans to determine how each state intends to allocate its funds and the outcomes it expects. To better understand how states and the District are using Recovery Act funds to stabilize government budgets, we reviewed enacted and proposed state budgets and revenue estimates for state fiscal years 2008-2009 and 2009-2010. In addition, we reviewed reports and analysis regarding state fiscal conditions.
We interviewed state budget officials to determine how states are using Recovery Act funds to avoid reductions in essential services or tax increases and developing exit strategies to plan for the end of Recovery Act funding. We also consulted with researchers and associations representing state officials to better understand states’ current fiscal conditions. To determine how states and localities are tracking the receipt of and use of Recovery Act funds, the state and District teams asked cognizant officials to describe the accounting systems and conventions being used to execute transactions and to monitor and report on Recovery Act expenditures. To determine the current internal control structure in states and the District, we asked cognizant officials to describe and provide relevant documentation about their current internal control program, including risk assessment and monitoring. In addition, to assist in the planning of the audit work, we provided the state and District teams, as well as certain program teams, with available fiscal year 2008 Single Audit summary information. Single Audit information was obtained from the Federal Audit Clearinghouse (FAC) Single Audit data collection forms and the Single Audit reports. We discussed with relevant OMB officials the Single Audit reports and guidance. We also analyzed how OMB was addressing the recommendations related to the Single Audit in the April 2009 Recovery Act report. We also asked auditors to address how they were monitoring and overseeing the Recovery Act. To understand the reporting requirements on the impact of the Recovery Act, we reviewed the guidance issued by OMB on February 18, April 3, and June 22, 2009, as well as selective federal agency guidance related to grants and to states and localities. 
We also interviewed selected state, District, and local officials to understand their views of agency and OMB guidance for reporting on implementation of the Recovery Act, as well as about their plans to provide assessment data required by Section 1512. We collected funding data from www.recovery.gov and federal agencies administering Recovery Act programs for the purpose of providing background information. We used funding data from www.recovery.gov—which is overseen by the Recovery Accountability and Transparency Board—because it is the official source for Recovery Act spending. We collected data on states’ and localities’ plans, uses, and tracking of Recovery Act funds during interviews and follow-up meetings with state and local officials. In addition, we used data collected from state and local officials to report the amount of Recovery Act funding received by states and localities thus far. Based on a preliminary and limited examination of this information, we consider these data sufficiently reliable with attribution to official sources for the purposes of providing background information on Recovery Act funding for this report. Our sample of selected states and localities is not a random selection and therefore cannot be generalized to the total population of state and local governments. We conducted this performance audit from April 21, 2009, to July 2, 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Local school districts (Title I-LEA, IDEA, State Fiscal Stabilization Fund):
Phoenix Elementary School District No. 1
Phoenix Union High School District No. 210
Mesa Unified School District No. 4
Tucson Unified School District No. 1
Imagine Charter Elementary at Desert West Inc.

The following staff contributed to this report: Stanley Czerwinski, Denise Fantone, Susan Irving, and Yvonne Jones (Directors); Thomas James, James McTigue, and Michelle Sager (Assistant Directors); and Allison Abrams, David Alexander, Judith Ambrose, Peter Anderson, Lydia Araya, Thomas Beall, Sandra Beattie, Jessica Botsford, Karen Burke, Richard Cambosos, Ralph Campbell Jr., Virginia Chanley, Tina Cheng, Marcus Corbin, Sarah Cornetto, Robert Cramer, Michael Derr, Helen Desaulniers, Kevin Dooley, Holly Dye, Abe Dymond, Doreen Feldman, Alice Feldesman, Michele Fejfar, Shannon Finnegan, Alexander Galuten, Ellen Grady, Victoria Green, Brandon Haller, Anita Hamilton, Geoffrey Hamilton, Jackie Hamilton, Tracy Harris, Barbara Hills, David Hooper, Bert Japikse, Stuart Kaufman, Karen Keegan, Nancy Kingsbury, Judith Kordahl, Hannah Laufe, Armetha Liles, John McGrail, Sarah McGrath, Jean McSween, Donna Miller, Kevin Milne, Marc Molino, Susan Offutt, Sarah Prendergast, Brenda Rabinowitz, Carl Ramirez, James Rebbe, Audrey Ruge, Sidney Schwartz, Jena Sinkfield, John Smale Jr., Michael Springer, George Stalcup, Jonathan Stehle, Andrew J. Stephens, Gloria Sutton, Barbara Timmerman, Crystal Wesco, Michelle Woods, and Kimberly Young.
This report, the second in response to a mandate under the American Recovery and Reinvestment Act of 2009 (Recovery Act), addresses the following objectives: (1) selected states' and localities' uses of Recovery Act funds, (2) the approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) states' plans to evaluate the impact of the Recovery Act funds they received. GAO's work for this report is focused on 16 states and certain localities in those jurisdictions as well as the District of Columbia--representing about 65 percent of the U.S. population and two-thirds of the intergovernmental federal assistance available. GAO collected documents and interviewed state and local officials. GAO analyzed federal agency guidance and spoke with Office of Management and Budget (OMB) officials and with relevant program officials at the Centers for Medicare and Medicaid Services (CMS) and the U.S. Departments of Education, Energy, Housing and Urban Development (HUD), Justice, Labor, and Transportation (DOT). Across the United States, as of June 19, 2009, Treasury had outlayed about $29 billion of the estimated $49 billion in Recovery Act funds projected for use in states and localities in fiscal year 2009. More than 90 percent of the $29 billion in federal outlays has been provided through the increased Medicaid Federal Medical Assistance Percentage (FMAP) and the State Fiscal Stabilization Fund (SFSF) administered by the Department of Education. GAO's work focused on nine federal programs that are estimated to account for approximately 87 percent of federal Recovery Act outlays in fiscal year 2009 for programs administered by states and localities. Increased Medicaid FMAP Funding: All 16 states and the District have drawn down increased Medicaid FMAP grant awards of just over $15 billion for October 1, 2008, through June 29, 2009, which amounted to almost 86 percent of funds available.
Medicaid enrollment increased for most of the selected states and the District, and several states noted that the increased FMAP funds were critical in their efforts to maintain coverage at current levels. States and the District reported they are planning to use the increased federal funds to cover their increased Medicaid caseload and to maintain current benefits and eligibility levels. Due to the increased federal share of Medicaid funding, most state officials also said they would use freed-up state funds to help cope with fiscal stresses.

Highway Infrastructure Investment
As of June 25, DOT had obligated about $9.2 billion for almost 2,600 highway infrastructure and other eligible projects in the 16 states and the District and had reimbursed about $96.4 million. Across the nation, almost half of the obligations have been for pavement improvement projects because they did not require extensive environmental clearances; were quick to design, obligate, and bid on; could employ people quickly; and could be completed within 3 years.

State Fiscal Stabilization Fund
As of June 30, 2009, of the 16 states and the District, only Texas had not submitted an SFSF application. Pennsylvania recently submitted an application but had not yet received funding. The remaining 14 states and the District had been awarded a total of about $17 billion in initial funding from Education--of which about $4.3 billion has been drawn down. School districts said that they would use SFSF funds to maintain current levels of education funding, particularly for retaining staff and current education programs. They also said that SFSF funds would help offset state budget cuts.

Accountability
States have implemented various internal control programs; however, federal Single Audit guidance and reporting does not fully address Recovery Act risk.
The Single Audit reporting deadline is too late to provide audit results in time for the audited entity to take action on deficiencies noted in Recovery Act programs. Moreover, current guidance does not achieve the level of accountability needed to effectively respond to Recovery Act risks. Finally, state auditors need additional flexibility and funding to undertake the added Single Audit responsibilities under the Recovery Act.

Impact
Direct recipients of Recovery Act funds, including states and localities, are expected to report quarterly on a number of measures, including the use of funds and estimates of the number of jobs created and the number of jobs retained. The first of these reports is due in October 2009. OMB--in consultation with a broad range of stakeholders--issued additional implementing guidance for recipient reporting on June 22, 2009, that clarifies some requirements and establishes a central reporting framework.
|
Public transportation provides many groups with wide-ranging benefits. Those served include transportation-disadvantaged populations, such as “low-income individuals.” Congress created the JARC program in the 1998 Transportation Equity Act for the 21st Century (TEA-21) to support national welfare-reform goals, including helping adults meet new work requirements to receive federal assistance. A purpose of the program was to improve low-income individuals’ ability to access jobs and job-related needs by providing grants to states and localities for the provision of additional or expanded transportation services. Under TEA-21, JARC was a discretionary program, with projects selected by FTA for funding through a competitive process or congressionally designated for funding. Figure 1 provides a timeline of subsequent surface transportation authorizations that affected the program. In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act—A Legacy for Users (SAFETEA-LU) changed JARC into a formula program. FTA apportioned dedicated program funds to states for projects in small urbanized and rural areas, and to large urbanized areas, based on a statutory formula. State transportation agencies were required to be JARC designated recipients for small urbanized and rural areas. Designated recipients for large urbanized areas include major transit agencies and metropolitan planning organizations. SAFETEA-LU also required that designated recipients develop and conduct a competitive selection process for their projects and competitively allocate funds to subrecipients. After projects were selected, designated recipients had to apply to FTA to fund the projects. FTA awarded grants to designated recipients, and funds were obligated at the time of award. Figure 2 below provides an overview of the JARC grants process under SAFETEA-LU.
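To illustrate the mechanics of a formula apportionment like the one described above, the sketch below divides a fixed appropriation among funding areas in proportion to an allocation factor. The area names, dollar total, and the use of a single population-based factor are hypothetical simplifications for illustration; the actual statutory formula and its factors are those defined in SAFETEA-LU, not the ones shown here.

```python
def apportion(total_funds, factor_by_area):
    """Divide a fixed appropriation among areas in proportion to each
    area's share of the allocation factor (a simplified, hypothetical
    stand-in for a statutory apportionment formula)."""
    factor_sum = sum(factor_by_area.values())
    return {area: total_funds * factor / factor_sum
            for area, factor in factor_by_area.items()}

# Hypothetical allocation factors (e.g., eligible population counts)
# for three types of funding areas.
factors = {
    "large urbanized areas": 600_000,
    "small urbanized areas": 250_000,
    "rural areas": 150_000,
}

allocations = apportion(1_000_000, factors)
# Each area receives its proportional share of the $1,000,000 total,
# and the shares sum back to the full appropriation.
```

Under a proportional scheme like this, an area's grant grows with its factor, but raising one area's factor also raises the denominator, so every other area's share shrinks correspondingly.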
Program-eligible activities included, among others: late-night and weekend transit service, expansion of fixed-route public transit routes (e.g., bus routes), ride-sharing activities, local car loan programs, and capital expenditures such as vehicle purchases to support a transit service. In fiscal year 2012, the last year before the program was consolidated, FTA apportioned approximately $176.5 million to urbanized and rural areas. Funds were available for obligation for 3 years from the time of apportionment. JARC funds remain subject to the program requirements that were in place at the time the funds were apportioned. In 2012, the JARC program was one of several FTA grant programs that MAP-21 changed or did not renew. Specifically, the statute consolidated eligible JARC activities into the existing Urbanized Area Formula Program (urban transit program) and Formula Grants for Rural Areas Program (rural transit program). Eligible activities of the consolidated urban and rural transit programs are as follows. Urban transit program: In addition to providing funds for eligible JARC activities in urbanized areas, this program provides funds for activities including: (1) planning, certain operating costs, engineering, design, and evaluation of transit projects and other technical transportation-related studies; (2) capital investments in bus and bus-related activities; and (3) capital investments in new and existing fixed-guideway systems including, but not limited to, rolling stock, overhauling and rebuilding of vehicles, and communications.
Rural transit program: In addition to providing funds for eligible JARC activities in rural areas, this program provides funds for activities including: (1) planning; (2) capital expenses for the acquisition of buses and vans or other paratransit vehicles; (3) preventive maintenance, vehicle rehabilitation, remanufacture, or overhaul; and (4) operating expenses directly related to system operations. We have previously noted that consolidation was seen as a means of continuing the goals of the program while offering greater efficiency and flexibility, including decreasing administrative burdens, to recipients. FTA communicated changes that MAP-21 made to public transportation programs, including the JARC program, to designated recipients through program guidance and outreach. Initially, FTA issued a Federal Register notice regarding FTA transit program changes in October 2012. The notice provided interim guidance and emphasized that under MAP-21, activities that were funded through the former JARC program are eligible under the urban transit program and the rural transit program. The notice also stated that there is no set-aside or spending cap on JARC activities. After the 2012 consolidation, DOT applied all of the JARC eligibility requirements and existing activities to the urban transit program and rural transit program. For example, FTA officials explained that they incorporated a unique aspect of grantee eligibility under the JARC program—the eligibility of private nonprofit organizations as subrecipients—in the urban transit program. FTA issued circulars with the final MAP-21 program guidance in 2014. The circulars reiterated that there is no limit to the amount of allocated and unobligated formula funds that can be used for JARC activities until Congress rescinds or redirects the funds to other programs, and outlined the eligibility requirements for these activities.
As part of its outreach to funding recipients, FTA held webinars highlighting new and consolidated programs under MAP-21, and the JARC program was among the programs covered. Additionally, FTA officials stated that headquarters and regional FTA officials made themselves available through conferences, or other means such as telephone calls or e-mail, to answer questions or concerns regarding the consolidation and its potential impacts on JARC activities. Selected recipients we spoke with said they found out about changes to the JARC program through FTA webinars or designated JARC recipients. Of the 19 selected designated recipients we interviewed, a majority (12 of 19) said they found out about the JARC consolidation primarily through webinars and FTA’s summer 2012 conference. Some also reported they viewed their contact with FTA officials as a useful resource as they navigated the program changes. For example, an official with the Texas Department of Transportation said he found FTA staff to be the most important resource during program changes, because the staff was available for questions and feedback. Furthermore, this official told us that his agency had meetings twice a year with subrecipients, and FTA staff joined those meetings to assist in answering a range of questions or concerns, including those on JARC activities, from subrecipients. Of the 15 selected subrecipients we interviewed, about half (7) reported finding out about the JARC program changes through their state’s transportation agency or other designated recipient. For example, an official with a Mississippi subrecipient said the agency found out about the JARC consolidation and its implications at the annual workshop held by the Mississippi Department of Transportation to assist agencies with grant applications. In Arizona, an official with a selected subrecipient said he learned about the JARC program changes through the metropolitan planning organization’s committee meeting.
Of the 34 total selected recipients we interviewed, 22 stated they found the guidance provided to them, either from FTA directly or through their respective designated recipients, to be sufficient and effective in communicating program changes. For example, an official from a designated recipient in Mississippi who characterized FTA’s communications as effective noted that the information was readily available, but that it was primarily up to recipients to read the guidance and follow up with FTA on any questions or concerns. An official from a selected subrecipient in Virginia said that the state’s communication regarding the MAP-21 program changes was good and that FTA’s written resources also provided useful details on the changes. Almost all (30 of 34) selected recipients reported that they have continued providing JARC activities, such as transportation services, that were started prior to the 2012 MAP-21 consolidation. About two-thirds (22) of the selected recipients we interviewed have maintained all such activities. Officials from half of the selected recipients (17) reported that they still are expending dedicated JARC program funds received prior to the 2012 statutory changes to support their ongoing JARC activities, and that they plan to continue using those funds until they are depleted. Further, all selected recipients reported that they have had to address, or anticipate having to address, service or funding challenges in order to continue their JARC activities. Despite reported challenges, all selected recipients except one reported they are working to maintain their JARC activities, including service hour expansions and commuter bus routes. Some selected recipients reported that it is too soon to tell whether they will need to make additional adjustments to their JARC activities in the future because of possible year-to-year funding challenges. 
Of the 34 selected recipients we interviewed, 30 had continued JARC activities, such as ongoing transit service, started prior to the MAP-21 changes. These selected recipients reported that they have maintained some level of JARC activities using a number of funding sources, including urban or rural transit program funds and remaining JARC program funds. Specifically, about two-thirds of selected recipients (22 of 34) reported continuing all of their previous JARC activities to date, and just under one-quarter (8 of 34) said they continue to provide downsized or redesigned JARC activities. The remaining 4 selected recipients had used their JARC program funds to complete short-term activities, such as capital expenditures for vehicles or pilot transit programs, which were not affected by the MAP-21 changes. Through analysis of our interviews with selected recipients, we identified a number of reasons recipients have been able to continue JARC activities after the MAP-21 program changes. These reasons include the following: Remaining dedicated JARC funds: Seventeen of the 34 recipients reported that they have not yet expended all of their dedicated funds and continue to use them to fund JARC activities. Officials from 5 selected recipients reported that when MAP-21 was implemented in 2012, they started planning their expenditure of remaining dedicated funds to sustain existing JARC activities into future years. For example, officials at Texas Department of Transportation, a selected designated recipient, reported redistributing dedicated funds that their subrecipients had not expended in a timely manner to other eligible subrecipients’ ongoing JARC activities. Those funds were fully expended in August 2016. 
Additionally, in Oregon, officials from Tri-County Metropolitan Transportation District of Oregon (TriMet), a selected designated recipient, said they distributed remaining dedicated JARC funds to subrecipients, which used this funding in combination with local funds to maintain their transit services connecting residential neighborhoods and employment centers to longer or more heavily used transit routes. The officials said they expect to close out their final dedicated program grant in June 2017. Overall federal transit formula funding amounts were unchanged: None of the 19 selected designated recipients we interviewed reported a significant overall change to their annual FTA formula funding allocations attributable to the statutory changes. In addition, five of those designated recipients noted that the changes led to overall benefits, including reduced administrative burden and increased spending flexibility of federal funds. For example, officials from Maricopa Association of Governments, a selected designated recipient in Arizona, explained that they were able to re-evaluate ongoing service and capital needs and reallocate some unused funds (in this case, urban transit program funds that they had initially chosen to set aside for JARC activities) to undertake bus stop capital upgrades. According to the officials, the reallocation had no impact on their subrecipients’ JARC activities, and the added funding flexibility was beneficial to the region’s transit system as a whole. JARC activities integrated into existing transit services: Half of selected recipients (17 of 34) reported that their JARC program-funded activities fulfilled a variety of citizens’ transportation needs, encouraging the providers to continue those JARC activities using alternate funding sources after the 2012 statutory changes.
For example, officials with Ride Connection, a selected Oregon subrecipient, stated that their ongoing JARC transit service has become an important transportation option serving the broader community with trips to schools and grocery stores, in addition to job centers. A dedicated program grant currently funds this service, but officials we spoke with said they are actively seeking alternate funds to maintain it. Additionally, officials with Blue Water Area Transit, a selected Michigan subrecipient, told us that late-night bus service initially funded under the JARC program has become popular with a wider population, helping increase ridership on their other transit services. Due to the ridership increase, officials reported that the agency is able to cover a greater portion of operating expenses using revenue from fares, providing an overall benefit to the system. Four of the recipients we selected provide transit service in university communities. Those transit officials said that students, faculty, and other campus employees make up a large portion of transit riders in their jurisdictions. This customer base creates demand for expanded transit service hours, like extended weekday hours of operation and weekend service, which JARC program grants had previously funded. These selected recipients all reported that they now receive dedicated local transit funding from the university or surrounding jurisdiction to maintain these services. Importance of JARC activities to the community: Officials from just over one-third of selected recipients we interviewed (12 of 34) stated that transportation for low-income workers is a high priority in their communities, and they are willing to seek out additional funds in order to sustain JARC activities. 
For example, TriMet reported receiving political support from its regional business community for transit access to low-wage employment centers, particularly during off-peak hours such as evenings and weekends when transit service can be limited. TriMet officials explained that their transit system attracts relatively high interest from employers because their main local funding source is a payroll tax. Additionally, officials with Flint Mass Transportation Authority, a selected Michigan designated recipient, reported that limited job availability in their metropolitan area creates a high demand for JARC commuter bus routes that service employment centers in other jurisdictions. According to the officials, the funding needs of their commuter service limit the amount of funding available for other needs such as capital purchases. Officials from the eight selected recipients that continue to provide downsized or redesigned JARC activities described service modifications and other changes that have enabled them to maintain a level of JARC activities that helps meet low-income riders’ needs. Reported adjustments fell into three categories: Realigning service and funding: Two selected recipients reported redesigning their JARC activities. Lower Rio Grande Valley Development Council, a selected designated recipient in Texas, reported realigning one 50-mile-long JARC-funded route into three separate routes to more efficiently meet rider needs. These bus routes serve affordable housing projects in rural areas—locations that, according to officials, had limited transit options before the service began—and job centers in urban areas such as industrial parks and universities.
Additionally, officials with Suburban Mobility Authority for Regional Transportation, a selected Michigan designated recipient, reported that one of their subrecipients, a nonprofit transit provider, adjusted service on a JARC-funded demand-response bus service to better align it with community needs and funding sources. The officials said the service primarily served elderly and disabled riders; after fully expending dedicated JARC funds, the officials have used funds from FTA’s Enhanced Mobility of Seniors and Persons with Disabilities program to maintain the service. However, they also said that as a result of this funding change, the service’s target ridership now focuses exclusively on seniors and persons with disabilities in order to comply with program requirements. Adjusting service hours or frequencies: Two selected recipients informed us that they were unable to sustain all activities started with JARC program grants, but have adjusted service hours and frequencies to allow them to continue some such activities after the program changes. Officials with the Oregon Department of Transportation, a selected designated recipient, reported that a number of their subrecipients that provide transit service used dedicated JARC program grants to fund enhancements such as expanded service hours, more extensive service areas, and increased bus frequencies. After the statutory changes, some of Oregon Department of Transportation’s subrecipients had to partially cut back their transit service enhancements, but aspects of the enhancements remain. More specifically, officials said that one of their subrecipients maintained 2 of the 4 hours of enhanced bus service, but eliminated the 8 p.m. to 10 p.m. window after the JARC funds ran out.
Blacksburg Transit, a selected Virginia subrecipient, reported reducing service on a JARC commuter bus route from two round trips per day to one daily round trip, which it said allowed it to fulfill a community need while reducing the cost of the service. Discontinuing select low-ridership JARC activities: Officials with 4 selected recipients explained that they discontinued some JARC activities with limited ridership. According to these recipients, they have found ways to provide alternate services for transportation-disadvantaged individuals, including those commuting to low-wage jobs. For example, officials with Detroit Department of Transportation, a selected designated recipient in Michigan, reported discontinuing a demand-response service after their dedicated JARC funding was expended. The officials explained that they used city resources to assist individuals that used the demand-response service in finding more transit-accessible workplaces, and utilized federal and local funds to enhance existing fixed-route bus service. Additionally, officials with Brownsville Metro, a selected subrecipient transit agency in Texas, reported that they discontinued one of their JARC bus routes when a local university that directly benefited from the service could no longer provide the required matching funds to maintain the route. However, a different JARC route funded through a partnership with a neighboring county continues operation. Selected recipients reported facing service and funding challenges in continuing JARC activities after the statutory changes in 2012. FTA officials explained that low-income transit users may live or work in areas that are not adjacent to heavily serviced transit areas, a situation that can make JARC activities more expensive to provide than other transit service.
A majority of our selected recipients (25 of 34) reported service challenges due to factors such as the location of housing or employment and the work characteristics of low-income individuals (e.g., working late-night hours or weekends). Additionally, officials with almost all selected recipients (33 of 34) reported facing challenges pertaining to funding. For example, many noted that they have had difficulty competing for funds against other public services. Table 1 below highlights some of the service and funding challenges our selected recipients identified, as well as examples of methods some selected recipients used to address these challenges. A substantial amount of dedicated JARC funds apportioned in 2012 and earlier under SAFETEA-LU remains unspent. According to FTA data, as of February 2017, approximately 265 federally funded JARC program grants, accounting for about $147.9 million, were still open with funds to expend. As noted earlier, half of our selected recipients reported that they have not yet expended all of their dedicated federal JARC funds. Those with dedicated funding still available may not yet have faced the decision of whether to continue to fund—or at what level to fund—eligible JARC activities under the urban or rural transit program, whether to fund other eligible activities under those programs, or whether to seek additional nonfederal funds to continue their eligible JARC activities. Officials with nine of the selected recipients with dedicated funds remaining said they have not yet finalized plans for continuing their current JARC activities after the funds are expended and that they were not sure if the activities would be maintained as-is or be adjusted. For example, Sun Metro Mass Transit Department, a selected designated recipient in Texas, reported that it plans to re-evaluate eligibility requirements for its JARC activities and analyze its regional transit priorities before making a final future funding decision.
Also, officials with RADAR Transit, a selected Virginia subrecipient, reported that they are working to educate regional government officials about the benefits of the current JARC activities in advance of dedicated funds being expended, but the officials are not sure whether they will be able to obtain local funds to replace their expended monies. The other eight of the 17 selected recipients with dedicated funds remaining, however, have developed plans to continue their JARC activities after spending all of the funds. Officials from 7 of the 17 selected recipients that have already expended all of their dedicated funds stated that they are uncertain whether they will be able to obtain long-term funding to continue to support JARC activities into future years. For example, a selected subrecipient transit agency in Michigan reported that to date the state has provided funding in the annual budget to sustain its JARC activities. However, that funding is not guaranteed to continue from year to year. Officials with a selected Mississippi subrecipient nonprofit transportation provider explained that the funding amounts they receive from local jurisdictions for their rural JARC transit service can be inconsistent, making it challenging for them to cover the local portion of their operating expenditures. Also, officials with Charlottesville Area Transit, a selected Virginia subrecipient, explained that the level of local and state funding, which they currently use to help sustain expanded transit service hours started with a JARC program grant, could change in the future. According to the officials, at any time their jurisdiction could redirect municipal funds to non-transit activities such as the city’s fire or police departments. Furthermore, these officials reported that Virginia’s state gas tax, which provides transit operating funds, is scheduled to sunset in 2019 and might not be renewed.
However, all seven selected recipients that have expended their dedicated funds and face such uncertainty indicated that they are seeking sustained, long-term funding sources to help them continue JARC activities in the future. We provided a draft of this report to DOT for comment. DOT had no comments on this report. We are sending copies of this report to the Administrator of FTA, the Secretary of the Department of Transportation, and interested congressional committees. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the individual named above, the following individuals made important contributions to this report: Heather MacLeod (Assistant Director), Betsey Ward-Jenks (Analyst-in-Charge), Dwayne Curry, Andrew Furillo, Delwen Jones, Elizabeth Wood, Rachel Frisk, Cheryl Peterson, and Max Sawicky.
|
Established in 1998, the JARC program, which was administered by FTA, provided grants to states and localities for improving the mobility of low-income individuals to and from jobs and employment-related activities. In 2005, a statutory change made JARC a formula program with dedicated funds apportioned by FTA. Then in 2012, MAP-21 repealed and consolidated the JARC program. However, activities previously funded through the JARC program are still eligible for funding through other programs administered by FTA. The Fixing America's Surface Transportation Act included a provision for GAO to examine the impact of changes that MAP-21 had on public transportation. This report examines: (1) how FTA communicated the 2012 statutory changes to JARC activities to transit providers and (2) whether and how selected states and transit providers have continued to fund and provide JARC activities since the 2012 statutory changes. GAO reviewed FTA program guidance. GAO interviewed FTA officials and 34 selected recipients of JARC funding, including transportation officials and transit providers, from six states (Arizona, Michigan, Mississippi, Oregon, Texas, and Virginia). GAO judgmentally selected these JARC recipients for interviews to represent diverse geographic locations, including large-urban, small-urban, and rural funding areas within each state, among other factors. GAO provided a draft of this report to DOT for comment. DOT had no comment on this report. The Federal Transit Administration (FTA) communicated changes that the Moving Ahead for Progress in the 21st Century Act (MAP-21) made to public transportation programs, including the Job Access and Reverse Commute (JARC) program, to designated recipients through program guidance and outreach.
For example, in an October 2012 Federal Register notice, FTA addressed the consolidation of the JARC program and its eligible activities (JARC activities) into the existing Urbanized Area Formula Program (urban transit program) and the Formula Grants for Rural Areas Program (rural transit program). Additionally, in 2014, FTA issued final guidance on JARC activities allowed under the urban and rural transit programs. In addition to written guidance, FTA officials offered webinars and made staff available via in-person meetings, calls, and e-mail messages to answer any questions or concerns about the 2012 program change. Of the 34 selected JARC recipients (selected recipients) GAO interviewed, most (22) stated that they found the guidance provided to them, either by FTA directly or by other means, to be sufficient and effective in communicating program changes. Most of the selected recipients we interviewed (30 of 34) said that they have continued to provide some level of JARC activities after the 2012 statutory changes. Selected recipients cited several reasons why they continue—or plan to continue—providing JARC activities, including: recipients still have remaining dedicated JARC funds, overall federal transit formula amounts were unchanged, and JARC activities have already been integrated into existing transit services. The 34 selected recipients GAO interviewed also identified an array of challenges in continuing their JARC activities. All but one of the selected recipients specified funding challenges related to MAP-21 changes. In particular, 21 selected recipients reported JARC activities had difficulty competing against other public services, including transit services, for funds, and 17 reported year-to-year funding allocations for continued JARC activities could change. In addition, 25 selected recipients specified service challenges linked to the location of low-income housing and employment centers. 
GAO found that it is too soon to determine the full impact of the statutory changes on JARC activities. This finding is in part due to the number of grant recipients and subrecipients still spending dedicated funds apportioned before the 2012 statutory changes. Half of the 34 selected recipients GAO interviewed reported that they were still spending dedicated JARC funds, and recent FTA data indicate that as of February 2017, there were approximately 265 active program grants funding JARC activities. Officials from 7 of the 17 selected recipients that have already expended all of their JARC program funding stated that they are uncertain whether they will be able to obtain long-term funding to continue to support JARC activities into future years.
AOC is responsible for the maintenance, renovation, and new construction of the buildings and grounds primarily located within the Capitol Hill complex. Organizationally, AOC consists of a centralized staff that performs administrative functions and separate “jurisdictions” responsible for the day-to-day operations at the U.S. Capitol Building, the Senate Office Buildings, the House Office Buildings, the Library of Congress Buildings and Grounds, the Supreme Court Buildings and Grounds, the Capitol Grounds, the Capitol Power Plant, and the Botanic Garden. The historic nature and high profile of many of these buildings create a complex environment in which to carry out AOC’s mission. AOC must perform its duties in an environment that requires balancing the divergent needs of congressional leadership, committees, individual members of Congress, congressional staffs, and the visiting public. The challenges of operating in this environment were complicated by the events of September 11, 2001, and the resulting need for increased security. We issued a report in January 2003 that contained 35 recommendations to assist AOC in establishing a strategic management and accountability framework, including strong management infrastructure and controls, to drive its agency transformation effort and to address long-standing program issues. As part of ongoing work to monitor AOC’s efforts to implement the recommendations, we issued a report in January 2004 that covered the agency’s progress from January 18, 2003, through November 30, 2003. In our January 2004 report, we reiterated that many of AOC’s management problems are long-standing and that organizational transformation would take time to accomplish. Not surprisingly, AOC’s efforts to address the 35 recommendations were, at that time, very much a work in progress. 
We highlighted the agency’s first steps to develop a management and accountability framework, including the issuance of a draft strategic plan and efforts to strengthen individual accountability for goals. We noted, however, that additional steps were needed to enhance communications with congressional and other stakeholders and employees. In addition, we found that AOC had begun to draft new human capital policies and procedures and had developed three broad-based action plans to help institutionalize financial management best practices, although many of the actions were not scheduled for completion until 2007. We also reported that AOC was making progress in developing an agencywide approach to information technology (IT) management, although additional steps were needed to ensure that, for example, mature information security management, investment management, and enterprise architecture (EA) management processes would be implemented. In addition, we found that AOC was beginning to address worker safety concerns by developing a hazard assessment and control policy, although the policy was not expected to be fully implemented until May 2006; was taking steps to establish a project prioritization framework for better management and accountability; and was progressing toward adopting a more strategic approach to recycling. Overall, we concluded that AOC had fulfilled three recommendations, and we made four additional recommendations, bringing the number of open recommendations to 36. This report is part of our effort under a congressional mandate to monitor AOC’s progress in establishing a strategic management and accountability framework, improving its management infrastructure and internal control, and addressing long-standing areas of concern. 
Our first objective was to assess AOC’s progress over the 6-month period from December 1, 2003, through May 31, 2004, on eight key issues that deserve near-term attention and focus: (1) stakeholder involvement, (2) employee communications, (3) auditable financial statements and related internal controls, (4) financial reporting for operating units and cost accounting, (5) information security management, (6) worker safety performance measures, (7) Capitol complex master planning, and (8) strategic management of recycling. Our second objective, which was mandated by the Consolidated Appropriations Resolution, 2003 (Public Law 108-7), was to assess AOC’s COO action plan that was issued in December 2003. To address our first objective, we collected documentation to determine AOC’s progress on addressing each key issue and the 16 corresponding prior recommendations. For example, we reviewed documents such as AOC’s 5-year strategic and annual performance plans; strategic communications plan; process manuals; funds control administration order; policy for inventory management; security risk management policy; information security training, education, and awareness policy; IT security metrics policy; inspection, audit, and evaluation policy; and updated occupational safety and health program plan. We also interviewed AOC officials responsible for implementing the 16 recommendations and other related improvement efforts under way at AOC. To address our second objective, we reviewed and analyzed several documents, including (1) the requirements of the Deputy Architect/Chief Operating Officer Action Plan described in the legislation; (2) the Report to the Congress from the Deputy Architect/Chief Operating Officer, dated December 22, 2003; (3) the Deputy Architect/Chief Operating Officer Action Plan, dated December 22, 2003; and (4) the Architect of the Capitol Strategic Plan, dated December 15, 2003. 
We conducted our work in Washington, D.C., from April 2004 through July 2004 in accordance with generally accepted government auditing standards. As we stated in our January 2003 and January 2004 reports, it is critical for AOC to engage Congress and other stakeholders to ensure that its strategic and other planning efforts fully consider the interests and expectations of these stakeholders. Successful stakeholder involvement requires continuously engaging Congress and other stakeholders to obtain their input and feedback for organizational or operational changes at AOC, ensuring that AOC’s finite resources are efficiently targeted at the highest project priorities, and fostering a basic understanding with Congress and other stakeholders of how to balance competing demands. Successful stakeholder involvement also includes improving AOC’s accountability reporting, which helps communicate what AOC has accomplished and its plan for continued progress. In our January 2003 report, we made three recommendations that would help AOC improve its relations with Congress and other stakeholders: Improve strategic planning and organizational alignment by involving key congressional and other external stakeholders in AOC’s strategic planning efforts and in any organizational changes that may result from these efforts. Develop a comprehensive strategy to improve internal and external communications by improving annual accountability reporting through annual performance planning and reporting. 
Develop a comprehensive strategy to improve internal and external communications by involving stakeholders in completing the development of congressional protocols. As noted in our January 2004 report, AOC improved its strategic planning process and provided more specificity to its strategic goals and objectives, as well as developed milestone dates and actions to assist AOC in monitoring its progress. AOC also made progress in improving annual accountability reporting by implementing a strategic management framework, which includes issuing a strategic plan every 2 years, developing an annual performance plan, and developing an annual performance report that discusses how AOC is progressing on meeting its goals, as well as holding midyear status briefings. An AOC official stated that the agency recognizes that the next step in the strategic planning process is to more fully incorporate outcome- and performance-based measures into the agency’s strategic and performance plans. AOC has recently developed a statement of work to seek assistance in developing both quantitative and qualitative performance measures to demonstrate progress toward its strategic goals and objectives. In addition, we noted that AOC was partially addressing the development of congressional protocols. To further assist these efforts, we made an additional recommendation for AOC to conduct a pilot of its congressional protocols in one or more of its jurisdictions to determine how well its protocols would work in addressing customer requests for service while balancing the needs of multiple requests with the requirements of the strategic plan and corresponding project priorities of the agency. During the 6-month review period, AOC did not fully engage its congressional and other stakeholders. For example, AOC has not reached agreement with Congress on how best to develop a clear, transparent, and documented understanding of how AOC sets project priorities and how progress will be assessed. 
In addition, as detailed in other sections of this report, AOC needs to expand Congress’ involvement in the development of the Capitol complex master plan and the mission and goals for the agency’s recycling program. AOC did take some steps to inform these stakeholders by delivering planning documents to Congress, responding to requests for information, and attending monthly or biweekly meetings with congressional stakeholders. AOC officials stated that they delivered AOC’s final strategic plan, performance plan, and COO action plan to congressional stakeholders in December 2003 to inform them of planned activities. (See our assessment of the COO action plan below.) AOC also generated a detailed report, at the request of its congressional stakeholders, that outlined AOC’s progress in implementing our recommendations. Furthermore, AOC took some steps to communicate with congressional and other stakeholders by attending monthly meetings held by House leadership; biweekly meetings with other House stakeholders, such as the House Clerk and Sergeant at Arms; monthly meetings with the Senate Rules Committee; and bimonthly meetings with the Senate Sergeant at Arms. AOC is also preparing to issue its third building services customer satisfaction survey in June 2004 to measure the agency’s performance. However, it has yet to release the results of the 2003 survey, originally scheduled for release in December 2003, because several AOC jurisdictions have yet to approve it. As we reported in January 2003, AOC drafted an initial set of congressional protocols to help AOC work with its congressional customers using clearly defined, consistently applied, and transparent policies and procedures. Since then, AOC completed draft congressional protocols and submitted them to key congressional stakeholders. 
An AOC official stated that, based on stakeholder feedback, the protocols are not viable because congressional stakeholders do not believe they are applicable to the operation of and services conducted by AOC. Based on this feedback, an AOC official said that the agency may not pursue implementing the congressional protocols. As a legislative branch agency, we have found congressional protocols to be a successful vehicle in helping us work with Congress. Successful development and implementation of these protocols first required us to reach out to and solicit feedback from congressional and other stakeholders to develop an understanding of congressional customer needs and identify concerns before we developed an initial draft. Further, we continuously engaged our congressional and other stakeholders throughout pilot testing and implementation. Thus, if AOC decides not to further pursue the congressional protocols or some other vehicle, neither AOC nor congressional stakeholders can be assured that agency efforts and resources are targeted at the highest project priorities or that transparency exists for strategic and tactical decisions and trade-offs. AOC has taken some steps to involve congressional and other stakeholders; however, it has yet to fully engage these stakeholders. AOC can strengthen its stakeholder relationships by informing congressional and other stakeholders of AOC’s progress and activities, as well as by more effectively consulting with these stakeholders to build a mutual understanding of each other’s priorities. 
To strengthen the relationship between AOC and its congressional and other stakeholders, we recommend the Architect of the Capitol direct the Chief Operating Officer to actively consult with Congress on the design and implementation of meaningful outcome- and performance-based measures that are useful to both AOC and Congress and thereby enable AOC and Congress to assess AOC’s progress; expedite the release of the 2003 building services customer satisfaction survey, as a transparency and accountability mechanism and to provide Congress and other stakeholders assurance that actions are being taken in response to their feedback; and work with Congress on the design and implementation of a transparent process to facilitate an understanding between AOC and its congressional stakeholders about how AOC targets its efforts and resources at the highest project priorities and how strategic and tactical decisions and trade-offs are made. Strong internal communication with employees is vital to any organizational transformation: it helps employees understand their contribution to overall agency goals and facilitates feedback that helps an organization develop strategies for improvement. Strong employee communications would also help AOC address its history of employee relations problems and complaints. Effective communication efforts include receiving employee input, which can be obtained from existing offices that interact directly with employees or via other methods, including employee surveys or focus groups. Regardless of the source, systematically collecting and analyzing employee data are important for identifying agencywide issues affecting employee relations and improving human capital policies and procedures. Another useful practice for dealing with issues affecting employee relations and collecting employee data is to establish an ombudsperson position. 
In our January 2003 report, we made three recommendations to improve AOC’s ability to communicate with employees: Develop a comprehensive strategy to improve internal and external communications by providing opportunities for routine employee input and feedback. Strengthen AOC’s human capital policies, procedures, and processes by assessing ways in which AOC management could better gather and analyze data from the various employee relations offices and employee advisory council while maintaining employee confidentiality. Establish a direct reporting relationship between the ombudsperson and the Architect consistent with professional standards. In our January 2004 report, we noted that AOC was partially addressing the development of a comprehensive strategy to improve internal and external communications. As such, we made an additional recommendation for AOC to gather and analyze employee feedback from focus groups or surveys before fiscal year 2005, as well as communicate how it is taking actions to address any identified employee concerns. Over the 6-month period reviewed, AOC continued to make progress addressing employee communications by obtaining employee input and providing employees with feedback, as well as assessing the data gathered during these efforts. To help implement these steps, AOC management issued a communications plan, a draft employee feedback manual, a customer satisfaction survey manual, and a focus group guide. Specifically, AOC’s Employee Feedback Process Manual stated that it is AOC’s policy “to periodically and systematically obtain and report employee feedback, to assess satisfaction levels on activity-related topics, and to use this feedback in improvement actions.” To obtain employee input and provide employee feedback, AOC developed a guide that outlines procedures for conducting focus groups and discusses AOC’s intent to contract out the facilitation of focus groups. 
AOC management also prepared a statement of work that provides requirements a contractor must follow when conducting focus groups, including a requirement for providing a written report containing general conclusions and findings. AOC officials stated that they awarded the contract to facilitate AOC’s focus groups on June 28, 2004, so that the focus groups may begin no later than August 1, 2004. AOC also obtains employee input by requiring key managers to document and track employees’ concerns or issues each month. Key AOC officials from AOC’s Employee Assistance Programs, Labor Relations, Employment Counsel, Equal Employment Opportunity, Employee Advisory Council (EAC), senior management, and Human Resources Management Division (HRMD) meet monthly to assess employee concerns and identify systemic concerns. Agency officials stated that at each of the meetings, all attendees are reminded that employee confidentiality must be maintained. In May 2004, the attendees agreed to pursue a number of actions in response to employee concerns discussed at the monthly meetings. Furthermore, AOC has included an objective in its Human Capital Plan to solicit feedback from its EAC. AOC also plans to use exit interviews to collect employees’ views upon leaving AOC, as well as to regularly obtain input from employees through its customer satisfaction surveys, to gauge AOC’s internal customers’, including employees’, perceptions of AOC’s performance. To provide feedback to its employees, AOC continues to publish its weekly and quarterly agencywide newsletters (AOC This Week and Shop Talk) and issues HR Bulletins that provide updates on changes in human resource practices and highlight information regarding issues such as employee benefits. AOC also published a pamphlet that summarizes the agency’s strategic plan. 
According to agency officials, every AOC employee has received a copy of the pamphlet, and “town hall” briefings have been held with over 400 employees to further discuss the strategic plan. Regarding the use of an ombudsperson, AOC officials agreed with the recommendation in our January 2004 report to adhere to the standard of independence for the office of an ombudsperson. Agency officials also noted that direct meetings between the Ombudsperson and the Architect were planned, but they never took place because the Ombudsperson’s contract expired on September 30, 2003, and the position has not been filled. AOC does not have plans to fill the position of ombudsperson. According to AOC officials, the ombudsperson was to serve as an independent provider of advice and counsel to nonunion employees on employment policies, practices, and other employment-related matters. The ombudsperson’s duties and responsibilities focused on resolving employee issues that may not have been resolved by other offices, according to AOC officials. AOC has made progress in addressing employee communications issues by creating, in its communications plan and process manuals, the basic framework by which AOC will regularly obtain employee input, systematically analyze those data, and provide feedback to employees on improvement actions that result from those efforts. For instance, AOC has committed itself to initiating employee focus groups not later than August 1, 2004. AOC also needs to issue and implement its draft Employee Feedback Manual. AOC can maintain momentum in this area by fully implementing the policies and procedures that it drafted or issued as part of its employee communication efforts. As we noted in our October 2003 report on the Government Printing Office’s transformation, developing a comprehensive communications strategy that reaches out to employees and seeks to engage them in a two-way exchange helps to build trust and cultivate stronger working relationships. 
Most important, AOC’s top leadership must make visible and timely adjustments, as appropriate, in policy or procedures in response to employee concerns. AOC’s decision not to fill the vacant ombudsperson position raises questions as to whether the services previously provided by the ombudsperson are being met by other offices or are no longer needed. Given the history of employee relations issues at AOC, a critical component of such a management decision would be to conduct a thorough analysis of the agency’s and employees’ needs, as well as an assessment of the capacity of existing offices, both internal and external to AOC, to fulfill those needs. Without a thorough analysis, AOC cannot be assured that the need for an ombudsperson no longer exists or that AOC units are prepared to fulfill the responsibilities an ombudsperson would have performed. To improve communications with employees, we recommend that the Architect of the Capitol direct the Chief Operating Officer to fully and effectively implement the basic framework as defined in its communications plan and process manuals, finalize its draft employee feedback manual to ensure that the progress already made is maintained, and conduct an analysis of both AOC management and employee needs with respect to resolving employee concerns and issues, as well as assess the capacity of existing offices to fulfill those needs. Preparing auditable financial statements and establishing related internal controls are fundamental components of a foundation of control and accountability. In our January 2003 and January 2004 reports, we discussed the value of a system of checks and balances over assets and financial reporting. These steps are a key foundation for implementing our January 2003 recommendation that AOC continue to improve its approach to financial management by institutionalizing practices that will support budgeting, financial, and program management. 
AOC’s goals of preparing auditable financial statements and establishing effective internal controls are significant components of AOC’s plan to build a foundation of financial control and accountability—one of the three broad-based action plans established by AOC’s Office of Chief Financial Officer to respond to our recommendation. In preparing for an audit of its September 30, 2003, balance sheet, AOC prepared a full set of agencywide financial statements for fiscal year 2003 and has, in fiscal year 2004, begun preparing quarterly agencywide financial statements. This first-ever audit of AOC’s balance sheet is nearing completion and AOC officials expect the auditor’s report to be issued in the near future. Completing the first agencywide balance sheet audit represents a key step in AOC’s efforts to build a foundation of financial control and accountability. As we reported in January 2004, AOC planned to begin issuing a complete set of audited financial statements for fiscal year 2004. However, according to AOC officials, the fiscal year 2003 audit has taken longer and required more effort to support than initially planned and, as a result, management has now modified its plan for the fiscal year 2004 audit. In particular, AOC officials noted that establishing the historical cost for AOC building and improvement assets and values for AOC liabilities took longer and required more work than initially expected. AOC has decided to forgo an audit of a complete set of financial statements for fiscal year 2004 and instead will have its September 30, 2004, balance sheet audited. 
According to AOC officials, after consulting with AOC’s audit committee and its auditor, AOC decided to defer the audit of a full set of statements to fiscal year 2005 because of the limited time and CFO staff resources available to begin preparing for an audit of a complete set of statements for fiscal year 2004 and the need to work on strengthening internal controls, including ongoing efforts to address issues identified during the fiscal year 2003 audit. In conjunction with the audit of the September 30, 2004, balance sheet, AOC officials plan to ask the external auditor to perform additional procedures, which will be designed to help AOC better prepare for the audit of a complete set of AOC financial statements for fiscal year 2005. However, even though the auditor’s work for fiscal year 2004 will be limited to a balance sheet audit and some additional procedures, AOC has not revised its stated goal of receiving an unqualified opinion on a complete set of fiscal year 2005 financial statements. Regarding progress on internal controls during the 6-month period, AOC has adopted and implemented key policies and procedures in the areas of account reconciliations and funds control administration, and institutionalized its policies on inventory management and control. In addition, during this period, AOC’s external auditor has, as part of the balance sheet audit, been reviewing relevant internal controls. Account reconciliations, such as those adopted by AOC for accounts receivable and fund balance with Treasury, are a fundamental control over financial reporting. Funds control administration is a group of control processes that provide a means of establishing responsibilities and delegating authority to the managers who are to be accountable for the use and control of appropriated funds. In addition, AOC has made progress in institutionalizing its policies on inventory management and control. 
AOC finalized an inventory management policy in January 2004 and, shortly thereafter, initiated a major effort to train all inventory personnel on related procedures and controls. According to AOC officials, implementation of these policies and procedures came about as part of efforts to prepare the fiscal year 2003 agencywide financial statements. Finally, as we discussed, AOC expects the auditor to report on the results of the audit, including its internal control work, in the near future. The results of the auditor’s internal control work will provide AOC with valuable information as it pursues other efforts to strengthen internal controls and enhance financial control and accountability. We reported in January 2004 that AOC plans to issue a policy statement on internal controls by September 30, 2004. In this regard, AOC has begun the process of obtaining contractor support to assist management in developing an internal control framework, a related policy statement on internal controls, and related training. To speed this effort and enhance its acceptance within the agency, AOC officials hope to model their internal control framework and related guidance after ones that are currently in use elsewhere in the legislative branch. As AOC prepares to implement an internal control framework, AOC needs to recognize that establishing appropriate and effective accounting, compliance, and operational controls, which potentially may affect all aspects of AOC operations and activities, represents a significant agencywide challenge—one that will require senior management attention and support. During the 6-month review period, AOC made progress in preparing agencywide financial statements; supporting the audit of its September 30, 2003, balance sheet; and establishing related internal control policies and procedures. 
This progress provides a valuable baseline from which to further leverage the audit process through related efforts to improve internal controls over financial reporting and institutionalize financial management best practices. In addition, as plans move forward for a full scope audit of a complete set of financial statements for fiscal year 2005, AOC needs to provide strong and visible support for these efforts. To help strengthen and sustain AOC’s emerging foundation of financial accountability and control, we recommend that the Architect of the Capitol, the Chief Operating Officer, the Chief Financial Officer, and other senior management provide strong and visible support for efforts to prepare auditable financial statements and implement an effective internal control framework by monitoring the implementation and related milestones for each effort, ensuring the commitment to and support for each effort by participating AOC units, and acting to resolve any impediments that may arise. Developing and using meaningful financial reports by major operating units and implementing effective cost accounting processes and procedures can help extend responsibility for financial accountability and control to AOC’s operating units. An important aspect of having meaningful financial information available to managers in operating units is the ability to implement an appropriate level of cost accounting processes and procedures that can provide the kind of cost information needed to effectively manage operations. Using financial information at the major operating unit level that incorporates effective cost accounting processes and procedures can be a key component of AOC’s ongoing efforts to institutionalize financial management best practices in support of budgeting, financial, and program management at AOC. AOC has made progress in developing the capacity to produce automated financial reports for its major operating units (jurisdictions). 
The financial reports, which currently consist of financial statements for each major operating unit, are developed with the same basic processes and data used to produce AOC’s agencywide financial statements. AOC officials said that operating unit financial reports have not yet been provided to managers because CFO staff need to conduct an initial review and analysis of their content and operational managers need to receive some training on the content of the reported information and how it might be useful to them. AOC officials expect to begin distributing the financial reports and providing related training to operating unit managers by the end of March 2005. AOC officials acknowledged that providing managers with financial-statement-level information for their major operating units is only an initial step in developing financial and cost-related information that managers can use to enhance their operations. AOC’s December 2003 performance plan makes provisions for a multiyear plan for establishing AOC’s cost accounting goals and objectives and identifying and implementing system and procedural changes needed to accomplish them. However, the plan calls for only limited work to be completed through the end of fiscal year 2005, with major work scheduled for fiscal year 2006 through fiscal year 2007 to identify and implement system and procedural changes needed to have a cost accounting system operational in fiscal year 2008. In explaining the limited near-term progress planned for implementing cost accounting, AOC officials noted that successfully implementing cost accounting depends on an organization’s strategic goals, objectives, and related performance measures, which tend to drive the categories of costs and how the related data should be collected and reported. 
However, AOC officials noted that a recent AOC effort to study the potential for developing performance-based budgets indicated that AOC’s current strategic and performance plans do not define either the expected level of program performance or the actual results that should be achieved. Recognizing these limitations, AOC’s recently issued Cost Accounting Feasibility Study noted that AOC staff working on strategic planning are developing an agencywide approach that will identify appropriate and consistent performance metrics across major operating units. The effort is scheduled for completion sometime in fiscal year 2005. The December 2003 performance plan indicates that the CFO staff plan to begin substantive work in fiscal year 2006 on the underlying studies and analyses needed to support recommendations on implementing cost accounting at AOC. In explaining the schedule for implementing cost accounting, AOC officials said that it made more sense to defer substantive work on implementing cost accounting until agencywide performance measures and metrics are established, especially in light of the other ongoing tasks and priorities that the CFO’s office is responsible for leading. While AOC can now generate financial reports for major operating units annually and quarterly, substantial work remains to be done to conduct an initial review and analysis of the form and content of the recently developed reports and to train operating managers on the information’s content and its potential use in managing and overseeing operating units. Also, while it may now be relatively easy and efficient to generate quarterly and annual financial reports for major operating units consisting of financial-statement-level information, it is not clear at this time how useful operating managers will find the information they contain. 
In addition, AOC does not have outcome and performance-based measures and metrics that can be used by operating managers to link financial information to outcomes and performance. As a result, substantial work remains to be done before AOC can provide managers with the meaningful financial, cost, and performance information needed to enhance their management of operating units and extend responsibility for financial accountability and control to the units. We consider AOC’s ongoing efforts to provide managers with operating unit financial information and training on the meaning and potential use of such information to be good initial steps in orienting AOC’s managers on the use of financial data to enhance operational management. However, once the managers are provided with timely financial statements for major operating units and the related training, AOC officials need to work with operating managers to assess the usefulness of the financial-statement-level information and to identify opportunities to expand or otherwise enhance the nature and type of information (e.g., detailed cost accounting information for specific projects and operating activities) made available to managers. With regard to cost accounting, AOC does not have the cost accounting processes and procedures needed to produce operation-specific cost information that can be used by managers to enhance their management of major operating units. Furthermore, AOC officials noted that they do not expect to begin substantive work on a multiyear effort to develop and implement system and procedural changes necessary to implement appropriate cost accounting at AOC until fiscal year 2006. The officials anticipate completing the needed system and procedural changes in fiscal year 2007 and having a cost accounting system operational in fiscal year 2008. 
AOC officials acknowledged that, in the interim, some opportunity exists to develop and apply selected high-level cost allocations that would allocate—over some reasonable basis—selected categories of overhead costs (e.g., the costs associated with operating functional activities such as human resources, finance, and budget) to major operating units. However, the officials also noted that the value or usefulness of such information is limited by the lack of specific cost accounting data on performance measures and metrics. The officials noted that the allocations would be, by their nature, done after the fact and operating unit managers would likely have little to no reasonable frame of reference or perspective on the level of overhead costs allocated to their operating unit or how those costs relate to their unit. While each of the reasons cited by AOC officials to support the anticipated time frame for implementing the needed cost accounting system has merit, we think it is reasonable to determine whether, prior to fiscal year 2006, substantial work can begin on the underlying studies and analyses that will be needed to identify options and develop tentative recommendations for implementing a cost accounting system at AOC. In addition, it is important for the CFO’s office to actively support and facilitate AOC’s efforts to develop organizational performance measures and metrics, which along with cost accounting information can be tracked and used to improve the operations and management of AOC’s major operating units. As we noted in our January 2003 report, it is also important for management to demonstrate its commitment to making and supporting needed changes, which include implementing operating unit financial reports and cost accounting throughout the organization. 
To enhance the successful development of useful financial, cost, and performance reporting for major operating units and appropriate cost accounting, we recommend that the Architect of the Capitol direct the Chief Operating Officer and the Chief Financial Officer to
- work with operating managers to assess the usefulness of financial-statement-level information,
- take an active role in AOC’s near-term efforts to develop agencywide performance measures,
- review all available options to determine whether substantial work can begin, prior to fiscal year 2006, on the analyses needed to identify changes necessary to implement useful cost accounting at AOC, and
- have senior management visibly demonstrate its continuing commitment to and support for making AOC-wide system, procedural, and cultural changes necessary to provide managers with timely financial, cost, and performance information by monitoring the efforts’ implementation and related milestones, ensuring the commitment to and support for the efforts by participating AOC units, and acting to resolve any impediments that may arise.

Information security is an important consideration for any organization that depends on information technology to carry out its mission. Without the proper safeguards, unauthorized access to systems can result in disclosure of sensitive information, fraud, disruption to operations, or attacks against other organizations’ sites. In our January 2003 report, we stated that effective information security management is critical to AOC’s ability to ensure the reliability, availability, and confidentiality of its information technology assets. Such AOC assets include the Computer-Aided Facilities Management system that is used to request and fulfill work orders for maintenance of the Capitol and the surrounding grounds, and the Records Management system that is used to archive architectural drawings pertaining to the U.S. Capitol, Library of Congress, Botanic Garden, and other buildings. 
We also reported that AOC took important steps to establish an effective information security program, but that much remained to be done before the agency would be in a position to effectively safeguard its information and technology assets. Accordingly, we recommended that the agency establish and implement an information security program. More specifically, we recommended that AOC (1) designate a security officer and provide him or her with the authority and resources to implement an agencywide security program, (2) develop policies for security training and awareness and provide the associated training, (3) develop and implement policies and guidance to perform risk assessments continually, (4) use the results of the risk assessments to develop and implement appropriate controls, and (5) monitor and evaluate policy and control effectiveness. In our January 2004 report, we stated that AOC laid some of the foundation for establishing an effective security program, such as designating an information security officer, giving this official the authority and responsibility to implement an agencywide security program, and beginning to draft information security policies. We also reported that AOC partially allocated the resources needed to begin to implement its security program, although more work remained to define and then execute the program. For example, we reported that AOC needed to follow through on its stated commitments to provide needed program resources, finalize its security policies, define processes for implementing the policies, and implement them. Since then, AOC has continued to make progress in implementing our recommendations, but important work remains in five basic areas of information security management. 
First, AOC has contracted to conduct security operations, risk management, policy assessment, and metrics activities, but it still needs to provide resources to its security program, including hiring two security specialists to conduct system risk assessments. Second, AOC has developed security training and awareness policies and began implementing them, but it still needs to provide the training and awareness to all employees who use information technology assets. Third, AOC has developed policies and guidance to perform system risk assessments and conducted risk assessments on agency mission-critical support systems, but it still needs to complete assessments on 4 mission-critical major applications and 34 other agency systems. Fourth, AOC has developed guidance for resolving identified risks and begun implementing it on those support systems that it has assessed, but it still needs to develop and implement controls to address any risks that may be identified by the yet-to-be-completed assessments. Fifth, AOC has defined a metrics policy and a plan for monitoring and evaluating the effectiveness of its controls and begun measuring its support systems controls, and it plans to complete initial data gathering on defined metrics by December 2004 and report on them by March 2005. However, the agency still needs to do the same for any controls implemented as a result of the yet-to-be-completed risk assessments. AOC plans to complete work in most of these areas over the next 8 months. 
Specifically, it plans to (1) expedite the modification of an existing contract to hire two security specialists by August 2, 2004; (2) complete security awareness activities for all AOC employees between July and the end of November 2004, and develop role-based security training and begin implementing it in fiscal year 2006; (3) complete the risk assessments on its mission-critical major applications by September 30, 2004; (4) subsequently develop and implement controls to address any risks identified by the yet-to-be-completed assessments; and (5) complete the initial data gathering on security policy and control metrics by December 2004 and issue its first report on their effectiveness by March 2005. As we reported in January 2003, successfully completing these plans depends in large part on the commitment and leadership of AOC senior managers. Such commitment and leadership will require the timely allocation and application of needed resources and close oversight of plan execution. Without such support and the resultant improvements to AOC’s information security management capabilities, the agency will be challenged in its ability to effectively safeguard its data and information assets. Worker safety has been a long-standing concern at AOC because it has had higher injury and illness rates than many other federal agencies. As we stated in our January 2003 and January 2004 reports, identifying, developing, and implementing performance measures is important for holding employees and management accountable, evaluating the effectiveness of the safety training curriculum, and reducing workers’ compensation injuries and costs. These performance measures are an important link between the achievement of AOC’s safety plan goals and the organization’s strategic goals. 
Moreover, meaningful, transparent, and timely performance measures are critical to worker safety efforts because they help organizations gather feedback on performance, evaluate the effectiveness of policies, and make worker safety the cultural norm. In our January 2003 report, we made three recommendations that relate to AOC developing performance measures to track worker safety across the organization:
- Identify performance measures for safety goals and objectives, including measures for how AOC will implement the 43 specialized safety programs and how superintendents and employees will be held accountable for achieving results.
- Establish a safety training curriculum that fully supports all of the goals of the safety program and further evaluate the effectiveness of the training provided.
- Establish a senior management work group that will routinely discuss workers’ compensation cases and costs and develop strategies to reduce these injuries and costs.

In our January 2004 report, we noted that AOC was making progress in addressing all three of these recommendations. Over the 6 months we reviewed, AOC made progress in developing performance measures to track the agency’s worker safety efforts, but the implementation of these measures is a work in progress. AOC developed several broad performance measures to judge the success of its safety and health program. First, AOC developed a measure of the number and severity of safety and health deficiencies that exist at AOC (a baseline assessment was completed). Second, AOC developed a measure to assess employees’ attitudes and beliefs about safety within their organization—a gauge of employees’ perceptions of safety at AOC. The initial survey was administered between December 2003 and January 2004. In February 2004, AOC completed analysis of the survey responses and developed recommendations to improve worker perceptions about safety. 
Finally, AOC uses the rate at which employees suffer a job-related injury and illness to monitor workplace safety as established by the Department of Labor’s Occupational Safety and Health Administration (OSHA). AOC discussed its injury and illness rate in public testimony to demonstrate its commitment to a safe and healthy work environment. AOC is also developing specific performance measures related to worker safety. First, AOC senior management, through quarterly council meetings known as the Safety, Health and Environmental Council (SHEC), has developed measures to assess and control workers’ compensation costs including (1) the number and severity of injuries and illnesses, (2) the number and cost of workers’ compensation injuries and illnesses, (3) the number of lost production days associated with workers’ compensation cases, and (4) the number of modified work assignments. In addition, SHEC has developed several tools aimed at raising employee awareness of safety and the link between safety and workers’ compensation. Second, AOC has developed performance measures for several of the 34 safety policies. The purpose of the safety policies is to establish consistent requirements for AOC with agencies such as OSHA and the Environmental Protection Agency. These safety policy measures include (1) the number of safety inspections and audits performed, (2) the number of safety deficiencies, and (3) the number of employees trained. Finally, AOC has continued to assess training performance by asking participants to evaluate training courses; having subject matter experts from the Safety, Fire, and Environmental Program Office audit the courses; and soliciting informal feedback from participants’ supervisors. Moreover, AOC has begun to notice that safety training participants are applying lessons learned from training. For example, AOC officials reported that training on blood-borne pathogens resulted in employees applying the principles learned during a recent event. 
In addition, AOC plans to assess employee knowledge and behavior to ensure compliance with the policy, including interviewing employees on the requirements of a jurisdictional standard operating procedure. However, AOC has made little progress on developing tools necessary for more complex assessments of its training and development efforts, such as measuring their impact on overall program or organizational results, or comparing the benefits of training efforts against their costs. We have previously reported on the potential value—and challenges—associated with these more complex approaches of assessing training and development. When deciding on an evaluation strategy, agencies (such as AOC) should select an appropriate analytical approach that will best measure the effect of their programs while also considering what is realistic and reasonable given the broader context and fiscal constraints. AOC has made progress in its safety performance measures. However, AOC can strengthen its efforts to evaluate workplace safety. First, while AOC’s measure of the number and severity of safety and health deficiencies is a positive step, time frames to correct these deficiencies still need to be established. Although AOC has plans to develop a measure of the timeline in which hazards and deficiencies are corrected, it has yet to complete this measure. Second, while the safety perception survey provided information on how employees perceive safety in their work environment, the full potential of this tool has not been realized. AOC officials stated that the survey should not be considered a measure of safety performance and have no plans to administer the survey on a recurring basis. However, the survey is a valuable performance measure of employees’ perceptions about workplace safety and could prove useful if conducted in the future. For any future use, AOC would need to first address design and implementation weaknesses. 
For example, pretests of the survey were not conducted, survey instructions were poorly worded, the questions allowed for a biased response, and the design did not allow for a non-response analysis. Moreover, given the low response rate of frontline employees, AOC should be hesitant to represent findings as reflective of the employee population. AOC has fulfilled our third worker safety recommendation listed above by developing performance measures to assess the long-term impacts and trends of workers’ compensation injuries and costs. Through SHEC, safety officials working with HRMD are coordinating an exchange of information and data in order to control workers’ compensation injuries and costs. HRMD, through its Workers’ Compensation Program Unit, gathers work-related injury and illness data. In addition, the increased emphasis on safety and its relationship to workers’ compensation injuries and illnesses is being promoted at SHEC meetings. To enhance worker safety performance measures at AOC, we recommend that the Architect of the Capitol direct the Chief Operating Officer to expand upon its safety perception survey by developing a more rigorous methodological approach and collecting such information on a more regular basis. Developing a Capitol complex master plan is critical to strategic project management because it would help facilitate consistent management and oversight of projects and establish long-term priorities. A key component of a master plan is conducting facility condition assessments (FCA), which are systematic evaluations of an organization’s capital assets. 
Such evaluations would help AOC to “compare conditions between facilities; provide accurate and supportable information for planning and justifying budgets; facilitate the establishment of funding priorities; and develop budget and funding analyses and strategies.” FCAs also help to assure that the Capitol complex’s preventative maintenance needs are fully documented and provide data for an overall plan with which to address those needs. Further, a Capitol complex master plan could help guide day-to-day prioritization by being the basis for communicating, both internally and externally, the trade-offs that result from prioritizing one project over another, or how individual projects fit within a broader AOC framework. In our January 2003 report, we identified two recommendations that would help AOC facilitate consistent management and oversight of projects and establish long-term priorities:
- Develop a Capitol complex master plan and complete condition assessments of all buildings and facilities under the jurisdiction of AOC.
- Develop a process for assigning project priorities that is based on clearly defined, well-documented, consistently applied, and transparent criteria.

In our January 2004 report, we noted that AOC was making progress initiating the Capitol complex master planning process, although the expected completion date had already been pushed back 8 months to December 2006. FCAs for the three largest jurisdictions were also behind the original schedule because all the contracts were not to be awarded until December 2003. AOC was also making progress creating a clearly defined, well-documented, and transparent process for evaluating and prioritizing projects by developing criteria for managers to score projects across five rating areas—preservation, impact on mission, economic impact, safety, and security. During the 6-month review period, AOC took steps to develop the Capitol complex master plan. 
For example, senior AOC officials reported that the contract for the Capitol complex master plan would be awarded in August 2004. These officials also stated that work has been initiated on a facilities plan for the House office buildings, which will be incorporated into the Capitol complex master plan. FCAs for the three largest jurisdictions—the House, Capitol, and Senate—are under way and scheduled to be completed in October 2004. While AOC’s fiscal year 2004 annual performance plan established November 2005 as the target to publish a draft Capitol complex master plan, and December 2006 as the target to publish the final version of the plan, senior AOC officials reported that they now intend to publish a draft of the Capitol complex master plan in February 2006, with the final version published in June 2007. With respect to project prioritization, AOC reported that the process of scoring projects in 2004 went smoothly. Specifically, agency officials noted that there were very few scoring discrepancies between jurisdictional superintendents and senior AOC officials. AOC officials also noted that the scoring process will be used to determine what projects will be submitted for funding in the fiscal year 2006 budget. While AOC has taken steps to develop the Capitol complex master plan, AOC officials noted that 12–16 months were added to incorporate comments and finalize the plan. This is the second time the target completion dates have been pushed back. In addition, AOC officials need to involve their stakeholders early and throughout the Capitol complex master planning process. 
Given the importance and sensitivity of the master plan and the condition of the Capitol complex, as well as the difficult trade-offs that the current budget environment demands, congressional and other stakeholder involvement early and throughout the development of the master plan is key to its ultimate acceptance and value, which did not occur during the development of a similar plan in 1981. Senior agency officials reported that AOC intends to define the scope of work for the remaining jurisdictions after they complete lessons learned from the first round of FCAs to identify areas that may improve the effectiveness and efficiency of the process. This is an appropriate step if it does not delay the start of future FCAs. Furthermore, completion of the FCAs for the remaining jurisdictions will depend on when funding is received. While FCAs are a key component of the master plan, and ultimately need to be integrated into the plan, AOC’s master planning efforts can begin before FCAs are completed. Further, once the FCAs are completed it is critical that they are updated regularly. With regard to project prioritization, AOC has created a clearly-defined, well-documented, and transparent process for evaluating and prioritizing projects. The evaluation criteria will be used to determine which projects will be submitted for funding in the fiscal year 2006 budget cycle. In addition, although the project prioritization process is a useful tool internally, the process does not address the underlying need to inform and get agreement from congressional and other stakeholders on how and why AOC submits specific projects for funding. In order to improve Capitol complex master planning efforts, we recommend that the Architect of the Capitol, with support from the Chief Operating Officer, lead efforts to ensure that congressional and other stakeholders are engaged early and throughout the development of the Capitol complex master plan. 
In order to improve the process for prioritizing projects, we recommend that the Architect of the Capitol, with support from the Chief Operating Officer, lead efforts to ensure that AOC informs and obtains agreement from congressional and other stakeholders on how and why specific projects are submitted for funding. It is estimated that recycling 1 ton of paper saves 17 mature trees, 3.3 cubic yards of landfill space, 7,000 gallons of water, 380 gallons of oil, 4,100 kilowatt hours of energy, and 60 pounds of air pollutants. Over 12,000 tons of waste is created annually within the Capitol Hill complex. Much of the waste generated is from office, construction, and maintenance activities and includes such materials as paper, wood, plastic, and metal. AOC is responsible for implementing recycling programs for much of the Capitol Hill complex and has taken steps both centrally and at the jurisdiction level to improve the overall effectiveness of its recycling programs. In our January 2003 report, we recommended that AOC take a more strategic approach to improve the effectiveness of its recycling programs. Specifically, we recommended that AOC develop a clear mission and goals for AOC’s recycling programs with input from key congressional stakeholders as part of its proposed environmental program plan. We further recommended that AOC establish reasonable goals based on the total waste stream that could potentially be recycled—information it plans to obtain as part of its long-term environmental program plan. In our January 2004 report, we noted that AOC had begun taking the first steps toward a more strategic approach for its recycling programs. In accordance with our recommendation, and as part of its broader environmental program plan, AOC began collecting information on its facilities and operations through a baseline assessment and waste stream analysis. 
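The per-ton savings figures cited above can be scaled to the complex's reported annual waste stream with simple arithmetic. The sketch below illustrates that calculation; the 25 percent paper fraction is purely a hypothetical assumption for illustration and is not a figure from this report.

```python
# Scale the report's cited per-ton paper-recycling savings to an annual
# waste tonnage. The paper_fraction argument is a hypothetical share of
# the waste stream assumed to be recyclable paper (not from the report).

SAVINGS_PER_TON = {
    "mature trees": 17,
    "landfill space (cubic yards)": 3.3,
    "water (gallons)": 7_000,
    "oil (gallons)": 380,
    "energy (kWh)": 4_100,
    "air pollutants (pounds)": 60,
}

def projected_savings(total_waste_tons: float, paper_fraction: float) -> dict:
    """Savings if paper_fraction of the waste stream were recycled paper."""
    paper_tons = total_waste_tons * paper_fraction
    return {resource: rate * paper_tons
            for resource, rate in SAVINGS_PER_TON.items()}

# Example: 12,000 tons of annual waste, assuming (hypothetically) that a
# quarter of it is recyclable paper.
for resource, amount in projected_savings(12_000, 0.25).items():
    print(f"{resource}: {amount:,.0f}")
```

Even under a modest assumed paper fraction, the scaled figures suggest why the report treats waste stream analysis as the basis for setting recycling goals.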
The baseline assessment evaluated the compliance of AOC facilities and operations with federal, state, and local environmental regulations; the waste stream analysis identified the types of waste created at AOC facilities and possible pollution prevention opportunities, such as waste elimination, reuse, or recycling. AOC will clarify the mission, goals, and measures of its recycling programs as a component of pollution prevention. According to AOC officials, the results of the assessment and analysis will provide a basis for establishing program priorities and measuring future progress. We further noted in our January 2004 report that AOC was planning to obtain stakeholder input on the environmental program plan beginning in the second quarter of fiscal year 2004, after completion of the baseline assessment and waste stream analysis. Over the 6 months we reviewed, AOC made progress in the development of its environmental program plan and its movement toward a more strategic approach. In particular, AOC has completed the baseline assessment as well as the waste stream analysis for its facilities and operations. Moreover, AOC is expanding its waste stream analysis—which currently covers office, construction, and maintenance waste—to include electronic waste (e-waste), such as outdated computer equipment. AOC is also developing pollution prevention plans based on the results of the waste stream analysis. In May 2004, AOC received input from internal stakeholders on the draft environmental plan and expects to complete the environmental plan in October 2004 as well as receive congressional input early next year. AOC has made progress toward developing a mission and goals for its recycling programs in accordance with our January 2003 recommendation. 
The completion of the baseline assessment and waste stream analysis are good first steps toward developing a more comprehensive environmental program plan and should provide a sound basis for establishing program priorities and measuring future progress. The results from these efforts should also help AOC develop targeted pollution prevention plans. Finally, the input received from internal stakeholders on the environmental plan, as well as the expected input from congressional stakeholders early next year, should prove to be invaluable in keeping AOC moving forward strategically. It is critical, however, that at least preliminary congressional input be obtained prior to the environmental plan’s completion to help ensure that the plan is consistent with the interests and expectations of congressional stakeholders and that AOC’s efforts and resources are targeted at the highest priorities. To further assist AOC in developing a more strategic approach for its recycling programs and to ensure that congressional input is obtained when it would be most useful, we recommend that the Architect of the Capitol direct the Chief Operating Officer to obtain preliminary input from congressional stakeholders on its environmental program plan— particularly as the plan relates to the mission and goals of AOC’s recycling programs—prior to the completion of the plan. One of the most important issues raised in our January 2003 report was the need for Congress to create a Chief Operating Officer (COO) position to serve as the central leadership point to improve AOC’s executive decision- making capacity and accountability. The Consolidated Appropriations Resolution, 2003 (Public Law 108-7) established, in section 1203 of Division H, the new position of Deputy Architect/Chief Operating Officer within the Office of the Architect of the Capitol. 
Subsection 1203(e) required that the COO prepare an action plan describing “the policies, procedures, and actions the [Chief Operating Officer] will implement and time frames for carrying out the responsibilities under this section.” The responsibilities described include implementing AOC’s mission and goals, providing overall organization management, assisting the Architect in promoting reform, and measuring results. The action plan was to be submitted to the Committees on Appropriations of the Senate and House of Representatives and the Committee on Rules and Administration of the Senate not later than 90 days after appointment of the COO, which occurred on July 28, 2003. The COO Action Plan, however, was not submitted to the committees until December 22, 2003—59 days late. The COO Action Plan and the Report to the Congress from Deputy Architect/Chief Operating Officer provide a list of 31 action items and their expected completion dates across six business areas:
1. Organizational Management and Structure – 12 action items,
2. Project Management – 6 action items,
3. Customer Service – 3 action items,
4. Strategic Planning – 5 action items,
5. Communications – 3 action items, and
6. GAO Management Review Recommendations – 2 action items.

Overall, the plan’s high-level description of the action items assumes that Congress and other users have a deep and detailed knowledge of AOC’s goals, internal operations, and management functions—a level of knowledge that is not reasonable to expect. The plan’s action items are described at such a high level that it does not make clear how the COO would carry out his legislated responsibilities or help lead transformational change at AOC. For example, the legislation states that the COO is responsible for proposing organizational changes and staffing needed to carry out AOC’s mission and goals. 
While the COO’s report expresses the need for organizational changes and highlights expected improvements, it does not describe what specific changes the COO envisions or how changes would be accomplished. However, AOC included proposed organizational changes in its fiscal year 2005 budget justification. The House Appropriations Committee did not approve those changes because, in the committee’s view, the AOC proposal does not reflect Congress’ intent to assign the COO the responsibility for AOC’s overall direction, operation, and management to improve AOC’s performance. The COO action plan also does not detail how the individual action items would be accomplished or how performance would be measured. For example, the first item listed in the action plan is to “review/update AOC’s organizational structure to better align with the strategic plan and operational mandates,” yet the plan lacks necessary details, including how such a review would be conducted, the time frame in which it would be completed, who would be involved in the review, or how progress would be measured. While the plan does list action items according to broad subject areas, for example “organizational management and structure,” the plan does not prioritize items that appear to be related, nor does it identify the required resources or organizational units that are being delegated those action items. The plan could also better communicate whether the action items are standalone or dependent upon each other to accomplish the COO’s responsibilities. Finally, even though the legislation required that the COO action plan be developed concurrently and consistently with the strategic plan, the plan did not include a direct crosswalk to the AOC strategic plan, which was released on December 15, 2003, nor did it provide a clear picture of how the action items will help accomplish the agency’s mission and goals. 
To enhance the usefulness of the COO action plan, we recommend the Architect of the Capitol and the Chief Operating Officer consult with members of Congress and key committees on the specific information regarding AOC’s plans, policies, procedures, actions, and proposed organizational changes. As part of this effort, the Architect and the COO should work with Congress to determine Congress’ information needs and the timing and format of delivery of that information that will best meet Congress’ needs. Furthermore, consistent with our findings and recommendations with respect to congressional and other stakeholder involvement in general and the Capitol complex master plan in particular, as well as our original January 2003 management review, specific emphasis should be placed on AOC’s project management. Particular issues to be discussed could include how AOC’s project priorities are determined, how AOC monitors and controls project cost, quality, and timeliness, how AOC uses lessons learned from projects and seeks to incorporate best practices, how project management accountability is assigned and managed, and how AOC determines the best mix of in-house and contractor support when designing projects. Subsequent COO action plans and status reports will likely be most helpful to Congress to the extent that they are rigorously specific as to the problem or issue that needs to be addressed, the actions that are being taken in response, the progress to date, and milestones for additional actions. As we noted in our two previous reviews, organizational transformation does not come quickly or easily and the changes under way at AOC require a long-term concerted effort. 
AOC has made progress in addressing the eight key management control issues and the corresponding recommendations outlined in this report; however, AOC management will need to build on its efforts to date and more fully engage congressional and other stakeholders to ensure that their interests and expectations are incorporated into AOC’s organizational transformation. For example, involving stakeholders in the development of a comprehensive strategy to improve internal and external communications, the formulation of a Capitol complex master plan, and the establishment of a recycling mission and goals will be critical in successfully addressing these key issues. As AOC works to establish its strategic management and accountability framework and address long-standing areas of concern, it must continue to demonstrate progress on each of these eight key issues to help it sustain the momentum needed to accomplish its organizational transformation, particularly in engaging its congressional and other stakeholders. We provided the Architect of the Capitol a draft of this report on July 26, 2004, for review and comment. We received written comments from the Architect on August 13, 2004, and they are reprinted in appendix I. In his comments, the Architect generally agreed with our findings and conclusions. He suggested technical changes and provided additional information related to information security, safety performance measures, and Capitol complex master planning that were incorporated into our report where appropriate. The Architect also noted his agreement with each new recommendation, except for those regarding worker safety performance measures, Capitol complex master planning, and the process for prioritizing projects. Regarding worker safety performance measures, we reported that AOC’s safety perception survey had design and implementation weaknesses and, therefore, we recommended a more rigorous methodological approach to the survey. 
In response, the Architect stated that AOC found its employee response rate to be 68 percent. While this rate is approaching the 70 percent cut-off that is considered minimally acceptable for this type of survey, only 49 percent of the subgroup of frontline employees (e.g., carpenters, plumbers, and custodial workers) returned a completed survey, according to AOC’s summary report. As frontline employees are most at risk of work-related injuries, their low response rate makes it difficult for AOC to draw meaningful conclusions about these employees’ attitudes and beliefs towards safety. The Architect also noted that the survey used a number of benchmark questions that have previously been used in other surveys. Nonetheless, AOC’s lack of a pre-test of the entire instrument does not give AOC assurance that employees interpreted the questions in the manner AOC had expected. In fact, officials in one AOC jurisdiction were concerned that some questions could be misinterpreted and therefore had employees complete their individual surveys in a group-like setting. Thus, we believe that AOC’s safety perception survey would still benefit from a more rigorous methodological approach. In our draft report, we also recommended that the Architect direct the COO to improve Capitol complex master planning efforts and the process to prioritize projects by (1) ensuring that congressional and other stakeholders are engaged early and throughout the development of the Capitol complex master plan, and (2) ensuring that AOC informs and obtains agreement from congressional and other stakeholders on how and why specific projects are submitted for funding. According to the Architect, he has led and will continue to lead the Capitol complex master plan initiative. In addition, the Architect noted his plan to bring an Architect/Engineer (AE) on board so that they can jointly meet with and ensure that stakeholders are engaged at the beginning and throughout the development of the master plan. 
We agree that the Architect’s personal involvement in the development of the Capitol complex master plan will be critical to its success and the Architect’s commitment to engage stakeholders is consistent with our recommendation. However, because the COO position was created to serve as the central leadership point to improve AOC’s executive decision-making capacity and accountability, the COO should also be involved in the master planning process, project prioritization, and communication with stakeholders. As such, we made revisions to the two recommendations to address the Architect’s concerns so that the Architect, with support from the COO, leads efforts to implement the recommendations. In addition, the Architect questioned the direct link between the master plan and the prioritization process. We continue to believe that the Capitol complex master plan and project prioritization should be linked because a master plan could help guide day-to-day prioritization by being the basis for communicating, both internally and externally, the trade-offs that result from prioritizing one project over another, or how individual projects fit within a broader AOC framework. We made revisions to the two recommendations to provide greater clarity by addressing the master plan and the prioritization process separately. We are sending copies of this report to interested congressional parties. We are also sending a copy to the Architect of the Capitol. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have general questions concerning this report, or specific questions concerning strategic management or human capital issues, please contact J. Christopher Mihm or Steven Lozano at (202) 512-6806 or by e-mail at [email protected] or [email protected]. 
In addition, if you have specific questions concerning financial management issues, please contact Jeanette Franzel or John Reilly at (202) 512-9471 or by e-mail at [email protected] or [email protected]. If you have specific questions concerning information technology issues, please contact Randolph Hite or Carl Higginbotham at (202) 512-3439 or by e-mail at [email protected] or [email protected]. Key contributors to this report are listed in appendix II. Individuals making key contributions to this report included Kevin J. Conway, Jeffery Bass, Thomas Beall, Justin Booth, Terrell Dorn, Andrew Edelson, Steven Elstein, Brett Fallavollita, Joel Grossman, Carl Higginbotham, Dan Hoy, John Johnson, Steven Lozano, Mamesho Macaulay, Jeff McDermott, David Merrill, John Reilly, Kris Trueblood, Carl Urie, Michael Volpe, and Daniel Wexler.
|
The Conference Report on the Consolidated Appropriations Resolution, 2003, directed GAO to monitor AOC's progress in implementing recommendations contained in GAO's management review of AOC's operations, issued in January 2003. This is the second status report in which GAO examines the actions taken by AOC to implement selected GAO recommendations. Additionally, the Consolidated Appropriations Resolution, 2003, mandated GAO to assess AOC's Chief Operating Officer's (COO) action plan. This report provides that assessment. AOC has made progress on key management control issues, but substantial work remains to achieve sustained, long-term management improvements and organizational transformation. These key issues include (1) stakeholder involvement, (2) employee communications, (3) auditable financial statements and related internal controls, (4) financial reporting for operating units and cost accounting, (5) information security management, (6) worker safety performance measures, (7) Capitol complex master planning, and (8) strategic management of recycling. For example, AOC has not fully engaged its congressional and other stakeholders in developing a clear, transparent, and documented understanding of how AOC sets project priorities and how progress will be assessed. AOC has taken some steps to involve its stakeholders by delivering planning documents and responding to requests for information. AOC has made progress addressing employee communications issues and can maintain momentum by fully and effectively implementing its planned initiatives. AOC has made progress in preparing auditable agencywide financial statements; however, it has deferred the audit of a complete set of financial statements from fiscal year 2004 to fiscal year 2005. Also, substantial work remains before AOC can provide its managers with the meaningful financial, cost, and performance information needed to enhance their management of operating units. 
AOC has continued to make some progress establishing the management foundation for effective information security management, but much remains to be accomplished, such as completing system risk assessments and monitoring and evaluating its security policies and controls. Additionally, AOC has developed performance measures to track worker safety, but work remains to ensure successful implementation of these measures. In regard to project management, AOC has taken steps to develop a Capitol complex master plan and expects it to be available for stakeholder comment in February 2006. Given the importance of the master plan, stakeholder involvement early in and throughout its development is key to the plan's ultimate acceptance and value. Similarly, AOC has made progress developing a mission statement and goals for its recycling program as part of its broader Environmental Program Plan, although AOC does not expect to obtain congressional input until after the plan has been completed--an important omission. The Architect and the COO need to work with Congress to determine Congress' information needs--with a specific focus on AOC's project management--and the timing and format of delivery of that information that will best meet Congress' needs. The COO Action Plan was submitted to Congress on December 22, 2003--59 days late. Overall, the plan's high-level description of action items assumes that Congress and other users have a deep and detailed knowledge of AOC's goals, internal operations, and management functions--a level of knowledge that is not reasonable to expect.
|
In 1991, DOD consolidated debt management within DFAS. At DFAS Columbus, two offices are involved in collecting contractor debts owed to the government. The Accounts Receivable Branch manages all newly identified debts. In managing new debts, the Accounts Receivable Branch is to use DOD’s policies and procedures that detail the requirements for an initial demand letter to the contractor, followed by efforts to offset the debt against amounts DOD owes the contractor, followed by a second demand letter. The procedures specify that any debt of $600 or greater that has not been resolved after two demand letters is to be transferred to the Debt Management Office. The Office manages debts in excess of $600 owed to DOD by unresponsive contractors and debts owed by contractors that agreed to repay the amounts owed in installments. After the transfer from Accounts Receivable, the Debt Management Office is to review the file to ensure that the debt is valid and adequately supported, and that the file does not contain an installment request, deferment request, or bankruptcy notification. If the debt is erroneous or without clear legal merit, the Office can terminate collection action. After determining that the debt is valid and should be collected, the Office is to issue a third and final demand letter to give the contractor a final opportunity to settle the debt. The Debt Management Office is to refer debts to the Defense Criminal Investigative Service if illegal activities by the contractor could be involved. Also, the Office is to undertake other collection actions, such as transferring debts to collection agencies, adding a contractor’s name and amount owed to the List of Contractors Indebted to the United States, or referring debts to the U.S. Treasury’s centralized debt collection programs. The U.S. 
Treasury’s centralized debt collection programs, the Treasury Offset Program (TOP) and Cross-Servicing Program, were developed to assist agencies in collecting delinquent nontax debt owed to the federal government. TOP is a governmentwide debt matching and payment offset program that uses certain of the Treasury Financial Management Service’s payment data to collect delinquent nontax debts. It uses a centralized delinquent debtor database to match specific delinquent debts against certain payments to be made by the government. The Cross-Servicing Program, like TOP, was developed in response to the Debt Collection Improvement Act of 1996. Under this program, agencies are to identify and refer eligible delinquent nontax debt to Treasury’s Financial Management Service for collection. The program includes the use of demand letters, credit bureaus, private collection agencies, as well as referrals to TOP. After appropriate analyses and collection action, DFAS Columbus can terminate a debt of $100,000 or less if the Debt Management Office is unable to collect any substantial amount, unable to locate the debtor, or determines that cost will exceed recovery. Debts of over $100,000 must be submitted to the Department of Justice, which will determine whether the collection activity should be terminated. According to DOD and Debt Management Office policies and procedures, when it receives a debt, the Debt Management Office is to analyze the debt file to ensure that the debt is valid and that collection efforts should proceed. The Accounts Receivable Branch should have established the debt file before the first demand letter. The file is to contain documentation supporting the debt, such as copies of contract vouchers related to the debt, amounts and dates of collections received, and all demand letters and other correspondence with the debtor. For duplicate payments, the file should include copies of the negotiated checks. 
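The transfer, termination, and referral thresholds described above can be expressed as a simple routing rule. The sketch below is purely illustrative: the function name, parameters, and return labels are our assumptions for exposition, not part of any actual DFAS system.

```python
# Hypothetical sketch of the debt-routing thresholds described in the report.
# All names are illustrative; this is not DFAS's actual system.

def route_debt(amount: float, demand_letters_sent: int,
               collectible: bool) -> str:
    """Apply the transfer, termination, and referral rules in order."""
    # Debts of $600 or greater unresolved after two demand letters move
    # from the Accounts Receivable Branch to the Debt Management Office.
    if amount >= 600 and demand_letters_sent >= 2:
        if not collectible:
            # After appropriate analyses and collection action, DFAS
            # Columbus can terminate a debt of $100,000 or less; debts
            # over $100,000 must be submitted to the Department of
            # Justice, which decides whether to terminate collection.
            return "terminate" if amount <= 100_000 else "refer to Justice"
        return "Debt Management Office"
    return "Accounts Receivable Branch"
```

For example, an uncollectible $250,000 debt would route to the Department of Justice, while an uncollectible $50,000 debt could be terminated by DFAS Columbus itself.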
The procedures also specify that any documentation that supports the debt should be included with the demand letter. We found that the debt files did not contain required documentation. Copies of checks issued or disbursements records were not in the files and were not included with demand letters. Without required documentation, the Office could spend time and effort on pursuing debts that are not valid and have difficulty in collecting debt if documents are not available to convince contractors of the debts’ validity. In our detailed review and follow-up of 10 cases, we found invalid debts as well as valid debts that were not being collected because needed documents were not used to substantiate the debt. We identified two debts, recorded at $38,557, that were not valid. In one case, a debt of $17,339 was being pursued as a duplicate payment because two invoices for similar services and amounts were paid a few days apart. However, after we noted that the invoices had different shipment numbers, both invoices were determined to be valid and the debt was canceled. In the other case, the contractor had previously reimbursed the government, but the reimbursement had not been correctly recorded against the debt. For a third debt case of $47,539, the U.S. Attorney’s office declined to pursue collection because the contractor was no longer in business and the debt resulted from questioned costs during a contract audit rather than actual overpayments. For five of the remaining seven cases, the contractors reimbursed the government for more than $103,897 after we obtained and provided documentation to the contractors that established the validity of the debts. For example, the files for an overpayment of $30,788 to a contractor did not contain copies of negotiated checks and payment vouchers. 
After we obtained and provided supporting documents to the contractor’s representative, the representative was able to trace through contract records and substantiate that the overpayment did occur. The contractor paid $32,259, which was the original debt plus accrued interest and administrative fees. The problem of inadequate documentation in debt files is not new. A DOD Inspector General report in 1995 noted that some files did not have adequate documentation. More recently, in 2000, the DFAS Columbus Office of Internal Review noted significant documentation deficiencies in the Debt Management Office’s debt files. The Debt Management Office, after validating the debt, is to send a third and final demand letter to the debtor. DOD policies and procedures state that the second demand letter is to be sent when the due date specified in the first demand letter (30 days) passes. A time frame for sending the third demand letter is not specified; however, the policies specify that collection of debts owed by contractors be accomplished expeditiously. The Debt Collection Improvement Act of 1996 requires agencies to refer for debt collection all eligible nontax debts over 180 days delinquent to the Secretary of the Treasury or a Treasury-designated debt collection center. The Debt Management Office is not promptly sending the third demand letter. In the previously cited DFAS Columbus Office of Internal Review report, a sample of 15 cases showed that third demand letters were not processed promptly after the debts were entered into the Debt Management Office information system. Of the 15 debts reviewed, no letters were issued within 30 days; six letters were issued in less than 100 days; six were issued from 100 to 200 days; and three were issued at 691, 835, and 1,147 days. The previously mentioned DOD Inspector General’s report also identified serious problems with the timeliness of the third demand letter. 
For the 115 debts in its sample for which third demand letters should have been sent, the Inspector General determined that 51 letters were sent late and 20 letters were not sent. Federal regulations provide that each federal agency take timely and aggressive actions to collect all claims. DOD policies and procedures state that it is essential that the amounts contractors owe to DOD be ascertained promptly and that collection be accomplished expeditiously. Based on the cases we examined and discussions with DFAS officials, we found that the debt collection process DFAS uses is passive—that is, it essentially relies on sending demand letters. It does not involve establishing a dialogue with contractor officials and DOD contract officers to identify and resolve issues related to the debts. Neither DOD policies and procedures nor Debt Management Office procedures discuss proactively pursuing debt collection by establishing communication with appropriate officials. Based on the cases that we investigated, we believe that improved communication between DFAS personnel, contractor officials, and DOD contract officers is essential to resolving debts promptly and efficiently. As previously mentioned, we facilitated debt repayment in several cases by explaining to contractor officials how the debts were incurred and the relation of the debts to outstanding invoices and payments. On one case that resulted from duplicate payments on an invoice, $16,645 was collected after we contacted the administrative contracting officer, DFAS Columbus collection personnel, and contractor officials. We provided proof of the duplicate payment to the contractor and explained how the debt was incurred. Because the contractor believed that the debt should be offset against two other invoices, we examined the status of those invoices and explained to the contractor why an offset was not possible. 
Prior to our involvement, the contractor had been sent three demand letters over a period of 4 months, but no other contact had been made with the contractor about the debt. The Debt Collection Improvement Act of 1996 seeks to maximize collections of nontax delinquent debts owed to the government by ensuring quick actions and employing all appropriate collection tools, such as private sector professional collection agencies. When debts become delinquent, DFAS Columbus attempts to collect the debts through administrative offsets against other DOD payments to be made to contractors. Also, by adding a contractor’s name to the List of Contractors Indebted to the United States, DFAS in essence notifies other federal agencies of a contractor’s indebtedness so that the agencies can withhold payments and send the amount owed to DOD. If warranted because of potential illegal contractor activities, the debt can be referred to the Department of Justice or to the Defense Criminal Investigative Service. We identified two limitations to the administrative offset process being used at DFAS Columbus. First, only payments authorized for disbursement from one DOD payment system were being considered for offset, although a contractor could be receiving payments through other DOD systems. Second, offsets were being taken against only one contractor identification code, even though a contractor can have multiple identification codes to identify specific facilities/locations. After we brought these limitations to the attention of DFAS Columbus officials, they said that changes were made in January 2001 to permit offsets against (1) other payment systems and (2) all payments to a contractor, even if different identification codes are involved. These changes, if effectively implemented, should improve the effectiveness of debt collection by DFAS Columbus. We also found that the Debt Management Office was not effectively and fully utilizing the U.S. 
Treasury’s Cross-Servicing Program and TOP. The Cross-Servicing Program, which includes the use of demand letters, credit bureaus, referral to TOP, and private collection agencies to collect delinquent debts, was not being utilized. According to the previously cited DFAS Columbus Internal Review report, an official said that the Office was exempt from the cross-servicing requirement of the Debt Collection Improvement Act of 1996 because it was using TOP. According to the law and regulations, the referral of debt to TOP uses one of the Treasury’s debt collection tools, but it does not satisfy cross-servicing requirements. However, referral of a debt to Treasury for cross-servicing satisfies the TOP requirement since the Cross-Servicing Program includes the use of TOP. DFAS headquarters officials agreed that the Office was not exempt from the requirements of the Treasury Cross-Servicing Program. While the Office was referring debts to TOP, the Office of Internal Review’s report noted that referrals were not timely. Timely referrals are important because the likelihood of collecting delinquent debt diminishes as the debt ages. For 15 sample cases cited in that report, the number of days from the final demand letter to referral to the offset program ranged from 63 days to 760 days, with seven of the referrals taking place after 250 days. The report also documented other problems with the Debt Management Office’s use of other referral and collection activities. For example, with regard to referral of debts to the List of Contractors Indebted to the United States, the report noted that in a sample of 15 debts, 6 debts were not referred, and 4 of the 9 referred debts were not referred in a timely manner. Our investigation of the 10 cases and discussions with DFAS Columbus officials indicate that the Debt Management Office is not making appropriate referrals to the Defense Criminal Investigative Service and the Department of Justice. 
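The timeliness standard at issue in these referrals is the Debt Collection Improvement Act's 180-day rule noted earlier: eligible nontax debts over 180 days delinquent are to be referred to Treasury. A minimal check of that rule, with hypothetical dates, might look like the following sketch (the function names are our own, chosen for illustration):

```python
# Illustrative check of the 180-day referral requirement under the
# Debt Collection Improvement Act of 1996; dates below are hypothetical.
from datetime import date

def days_delinquent(delinquency_date: date, as_of: date) -> int:
    """Number of days a debt has been delinquent as of a given date."""
    return (as_of - delinquency_date).days

def must_refer_to_treasury(delinquency_date: date, as_of: date) -> bool:
    # Eligible nontax debts over 180 days delinquent are to be referred
    # to Treasury or a Treasury-designated debt collection center.
    return days_delinquent(delinquency_date, as_of) > 180
```

Against this standard, the referral lags of 63 to 760 days from the final demand letter cited in the internal review report mean that several sample debts would already have been well past the point at which referral was required.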
We referred two cases, which involved debts totaling $365,278, to the Defense Criminal Investigative Service and the U.S. Attorney’s office because of possible fraud. In one case, the Debt Management Office, rather than pursue the debt, had requested and received write-off authority for a debt of $211,882 from the Department of Justice. The debt, as calculated by DFAS, was due to a combination of an overpayment, progress payments made but not recovered, and fees incurred on the contract. Because we identified indications of fraud when reviewing the file and discussing the case with DFAS and contracting personnel, we referred the case for further investigation. The file contained a letter from the administrative contracting officer notifying DFAS Columbus that the contractor had been evicted from the address of record, but could be operating another business from a nearby location. The contracting officer prepared the letter in response to DFAS Columbus’ request for information after the first demand letter, sent to the contractor’s address of record, was returned. A second demand letter was sent to the nearby location, but was returned as unclaimed. After we referred the case for further investigation, the U.S. Attorney’s office stated that discussions with contracting personnel and a preliminary review of documents provided sufficient evidence to justify further pursuit of the debt. Significantly reducing the number of contractor overpayments remains an important goal for DFAS. However, in the interim, management commitment and targeted efforts are critical to ensuring that DFAS Columbus collects or resolves delinquent debts by contractors that are unresponsive to the government’s demands. The inventory of such debts as of September 30, 2000, was almost $750 million. As illustrated by our work, increased commitment and effort can increase collections. Additionally, improved policies and procedures and strengthened internal controls are needed. 
These actions would help increase the speed and amount of collections from contractors, decrease administrative burdens and costs, and improve detection of potential fraud by contractors. In order to promote more effective and proactive debt collection, we recommend that the Director of DFAS establish internal controls to ensure that the Debt Management Office validates debts, internal controls to ensure that debt files include all necessary supporting documentation, specific time frames for issuing the third demand letter and procedures to track issuance of these letters against the established time frames, requirements that Debt Management Office personnel engage in direct communications and interactions with contractor officials and others to actively identify and resolve issues related to debts, specific requirements and time frames for referring appropriate debts to the Treasury’s centralized debt collection programs, and procedures for utilizing the Defense Criminal Investigative Service and the Department of Justice when debts could involve criminal activities. In written comments on a draft of this report, DOD concurred with each of the recommendations and commented on the actions that have been or are to be taken. DOD commented that an update of the Debt Management Office’s operating procedures, to be completed by July 6, 2001, will include a checklist of all backup documentation required to be maintained in each debt file, new time frames for issuing demand letters, time frames and guidelines for initiating direct communications with contractors indebted to DOD, and procedures and guidelines for identifying possible criminal activity and timely referral of such debts to the Defense Criminal Investigative Service or the Department of Justice. 
DOD also noted that follow-up reviews would be made to determine that debts are validated as required, checklists for required documentation are utilized, demand letters are issued within required time frames, and referrals to the Treasury’s centralized debt collections programs are timely. Further, DOD commented that a Contractor Debt System, which was installed on May 4, 2001, includes enhanced tools for managing debt cases and permits the Debt Management Office to pursue debts in a more effective and timely manner. DOD’s comments are responsive to our recommendations and, if effectively implemented, should improve debt collection efforts by DFAS. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies of this report to the DOD Under Secretary of Defense (Comptroller), the Director of the Defense Finance and Accounting Service, and interested congressional committees. Copies of this report will also be made available to others upon request. Please contact Gregory D. Kutz at (202) 512-9505 or Robert H. Hast at (202) 512-7455 if you have any questions. Major contributors to this report were John Ryan, Douglas Ferry, Ken Hill, and David Childress. Our work focused on debts and collection activities at the Debt Management Office, DFAS Columbus. To identify the legal requirements, policies, and procedures established for debt collection, we reviewed pertinent public laws, related federal regulations, the DOD Financial Management Regulation, and Debt Management Office procedures. 
To identify activities undertaken by the Debt Management Office, we reviewed debt files, demand letters, and related correspondence; reviewed previous DOD Inspector General and other reports on the Debt Management Office; and discussed debt collection activities with officials at DFAS, the Department of the Treasury’s Financial Management Service, the Department of Justice’s Civil Division, and the Defense Criminal Investigative Service. When warranted, we discussed contracting issues and questions related to the debts with DOD procurement officials. In selecting 10 specific cases for further review of debt validity, collection activities, and potential contractor fraud, we selected debts that were primarily caused by duplicate payments and in which the contractors had not responded to payment demands. At the time of our selection, the debts had been processed by the Accounts Receivable Branch and had been transferred or were being transferred to the Debt Management Office. Also, we selected debts that involved fairly current collection actions as opposed to some debts managed by the Office on which collection actions were not current. Finally, we selected debts of at least $10,000 so that the Department of Justice’s threshold for prosecution of criminal conduct would be met if such referrals were necessary. However, we did not select the debts of the top 100 DOD contractors and their subsidiaries because these debts could often be offset against other DOD payments to the contractors. In reviewing these 10 cases, we (1) examined the case files and the collection activities, (2) determined if the debts were valid, (3) contacted contractors, DOD procurement officials, and others to facilitate collection of valid debts, and (4) referred cases for further review if we identified potential fraudulent activities by contractors.
Improper payments are a long-standing problem throughout the government. The Department of Defense (DOD) has been overpaying contractors by hundreds of millions of dollars each year. For fiscal years 1994 through 1999, DOD contractors returned nearly $1.2 billion that the Defense Finance and Accounting Service (DFAS) had mistakenly paid them as a result of errors, such as paying the same invoice twice or misreading invoice amounts. Sometimes, however, the contractors do not promptly respond to government demands that the overpayments be returned. The Debt Management Office was created at the DFAS Columbus Center to deal with contractors that are unresponsive to the government's demands that overpayments be returned. GAO found that the Debt Management Office at DFAS Columbus is not effectively and proactively pursuing collection of the debts assigned to it. Specifically, the Office is not (1) taking appropriate action to establish the validity of the debts that it receives for collection, (2) promptly issuing letters demanding payment, (3) actively communicating with contractors or resolving issues related to the debts, and (4) effectively using the Department of the Treasury's centralized debt collection programs to maximize collections and the Defense Criminal Investigative Service to pursue potential fraud. The Office's ineffective and insufficient efforts result from both deficiencies in, and a lack of adherence to, its policies and procedures.
Credit card use has grown dramatically since the introduction of cards more than 5 decades ago. Cards were first introduced in 1950, when Diners Club established the first general-purpose charge card that allowed its cardholders to purchase goods and services from many different merchants. In the late 1950s, Bank of America began offering the first widely available general purpose credit card, which, unlike a charge card that requires the balance to be paid in full each month, allows a cardholder to make purchases up to a credit limit and pay the balance off over time. To increase the number of consumers carrying the card and to reach retailers outside of Bank of America's area of operation, other banks were given the opportunity to license Bank of America's credit card. As the network of banks issuing these credit cards expanded internationally, administrative operations were spun off into a separate entity that evolved into the Visa network. In contrast to credit cards, debit cards result in funds being withdrawn almost immediately from consumers' bank accounts (as if they had written a check instead). According to CardWeb.com, Inc., a firm that collects and analyzes data relating to the credit card industry, the number of times per month that credit or debit cards were used for purchases or other transactions exceeded 2.3 billion in May 2003, the last month for which the firm reported these data. The number of credit cards in circulation and the extent to which they are used have also grown dramatically. The range of goods and services that can be purchased with credit cards has expanded, with cards now being used to pay for groceries, health care, and federal and state income taxes. As shown in figure 1, in 2005, consumers held more than 691 million credit cards and the total value of transactions for which these cards were used exceeded $1.8 trillion. 
The largest issuers of credit cards in the United States are commercial banks, including many of the largest banks in the country. More than 6,000 depository institutions issue credit cards, but, over the past decade, the majority of accounts have become increasingly concentrated among a small number of large issuers. Figure 2 shows the largest bank issuers of credit cards by their total credit card balances outstanding as of December 31, 2004 (the most recent data available) and the proportion they represent of the overall total of card balances outstanding. TILA is the primary federal law pertaining to the extension of consumer credit. Congress passed TILA in 1968 to provide for meaningful disclosure of credit terms in order to enable consumers to more easily compare the various credit terms available in the marketplace, to avoid the uninformed use of credit, and to protect themselves against inaccurate and unfair credit billing and credit card practices. The regulation that implements TILA’s requirements is Regulation Z, which is administered by the Federal Reserve. Under Regulation Z, card issuers are required to disclose the terms and conditions to potential and existing cardholders at various times. When first marketing a card directly to prospective cardholders, written or oral applications or solicitations to open credit card accounts must generally disclose key information relevant to the costs of using the card, including the applicable interest rate that will be assessed on any outstanding balances and several key fees or other charges that may apply, such as the fee for making a late payment. In addition, issuers must provide consumers with an initial disclosure statement, which is usually a component of the issuer’s cardmember agreement, before the first transaction is made with a card. 
The cardmember agreement provides more comprehensive information about a card's terms and conditions than would be provided as part of the application or a solicitation letter. In some cases, the laws of individual states also can affect card issuers' operations. For example, although many credit card agreements permit issuers to make unilateral changes to the agreement's terms and conditions, some state laws require that consumers be given the right to opt out of changes. However, as a result of the National Bank Act, and its interpretation by the U.S. Supreme Court, the interest and fees charged by a national bank on credit card accounts are subject only to the laws of the state in which the bank is chartered, even if its lending activities occur outside of its charter state. As a result, the largest banks have located their credit card operations in states with laws seen as more favorable for the issuer with respect to credit card lending. Various federal agencies oversee credit card issuers. The Federal Reserve has responsibility for overseeing issuers that are chartered as state banks and are also members of the Federal Reserve System. Many card issuers are chartered as national banks, which OCC supervises. Other regulators of bank issuers are FDIC, which oversees state-chartered banks with federally insured deposits that are not members of the Federal Reserve System; the Office of Thrift Supervision, which oversees federally chartered and state-chartered savings associations with federally insured deposits; and the National Credit Union Administration, which oversees federally chartered and state-chartered credit unions whose member accounts are federally insured. As part of their oversight, these regulators review card issuers' compliance with TILA and ensure that an institution's credit card operations do not pose a threat to the institutions' safety and soundness. 
The Federal Trade Commission generally has responsibility for enforcing TILA and other consumer protection laws for credit card issuers that are not depository institutions. Prior to about 1990, card issuers offered credit cards that featured an annual fee, a relatively high, fixed interest rate, and low penalty fees, compared with average rates and fees assessed in 2005. Over the past 15 years, typical credit cards offered by the largest U.S. issuers evolved to feature more complex pricing structures, including multiple interest rates that vary with market fluctuations. The largest issuers also increased the number, and in some cases substantially increased the amounts, of fees assessed on cardholders for violations of the terms of their credit agreement, such as making a late payment. Issuers said that these changes have benefited a greater number of cardholders, whereas critics contended that some practices unfairly increased cardholder costs. The largest six issuers provided data indicating that most of their cardholders had interest rates on their cards that were lower than the single fixed rates that prevailed on cards prior to the 1990s and that a small proportion of cardholders paid high penalty interest rates in 2005. In addition, although most cardholders did not appear to be paying penalty fees, about one-third of the accounts with these largest issuers paid at least one late fee in 2005. The interest rates, fees, and other practices that represent the pricing structure for credit cards have become more complex since the early 1990s. For several decades after their introduction in the 1950s, credit cards commonly charged a single fixed interest rate of around 20 percent (the annual percentage rate, or APR), which covered most of an issuer's expenses associated with card use. 
Issuers also charged cardholders an annual fee, which was typically between $20 and $50 beginning in about 1980, according to a senior economist at the Federal Reserve Board. Card issuers generally offered these credit cards only to the most creditworthy U.S. consumers. According to a study of credit card pricing done by a member of the staff of one of the Federal Reserve Banks, few issuers in the late 1980s and early 1990s charged cardholders fees as penalties if they made late payments or exceeded the credit limit set by the issuer. Furthermore, these fees, when they were assessed, were relatively small. For example, the Federal Reserve Bank staff member’s paper notes that the typical late fee charged on cards in the 1980s ranged from $5 to $10. After generally charging just a single fixed interest rate before 1990, the largest issuers now apply multiple interest rates to a single card account balance and the level of these rates can vary depending on the type of transaction in which a cardholder engages. To identify recent pricing trends for credit cards, we analyzed the disclosures made to prospective and existing cardholders for 28 popular credit cards offered during 2003, 2004, and 2005 by the six largest issuers (based on credit card balances outstanding at the end of 2004). At that time, these issuers held almost 80 percent of consumer debt owed to credit card issuers and as much as 61 percent of total U.S. credit card accounts. As a result, our analysis of these 28 cards likely describes the card pricing structure and terms that apply to the majority of U.S. cardholders. However, our sample of cards did not include subprime cards, which typically have higher cost structures to compensate for the higher risks posed by subprime borrowers. We found that all but one of these popular cards assessed up to three different interest rates on a cardholder’s balance. 
For example, cards assessed separate rates on balances that resulted from the purchase or lease of goods and services, such as food, clothing, and home appliances; balances that were transferred from another credit card, which cardholders may do to consolidate balances across cards to take advantage of lower interest rates; and balances that resulted from using the card to obtain cash, such as a withdrawal from a bank automated teller machine. In addition to having separate rates for different transactions, popular credit cards increasingly have interest rates that vary periodically as market interest rates change. Almost all of the cards we analyzed charged variable rates, with the number of cards assessing these rates having increased over the most recent 3-year period. More specifically, about 84 percent of cards we reviewed (16 of 19 cards) assessed a variable interest rate in 2003, 91 percent (21 of 23 cards) in 2004, and 93 percent (25 of 27 cards) in 2005. Issuers typically determine these variable rates by taking the prevailing level of a base rate, such as the prime rate, and adding a fixed percentage amount. In addition, the issuers usually reset the interest rates on a monthly basis. Issuers appear to have assessed lower interest rates in recent years than they did prior to about 1990. Issuer representatives noted that issuers generally used to offer cards with a single rate of around 20 percent to their cardholders, and the average credit card rates reported by the Federal Reserve were generally around 18 percent between 1972 and 1990. According to the survey of credit card plans, conducted every 6 months by the Federal Reserve, more than 100 card issuers reported charging interest rates between 12 and 15 percent on average from 2001 to 2005. 
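The variable-rate mechanics described above amount to a base rate plus a fixed margin, recomputed periodically. The sketch below illustrates this; the base rate and margin values are hypothetical, not figures from this report.

```python
# Sketch of how issuers typically set variable card rates:
# APR = base rate (e.g., the prime rate) + a fixed margin,
# usually reset monthly. All numeric inputs are hypothetical.

def variable_apr(base_rate: float, margin: float) -> float:
    """Card APR as the prevailing base rate plus a fixed margin."""
    return base_rate + margin

def monthly_periodic_rate(apr: float) -> float:
    """Simple monthly periodic rate derived from the APR."""
    return apr / 12

apr = variable_apr(base_rate=6.25, margin=7.74)  # hypothetical values
print(f"APR: {apr:.2f}%  monthly periodic rate: {monthly_periodic_rate(apr):.4f}%")
```

Because the margin is fixed, a monthly reset simply tracks the base rate: if the prime rate rises a quarter point, the card's APR rises by the same quarter point.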
For the 28 popular cards we reviewed, the average interest rate that would be assessed for purchases was 12.3 percent in 2005, almost 6 percentage points lower than the average rates that prevailed until about 1990. We found that the range of rates charged on these cards was between about 8 and 19 percent in 2005. The average rate on these cards climbed slightly during this period, having averaged about 11.5 percent in 2003 and about 12 percent in 2004, largely reflecting the general upward movement in prime rates. Figure 3 shows the general decline in credit card interest rates, as reported by the Federal Reserve, between about 1991 and 2005 compared with the prime rate over this time. As these data show, credit card interest rates generally were stable regardless of the level of market interest rates until around 1996, at which time changes in credit card rates approximated changes in market interest rates. In addition, the spread between the prime rate and credit card rates was generally wider in the period before the 1990s than it has been since 1990, which indicates that cardholders have since been paying lower rates relative to other market rates. Recently, many issuers have attempted to obtain new customers by offering low, even zero, introductory interest rates for limited periods. According to an issuer representative and industry analyst we interviewed, low introductory interest rates have been necessary to attract cardholders in the current competitive environment where most consumers who qualify for a credit card already have at least one. Of the 28 popular cards that we analyzed, 7 cards (37 percent) offered prospective cardholders a low introductory rate in 2003, but 20 (74 percent) did so in 2005—with most rates set at zero for about 8 months. According to an analyst who studies the credit card industry for large investors, approximately 25 percent of all purchases are made with cards offering a zero percent interest rate. 
Increased competition among issuers, which can be attributed to several factors, likely caused the reductions in credit card interest rates. In the early 1990s, new banks whose operations were solely focused on credit cards entered the market, according to issuer representatives. Issuer representatives told us that these institutions, known as monoline banks, competed for cardholders by offering lower interest rates and rewards, and expanded the availability of credit to a much larger segment of the population. Also, in 1988, new requirements were implemented for credit card disclosures that were intended to help consumers better compare pricing information on credit cards. These new requirements mandated that card issuers use a tabular format to provide information to consumers about interest rates and some fees on solicitations and applications mailed to consumers. According to issuers, consumer groups, and others, this format, which is popularly known as the Schumer box, has helped to significantly increase consumer awareness of credit card costs. According to a study authored by a staff member of a Federal Reserve Bank, consumer awareness of credit card interest rates has prompted more cardholders to transfer card balances from one issuer to another, further increasing competition among issuers. However, another study prepared by the Federal Reserve Board also attributes declines in credit card interest rates to a sharp drop in issuers' cost of funds, which is the price issuers pay other lenders to obtain the funds that are then lent to cardholders. (We discuss issuers' cost of funds later in this report.) Our analysis of disclosures also found that the rates applicable to balance transfers were generally the same as those assessed for purchases, but the rates for cash advances were often higher. 
Of the popular cards offered by the largest issuers, nearly all featured rates for balance transfers that were substantially similar to their purchase rates, with many also offering low introductory rates on balance transfers for about 8 months. However, the rates these cards assessed for obtaining a cash advance were around 20 percent on average. Similarly to rates for purchases, the rates for cash advances on most cards were also variable rates that would change periodically with market interest rates. Although featuring lower interest rates than in earlier decades, typical cards today now include higher and more complex fees than they did in the past for making late payments, exceeding credit limits, and processing returned payments. One penalty fee, commonly included as part of credit card terms, is the late fee, which issuers assess when they do not receive at least the minimum required payment by the due date indicated in a cardholder’s monthly billing statement. As noted earlier, prior to 1990, the level of late fees on cards generally ranged from $5 to $10. However, late fees have risen significantly. According to data reported by CardWeb.com, Inc., credit card late fees rose from an average of $12.83 in 1995 to $33.64 in 2005, an increase of over 160 percent. Adjusted for inflation, these fees increased about 115 percent on average, from $15.61 in 1995 to $33.64 in 2005. Similarly, Consumer Action, a consumer interest group that conducts an annual survey of credit card costs, found late fees rose from an average of $12.53 in 1995 to $27.46 in 2005, a 119 percent increase (or 80 percent after adjusting for inflation). Figure 4 shows trends in average late fee assessments reported by these two groups. In addition to increased fees a cardholder may be charged per occurrence, many cards created tiered pricing that depends on the balance held by the cardholder. 
Between 2003 and 2005, all but 4 of the 28 popular cards that we analyzed used a tiered fee structure. Generally, these cards included three tiers, with the following range of fees for each tier: $15 to $19 on accounts with balances of $100 or $250 or less; $25 to $29 on accounts with balances up to about $1,000; and $34 to $39 on accounts with balances of about $1,000 or more. Tiered pricing can prevent issuers from assessing high fees to cardholders with comparatively small balances. However, data from the Federal Reserve's Survey of Consumer Finances, which is conducted every 3 years, show that the median total household outstanding balance on U.S. credit cards was about $2,200 in 2004 among those that carried balances. When we calculated the late fees that would be assessed on holders of the 28 cards if they had the entire median balance on one card, the average late fee increased from $34 in 2003 to $37 in 2005, with 18 of the cards assessing the highest fee of $39 in 2005. Issuers also assess cardholders a penalty fee, the over-limit fee, when a cardholder exceeds the credit limit set by the issuer. Similar to late fees, over-limit fees also have been rising and increasingly involve a tiered structure. According to data reported by CardWeb.com, Inc., the average over-limit fees that issuers assessed increased 138 percent from $12.95 in 1995 to $30.81 in 2005. Adjusted for inflation, average over-limit fees reported by CardWeb.com increased from $15.77 in 1995 to $30.81 in 2005, representing about a 95 percent increase. Similarly, Consumer Action found a 114 percent increase in this period (or 76 percent, after adjusting for inflation). Figure 5 illustrates the trend in average over-limit fees over the past 10 years from these two surveys. 
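The tiered late-fee structure described above is effectively a balance lookup. The sketch below uses the upper fee of each tier and assumed tier boundaries of $250 and $1,000; these are illustrative values drawn from the ranges in the text, not any single issuer's actual schedule.

```python
# Illustrative tiered late-fee lookup. Tier boundaries ($250 and
# $1,000) and fee amounts are assumptions based on the ranges
# described in the text, not an actual issuer's fee schedule.

def late_fee(balance: float) -> int:
    if balance <= 250:        # lowest tier
        return 19
    elif balance <= 1000:     # middle tier
        return 29
    else:                     # highest tier
        return 39

# A cardholder carrying the 2004 median household card balance of
# about $2,200 falls in the top tier regardless of which of the
# described boundaries applies:
print(late_fee(2200))  # 39
```

This mirrors the finding above that a cardholder carrying the entire median balance on one card would typically face the highest tier's fee.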
The cards we analyzed also increasingly featured tiered structures for over- limit fees, with 29 percent (5 of 17 cards) having such structures in 2003, and 53 percent (10 of 19 cards) in 2005. Most cards that featured tiered over-limit fees assessed the highest fee on accounts with balances greater than $1,000. But not all over-limit tiers were based on the amount of the cardholder’s outstanding balance. Some cards based the amount of the over-limit fee on other indicators, such as the amount of the cardholder’s credit limit or card type. For the six largest issuers’ popular cards with over-limit fees, the average fee that would be assessed on accounts that carried the median U.S. household credit card balance of $2,200 rose from $32 in 2003 to $34 in 2005. Among cards that assessed over-limit fees in 2005, most charged an amount between $35 and $39. Not all of the 28 popular large-issuer cards included over-limit fees and the prevalence of such fees may be declining. In 2003, 85 percent, or 17 of 20 cards, had such fees, but only 73 percent, or 19 of 26 cards, did in 2005. According to issuer representatives, they are increasingly emphasizing competitive strategies that seek to increase the amount of spending that their existing cardholders do on their cards as a way to generate revenue. This could explain a movement away from assessing over-limit fees, which likely discourage cardholders who are near their credit limit from spending. Cards also varied in when an over-limit fee would be assessed. For example, our analysis of the 28 popular large-issuer cards showed that, of the 22 cards that assessed over-limit fees, about two-thirds (14 of 22) would assess an over-limit fee if the cardholder’s balance exceeded the credit limit within a billing cycle, whereas the other cards (8 of 22) would assess the fee only if a cardholder’s balance exceeded the limit at the end of the billing cycle. 
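The fee-growth percentages cited above for late and over-limit fees follow directly from the reported CardWeb.com, Inc. averages; a quick arithmetic check:

```python
# Checking the fee-growth figures cited above against the reported
# CardWeb.com, Inc. averages (1995 vs. 2005).

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

late_nominal = pct_increase(12.83, 33.64)        # "over 160 percent"
late_real = pct_increase(15.61, 33.64)           # "about 115 percent," inflation-adjusted
over_limit_nominal = pct_increase(12.95, 30.81)  # "138 percent"
over_limit_real = pct_increase(15.77, 30.81)     # "about a 95 percent increase"

for label, value in [("late, nominal", late_nominal),
                     ("late, inflation-adjusted", late_real),
                     ("over-limit, nominal", over_limit_nominal),
                     ("over-limit, inflation-adjusted", over_limit_real)]:
    print(f"{label}: {value:.0f}%")
```

Each computed value is consistent with the corresponding percentage reported in the text.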
In addition, within the overall limit, some of the cards had separate credit limits on the card for how much a cardholder could obtain in cash or transfer from other cards or creditors, before similarly triggering an over- limit fee. Finally, issuers typically assess fees on cardholders for submitting a payment that is not honored by the issuer or the cardholder’s paying bank. Returned payments can occur when cardholders submit a personal check that is written for an amount greater than the amount in their checking account or submit payments that cannot be processed. In our analysis of 28 popular cards offered by the six largest issuers, we found the average fee charged for such returned payments remained steady between 2003 and 2005 at about $30. Since 1990, issuers have appended more fees to credit cards. In addition to penalties for the cardholder actions discussed above, the 28 popular cards now often include fees for other types of transactions or for providing various services to cardholders. As shown in table 1, issuers assess fees for such services as providing cash advances or for making a payment by telephone. According to our analysis, not all of these fees were disclosed in the materials that issuers generally provide to prospective or existing cardholders. Instead, card issuers told us that they notified their customers of these fees by other means, such as telephone conversations. While issuers generally have been including more kinds of fees on credit cards, one category has decreased: most cards offered by the largest issuers do not require cardholders to pay an annual fee. An annual fee is a fixed fee that issuers charge cardholders each year they continue to own that card. Almost 75 percent of cards we reviewed charged no annual fee in 2005 (among those that did, the range was from $30 to $90). Also, an industry group representative told us that approximately 2 percent of cards featured annual fee requirements. 
Some types of cards we reviewed were more likely to apply an annual fee than others. For example, cards that offered airline tickets in exchange for points that accrue to a cardholder for using the card were likely to apply an annual fee. However, among the 28 popular cards that we reviewed, not all of the cards that offered rewards charged annual fees. Recently, some issuers have introduced cards without certain penalty fees. For example, one of the top six issuers has introduced a card that does not charge a late fee, over-limit fee, cash-advance fee, returned payment fee, or an annual fee. Another top-six issuer’s card does not charge the cardholder a late fee as long as one purchase is made during the billing cycle. However, the issuer of this card may impose higher interest rates, including above 30 percent, if the cardholder pays late or otherwise defaults on the terms of the card. Popular credit cards offered by the six largest issuers involve various issuer practices that can significantly affect the costs of using a credit card for a cardholder. These included practices such as raising a card’s interest rates in response to cardholder behaviors and how payments are allocated across balances. One of the practices that can significantly increase the costs of using typical credit cards is penalty pricing. Under this practice, the interest rate applied to the balances on a card automatically can be increased in response to behavior of the cardholder that appears to indicate that the cardholder presents greater risk of loss to the issuer. For example, representatives for one large issuer told us they automatically increase a cardholder’s interest rate if a cardholder makes a late payment or exceeds the credit limit. Card disclosure documents now typically include information about default rates, which represent the maximum penalty rate that issuers can assess in response to cardholders’ violations of the terms of the card. 
According to an industry specialist at the Federal Reserve, issuers first began the practice of assessing default interest rates as a penalty for term violations in the late 1990s. As of 2005, all but one of the cards we reviewed included default rates. The default rates were generally much higher than rates that otherwise applied to purchases, cash advances, or balance transfers. For example, the average default rate across the 28 cards was 27.3 percent in 2005—up from the average of 23.8 percent in 2003—with as many as 7 cards charging rates over 30 percent. Like many of the other rates assessed on these cards in 2005, default rates generally were variable rates. Increases in average default rates between 2003 and 2005 resulted from increases both in the prime rate, which rose about 2 percentage points during this time, and the average fixed amount that issuers added. On average, the fixed amount that issuers added to the index rate in setting default rate levels increased from about 19 percent in 2003 to 22 percent in 2005. Four of the six largest issuers typically included conditions in their disclosure documents that could allow the cardholder's interest rate to be reduced from a higher penalty rate. For example, some issuers would lower a cardholder's rate for not paying late and otherwise abiding by the terms of the card for a period of 6 or 12 consecutive months after the default rate was imposed. However, at least one issuer indicated that higher penalty rates would be charged on existing balances even after 6 months of good behavior. This issuer assessed lower nonpenalty rates only on new purchases or other new balances, while continuing to assess higher penalty rates on the balance that existed when the cardholder was initially assessed a higher penalty rate. This practice may significantly increase costs to cardholders even after they have met the terms of their card agreement for at least 6 months. 
The specific conditions under which the largest issuers could raise a cardholder's rate to the default level on the popular cards that we analyzed varied. The disclosures for 26 of the 27 cards that included default rates in 2005 stated that default rates could be assessed if the cardholders made late payments. However, some cards would apply such default rates only after multiple violations of card terms. For example, issuers of 9 of the cards automatically would increase a cardholder's rates in response to two late payments. Additionally, for 18 of the 28 cards, default rates could apply for exceeding the credit limit on the card, and 10 cards could also impose such rates for returned payments. Disclosure documents for 26 of the 27 cards that included default rates also indicated that in response to these violations of terms, the interest rate applicable to purchases could be increased to the default rate. In addition, such violations would also cause issuers to increase the rates applicable to cash advances on 16 of the cards, as well as increase rates applicable to balance transfers on 24 of the cards. According to a paper by a Federal Reserve Bank researcher, some issuers began to increase cardholders' interest rates in the early 2000s for actions they took with other creditors. According to this paper, these issuers would increase rates when cardholders failed to make timely payments to other creditors, such as other credit card issuers, utility companies, and mortgage lenders. These practices, which became generally known as "universal default," were criticized by consumer groups. In 2004, OCC issued guidance addressing such practices to the banks that it oversees, which include many of the largest card issuers. While OCC noted that the repricing might be an appropriate way for banks to manage their credit risk, it also noted that such practices could heighten a bank's compliance and reputation risks. 
As a result, OCC urged national banks to fully and prominently disclose in promotional materials the circumstances under which a cardholder's interest rates, fees, or other terms could be changed and whether the bank reserved the right to change these unilaterally. Around the time of this guidance, issuers generally ceased automatically repricing cardholders to default interest rates for risky behavior exhibited with other creditors. Of the 28 popular large-issuer cards that we reviewed, three cards in 2005 included terms that would allow the issuer to automatically raise a cardholder's rate to the default rate if the cardholder made a late payment to another creditor. Although the six largest U.S. issuers appear to have generally ceased making automatic increases to a default rate for behavior with other creditors, some continue to employ practices that allow them to seek to raise a cardholder's interest rates in response to behaviors with other creditors. During our review, representatives of four of these issuers told us that they may seek to impose higher rates on a cardholder in response to behaviors related to other creditors but that such increases would be done as a change-in-terms, which can require prior notification, rather than automatically. Regulation Z requires that the affected cardholders be notified in writing of any such proposed changes in rate terms at least 15 days before such change becomes effective. In addition, under the laws of the states in which four of the six largest issuers are chartered, cardholders would have to be given the right to opt out of the change. However, issuer representatives told us that few cardholders exercise this right. The ability of cardholders to opt out of such increases also has been questioned. For example, one legal essay noted that some cardholders may not be able to reject the changed terms of their cards if the result would be a requirement to pay off the balance immediately. 
In addition, an association for community banks that provided comments to the Federal Reserve as part of the ongoing review of card disclosures noted that 15 days does not provide consumers sufficient time to make other credit arrangements if the new terms were undesirable. The way that issuers allocate payments across balances also can increase the costs of using the popular cards we reviewed. In this new credit environment where different balances on a single account may be assessed different interest rates, issuers have developed practices for allocating the payments cardholders make to pay down their balance. For 23 of the 28 popular large-issuer cards that we reviewed, cardholder payments would be allocated first to the balance that is assessed the lowest rate of interest. As a result, the low interest balance would have to be fully paid before any of the cardholder's payment would pay down balances assessed higher rates of interest. This practice can prolong the length of time that issuers collect finance charges on the balances assessed higher rates of interest. Additionally, some of the cards we reviewed use a balance computation method that can increase cardholder costs. On some cards, issuers have used a double-cycle billing method, which eliminates the interest-free period of a consumer who moves from nonrevolving to revolving status, according to Federal Reserve staff. In other words, in cases where a cardholder, with no previous balance, fails to pay the entire balance of new purchases by the payment due date, issuers compute interest on the original balance that previously had been subject to an interest-free period. This method is illustrated in figure 6. In our review of 28 popular cards from the six largest issuers, we found that two of the six issuers used the double-cycle billing method on one or more popular cards between 2003 and 2005. The other four issuers indicated they would only go back one cycle to impose finance charges. 
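The lowest-rate-first allocation practice described above can be sketched in a few lines; the balances, APRs, and payment amount below are hypothetical, chosen only to show why the practice prolongs finance charges on higher-rate balances.

```python
# Sketch of lowest-rate-first payment allocation, as described
# above. Balances, APRs, and the payment amount are hypothetical.

def allocate_payment(balances: dict, payment: float) -> dict:
    """Apply a payment to balances in ascending order of APR,
    so higher-rate balances are paid down last."""
    remaining = payment
    result = dict(balances)
    for apr in sorted(result):            # lowest APR first
        applied = min(remaining, result[apr])
        result[apr] -= applied
        remaining -= applied
        if remaining <= 0:
            break
    return result

# Hypothetical account: a $1,000 promotional balance at 0% APR and
# a $500 cash-advance balance at 20% APR; the cardholder pays $600.
after = allocate_payment({0.0: 1000.0, 20.0: 500.0}, 600.0)
print(after)  # {0.0: 400.0, 20.0: 500.0}
```

Because the $600 payment is absorbed entirely by the 0 percent promotional balance, the 20 percent cash-advance balance is untouched and continues to accrue finance charges in full.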
Representatives of issuers, consumer groups, and others we interviewed generally disagreed over whether the evolution of credit card pricing and other practices has been beneficial to consumers. However, data provided by the six largest issuers show that many of their active accounts did not pay finance charges and that a minority of their cardholders were affected by penalty charges in 2005. The movement towards risk-based pricing for cards has allowed issuers to offer better terms to some cardholders and more credit cards to others. Spurred by increased competition, many issuers have adopted risk-based pricing structures in which they assess different rates on cards depending on the credit quality of the borrower. Under this pricing structure, issuers have offered cards with lower rates to more creditworthy borrowers, but also have offered credit to consumers who previously would not have been considered sufficiently creditworthy. For example, about 70 percent of families held a credit card in 1989, but almost 75 percent held a card by 2004, according to the Federal Reserve Board’s Survey of Consumer Finances. Cards for these less creditworthy consumers have featured higher rates to reflect the higher repayment risk that such consumers represented. For example, the initial purchase rates on the 28 popular cards offered by the six largest issuers ranged from about 8 percent to 19 percent in 2005. According to card issuers, credit cards offer many more benefits to users than they did in the past. For example, according to the six largest issuers, credit cards are an increasingly convenient and secure form of payment. These issuers told us credit cards are accepted at more than 23 million merchants worldwide, can be used to make purchases or obtain cash, and are the predominant form of payment for purchases made on the Internet. 
They also told us that rewards, such as cash-back and airline travel, as well as other benefits, such as rental car insurance or lost luggage protection, also have become standard. Issuers additionally noted that credit cards are reducing the need for cash. Finally, they noted that cardholders typically are not responsible for loss, theft, fraud, or misuse of their credit cards by unauthorized users, and issuers often assist cardholders that are victims of identity theft. In contrast, according to some consumer groups and others, the newer pricing structures have resulted in many negative outcomes for some consumers. Some consumer advocates noted adverse consequences of offering credit, especially at higher interest rates, to less creditworthy consumers. For example, lower-income or young consumers, who do not have the financial means to carry credit card debt, could worsen their financial condition. In addition, consumer groups and academics said that various penalty fees could significantly increase the costs of using cards for some consumers. Some also argued that card issuers were overly aggressive in their assessment of penalty fees. For instance, a representative of a consumer group noted that issuers do not reject cardholders’ purchases during the sale authorization, even if the transaction would put the cardholder over the card’s credit limit, and yet will likely later assess that cardholder an over-limit fee and also may impose a higher interest rate. Furthermore, staff for one banking regulator told us that they have received complaints from consumers who were assessed over-limit fees that resulted from the balance on their accounts going over their credit limit because their card issuer assessed them a late fee. At the same time, credit card issuers have incentives not to be overly aggressive with their assessment of penalty charges.
For example, Federal Reserve representatives told us that major card issuers with long-term franchise value are concerned that their banks not be perceived as engaging in predatory lending because this could pose a serious risk to their brand reputation. As a result, they explained that issuers may be wary of charging fees that could be considered excessive or imposing interest rates that might be viewed as potentially abusive. In contrast, these officials noted that some issuers, such as those that focus on lending to consumers with lower credit quality, may be less concerned about their firm’s reputation and, therefore, more likely to charge higher fees. Controversy also surrounds whether higher fees and other charges were commensurate with the risks that issuers faced. Consumer groups and others questioned whether the penalty interest rates and fees were justifiable. For example, one consumer group questioned whether submitting a credit card payment one day late made a cardholder so risky that it justified doubling or tripling the interest rate assessed on that account. Also, as the result of concerns over the level of penalty fees being assessed by banks in the United Kingdom, a regulator there has recently announced that penalty fees greater than 12 pounds (about $23) may be challenged as unfair unless they can be justified by exceptional factors. Representatives of several of the issuers with whom we spoke told us that the levels of the penalty fees they assess generally were set by considering various factors. For example, they noted that higher fees help to offset the increased risk of loss posed by cardholders who pay late or engage in other negative behaviors. Additionally, they cited a 2006 study that compared the penalty fees credit card banks charged with bankruptcy rates in the states in which their cards were marketed and found that late fee assessments were correlated with bankruptcy rates.
Some also noted that increased fee levels reflected increased operating costs; for example, not receiving payments when due can cause the issuer to incur increased costs, such as those incurred by having to call cardholders to request payment. Representatives for four of the largest issuers also told us that their fee levels were influenced by what others in the marketplace were charging. Concerns also have been expressed about whether consumers adequately consider the potential effect of penalty interest rates and fees when they use their cards. For example, one academic researcher, who has written several papers about the credit card industry, told us that many consumers do not consider the effect of the costs that can accrue to them after they begin using a credit card. According to this researcher, many consumers focus primarily on the amount of the interest rate for purchases when deciding to obtain a new credit card and give less consideration to the level of penalty charges and rates that could apply if they were to miss a payment or violate some other term of their card agreement. An analyst who studies the credit card industry for large investors said that consumers can obtain low introductory rates but can lose them very easily before the introductory period expires. As noted previously, the average credit card interest rate assessed for purchases has declined from the almost 20 percent that prevailed until the late 1980s to around 12 percent as of 2005. In addition, the six largest issuers—whose accounts represent 61 percent of all U.S. accounts—reported to us that the majority of their cardholders in 2005 had cards with interest rates lower than the rate that generally applied to all cardholders prior to about 1990. According to these issuers, about 80 percent of active accounts were assessed interest rates below 20 percent as of December 31, 2005, with more than 40 percent having rates below 15 percent.
However, the proportion of active accounts assessed rates below 15 percent declined since 2003, when 71 percent received such rates. According to issuer representatives, a greater number of active accounts were assessed higher interest rates in 2004 and 2005 primarily because of changes in the prime rate to which many cards’ variable rates are indexed. Nevertheless, cardholders today have much greater access to cards with lower interest rates than existed when all cards charged a single fixed rate. A large number of cardholders appear to avoid paying any significant interest charges. Many cardholders do not revolve a balance from month to month, but instead pay off the balance owed in full at the end of each month. Such cardholders are often referred to as convenience users. According to one estimate, about 42 percent of cardholders are convenience users. As a result, many of these cardholders availed themselves of the benefits of their cards without incurring any direct expenses. Similarly, the six largest issuers reported to us that almost half, or 48 percent, of their active accounts did not pay a finance charge in at least 10 months in 2005, similar to the 47 percent that did so in 2003 and 2004. Penalty interest rates and fees appear to affect a minority of the largest six issuers’ cardholders. No comprehensive sources existed to show the extent to which U.S. cardholders were paying penalty interest rates, but, according to data provided by the six largest issuers, a small proportion of their active accounts were being assessed interest rates above 25 percent—which we determined were likely to represent penalty rates. However, this proportion more than doubled over a two-year period, increasing from 5 percent at the end of 2003 to 10 percent in 2004 and 11 percent in 2005. Although still a minority, cardholders paying at least one type of penalty fee represented a significant proportion of all cardholders.
According to the six largest issuers, 35 percent of their active accounts had been assessed at least one late fee in 2005. These issuers reported that their late fee assessments averaged $30.92 per active account. Additionally, these issuers reported that they assessed over-limit fees on 13 percent of active accounts in 2005, with an average over-limit fee of $9.49 per active account. The disclosures that issuers representing the majority of credit card accounts use to provide information about the costs and terms of using credit cards had serious weaknesses that likely reduce their usefulness to consumers. These disclosures are the primary means under federal law for protecting consumers against inaccurate and unfair credit card practices. The disclosures we analyzed had weaknesses, such as presenting information written at a level too difficult for the average consumer to understand, and design features, such as text placement and font sizes, that did not conform to guidance for creating easily readable documents. When attempting to use these disclosures, cardholders were often unable to identify key rates or terms and often failed to understand the information in these documents. Several factors help explain these weaknesses, including outdated regulations and guidance. With the intention of improving the information that consumers receive, the Federal Reserve has initiated a comprehensive review of the regulations that govern credit card disclosures. Various suggestions have been made to improve disclosures, including testing them with consumers. While Federal Reserve staff have begun to involve consumers in their efforts, they are still attempting to determine the best form and content of any revised disclosures. Without clear, understandable information, consumers risk making poor choices about using credit cards, which could unnecessarily result in higher costs to use them. 
Adequately informed consumers who spur competition among issuers are the primary means by which credit card pricing is regulated in the United States. Under federal law, a national bank may charge interest on any loan at a rate permitted by the law of the state in which the bank is located. In 1978, the U.S. Supreme Court ruled that a national bank is “located” in the state in which it is chartered, and, therefore, the interest rates charged by a national bank are subject only to the laws of the state in which it is chartered, even if its lending activities occur elsewhere. As a result, the largest credit card issuing banks are chartered in states that either lacked interest rate caps or had very high caps, from which they would offer credit cards to customers in other states. This ability to “export” their chartered states’ interest rates effectively removed any caps applicable to interest rates on the cards from these banks. In 1996, the U.S. Supreme Court determined that fees charged on credit extended by national banks are a form of interest, allowing issuers to also export the level of fees allowable in their state of charter to their customers nationwide, which effectively removed any caps on the level of fees that these banks could charge. In the absence of federal regulatory limitations on the rates and fees that card issuers can assess, the primary means that U.S. banking regulators have for influencing the level of such charges is by facilitating competition among issuers, which, in turn, is highly dependent on informed consumers. The Truth in Lending Act of 1968 (TILA) mandates certain disclosures aimed at informing consumers about the cost of credit. In passing TILA, Congress intended that the required disclosures would foster price competition among card issuers by enabling consumers to discern differences among cards while shopping for credit.
TILA also states that its purpose is to assure that the consumer will be able to compare more readily the various credit terms available to him or her and avoid the uninformed use of credit. As authorized under TILA, the Federal Reserve has promulgated Regulation Z to carry out the purposes of TILA. The Federal Reserve, along with the other federal banking agencies, enforces compliance with Regulation Z with respect to the depository institutions under their respective supervision. In general, TILA and the accompanying provisions of Regulation Z require credit card issuers to inform potential and existing customers about specific pricing terms at specific times. For example, card issuers are required to make various disclosures when soliciting potential customers, as well as on the actual applications for credit. On or with card applications and solicitations, issuers generally are required to present pricing terms, including the interest rates and various fees that apply to a card, as well as information about how finance charges are calculated, among other things. Issuers also are required to provide cardholders with specified disclosures prior to the cardholder’s first transaction, periodically in billing statements, upon changes to terms and conditions pertaining to the account, and upon account renewal. For example, in periodic statements, which issuers typically provide monthly to active cardholders, issuers are required to provide detailed information about the transactions on the account during the billing cycle, including purchases and payments, and are to disclose the amount of finance charges that accrued on the cardholder’s outstanding balance and detail the type and amount of fees assessed on the account, among other things. In addition to the required timing and content of disclosures, issuers also must adhere to various formatting requirements. 
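As background on how the finance charges disclosed on periodic statements are typically computed, the common average-daily-balance method can be sketched as follows. This is a simplified illustration with hypothetical figures; actual issuer methods vary (for example, in day-count conventions or daily compounding) and are specified in each cardmember agreement:

```python
def periodic_finance_charge(daily_balances, apr):
    """Finance charge for one billing cycle under a simple
    average-daily-balance method: the average of each day's ending
    balance, multiplied by the daily periodic rate, times cycle length.
    """
    average_daily_balance = sum(daily_balances) / len(daily_balances)
    daily_periodic_rate = apr / 365
    return average_daily_balance * daily_periodic_rate * len(daily_balances)

# Hypothetical 30-day cycle: a $1,000 balance carried every day at 18% APR.
charge = periodic_finance_charge([1000.0] * 30, 0.18)
# 1000 * (0.18 / 365) * 30, roughly $14.79 for the cycle.
```

A convenience user who pays the balance in full within the grace period would have this charge waived, which is why many active accounts pay no finance charge at all.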
For example, since 1989, certain pricing terms must be disclosed in direct mail, telephone, and other applications and solicitations and presented in a tabular format on mailed applications or solicitations. This table, generally referred to as the Schumer box, must contain information about the interest rates and fees that could be assessed to the cardholder, as well as information about how finance charges are calculated, among other things. According to a Federal Reserve representative, the Schumer box is designed to be easy for consumers to read and use for comparing credit cards. According to a consumer group representative, an effective regulatory disclosure is one that stimulates competition among issuers; the introduction of the Schumer box in the late 1980s preceded the increased price competition in the credit card market in the early 1990s and the movement away from uniform credit card products. Not all fees that are charged by card issuers must be disclosed in the Schumer box. Regulation Z does not require that issuers disclose fees unrelated to the opening of an account. For example, according to the Official Staff Interpretations of Regulation Z (staff interpretations), nonperiodic fees, such as fees charged for reproducing billing statements or reissuing a lost or stolen card, are not required to be disclosed. Staff interpretations, which are compiled and published in a supplement to Regulation Z, are a means of guiding issuers on the requirements of Regulation Z. Staff interpretations also explain that various fees are not required in initial disclosure statements, such as a fee to expedite the delivery of a credit card or, under certain circumstances, a fee for arranging a single payment by telephone. However, issuers we surveyed told us they inform cardholders about these other fees at the time the cardholders request the service, rather than in a disclosure document. 
Although Congress authorized solely the Federal Reserve to adopt regulations to implement the purposes of TILA, other federal banking regulators, under their authority to ensure the safety and soundness of depository institutions, have undertaken initiatives to improve the credit card disclosures made by the institutions under their supervision. For example, the regulator of national banks, OCC, issued an advisory letter in 2004 alerting banks of its concerns regarding certain credit card marketing and account management practices that may expose a bank to compliance and reputation risks. One such practice involved the marketing of promotional interest rates and conditions under which issuers reprice accounts to higher interest rates. In its advisory letter, OCC recommended that issuers disclose any limits on the applicability of promotional interest rates, such as the duration of the rates and the circumstances that could shorten the promotional rate period or cause rates to increase. Additionally, OCC advised issuers to disclose the circumstances under which they could increase a consumer’s interest rate or fees, such as for failure to make timely payments to another creditor. The disclosures that credit card issuers typically provide to potential and new cardholders had various weaknesses that reduced their usefulness to consumers. These weaknesses affecting the disclosure materials included the typical grade level required to comprehend them, their poor organization and formatting of information, and their excessive detail and length. The typical credit card disclosure documents contained content that was written at a level above that likely to be understandable by many consumers. To assess the readability of typical credit card disclosures, we contracted with a private usability consultant to evaluate the two primary disclosure documents for four popular, widely-held cards (one each from four large credit card issuers). 
The two documents were (1) a direct mail solicitation letter and application, which must include information about the costs and fees associated with the card; and (2) the cardmember agreement that contains the full range of terms and conditions applicable to the card. Through visual inspection, we determined that this set of disclosures appeared representative of the disclosures for the 28 cards we reviewed from the six largest issuers that accounted for the majority of cardholders in the United States. To determine the level of education likely needed for someone to understand these disclosures, the usability consultant used computer software programs that applied three widely used readability formulas to the entire text of the disclosures. These formulas determined the readability of written material based on quantitative measures, such as average number of syllables in words or numbers of words in sentences. For more information about the usability consultant’s analyses, see appendix I. On the basis of the usability consultant’s analysis, the disclosure documents provided to many cardholders likely were written at a level too high for the average individual to understand. The consultant found that the disclosures on average were written at a reading level commensurate with about a tenth- to twelfth-grade education. According to the consultant’s analysis, understanding the disclosures in the solicitation letters would require an eleventh-grade level of reading comprehension, while understanding the cardmember agreements would require about a twelfth-grade education. A consumer advocacy group that tested the reading level needed to understand credit card disclosures arrived at a similar conclusion. 
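Readability formulas of this kind reduce a passage to counts of sentences, words, and syllables. As an illustration (not the consultant's actual software), the widely used Flesch-Kincaid grade-level formula, one such quantitative measure, can be computed as:

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level: an estimate of the years of U.S.
    schooling needed to understand a text on first reading. Longer
    sentences and more syllables per word both raise the score.
    """
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# A dense disclosure-style passage: 25-word sentences with many
# polysyllabic terms (1.7 syllables per word on average).
grade = flesch_kincaid_grade(total_words=100, total_sentences=4, total_syllables=170)
# 0.39*25 + 11.8*1.7 - 15.59 = 14.22, i.e., college-level text.
```

In practice, readability software also automates the counting step (splitting sentences and estimating syllables), which is where implementations differ; the grade-level arithmetic itself is the standard formula shown here.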
In a comment letter to the Federal Reserve, this consumer group noted it had measured a typical passage from a change-in-terms notice on how issuers calculate finance charges using one of the readability formulas and that this passage required a twelfth-grade reading level. These disclosure documents were written such that understanding them required a higher reading level than that attained by many U.S. cardholders. For example, a nationwide assessment of the reading level of the U.S. population cited by the usability consultant indicated that nearly half of the adult population in the United States reads at or below the eighth-grade level. Similarly, to ensure that the information that public companies are required to disclose to prospective investors is adequately understandable, the Securities and Exchange Commission (SEC) recommends that such disclosure materials be written at a sixth- to eighth-grade level. In addition to the average reading level, certain portions of the typical disclosure documents provided by the large issuers required even higher reading levels to be understandable. For example, the information that appeared in cardmember agreements about annual percentage rates, grace periods, balance computation, and payment allocation methods required a minimum of a fifteenth-grade education, which is the equivalent of 3 years of college education. Similarly, text in the documents describing the interest rates applicable to one issuer’s card was written at a twenty-seventh-grade level. However, not all text in the disclosures required such high levels. For example, the consultant found that the information about fees that generally appeared in solicitation letters required only a seventh- and eighth-grade reading level to be understandable. Solicitation letters likely required lower reading levels to be understandable because they generally included more information in a tabular format than cardmember agreements.
The disclosure documents the consultant evaluated did not use designs, including effective organizational structures and formatting, that would have made them more useful to consumers. To assess the adequacy of the design of the typical large issuer credit card solicitation letters and cardmember agreements, the consultant evaluated the extent to which these disclosures adhered to generally accepted industry standards for effective organizational structures and designs intended to make documents easy to read. In the absence of best practices and guidelines specifically for credit card disclosures, the consultant used knowledge of plain language, publications design guidelines, and industry best practices and also compared the credit card disclosure documents to the guidelines in the Securities and Exchange Commission’s plain English handbook. The usability consultant used these standards to identify aspects of the design of the typical card disclosure documents that could cause consumers using them to encounter problems. On the basis of this analysis, the usability consultant concluded that the typical credit card disclosures lacked effective organization. For example, the disclosure documents frequently placed pertinent information toward the end of sentences. Figure 7 illustrates an example taken from the cardmember agreement of one of the large issuers that shows that a consumer would need to read through considerable amounts of text before reaching the important information, in this case the amount of the annual percentage rate (APR) for purchases. Best practices would dictate that important information—the amount of the APR—be presented first, with the less important information—the explanation of how the APR is determined—placed last. In addition, the disclosure documents often failed to group relevant information together. 
Although one of the disclosure formats mandated by law—the Schumer box—has been praised as having simplified the presentation of complex information, our consultant observed that the amount of information that issuers typically presented in the box compromised the benefits of using a tabular format. Specifically, the typical credit card solicitation letter, which includes a Schumer box, may be causing difficulties for consumers because related information generally is not grouped appropriately, as shown in figure 8. As shown in figure 8, information about the APR that would apply to purchases made with the card appeared in three different locations. The first row includes the current prevailing rate of the purchase APR; text that describes how the level of the purchase APR could vary according to an underlying rate, such as the prime rate, is included in the third row; and text describing how the issuer determines the level of this underlying rate is included in the footnotes. According to the consultant, grouping such related information together likely would help readers to more easily understand the material. In addition, of the four issuers whose materials were analyzed, three provided a single document with all relevant information in a single cardmember agreement, but one issuer provided the information in separate documents. For example, this issuer disclosed specific information about the actual amount of rates and fees in one document and presented information about how such rates were determined in another document. According to the readability consultant, disclosures in multiple documents can be more difficult for the reader to use because they may require more work to find information. Formatting weaknesses also likely reduced the usefulness of typical credit card disclosure documents. The specific formatting issues were as follows: Font sizes. 
According to the usability consultant’s analysis, many of the disclosure documents used font sizes that were difficult to read and could hinder consumers’ ability to find information. For example, the consultant found extensive use of small and condensed typeface in cardmember agreements and in footnotes in solicitation materials when best practices would suggest using a larger, more legible font size. Figure 9 contains an illustration of how the disclosures used condensed text that makes the font appear smaller than it actually is. Multiple consumers and consumer groups who provided comments to the Federal Reserve noted that credit card disclosures were written in a small print that reduces a consumer’s ability to read or understand the document. For example, a consumer who provided comments to the Federal Reserve referred to the text in card disclosures as “mice type.” This example also illustrates how notes to the text, which should be less important, were the same size and thus given the same visual emphasis as the text inside the box. Consumers attempting to read such disclosures may have difficulty determining which information is more important. Ineffective font placements. According to the usability consultant, some issuers’ efforts to distinguish text using different font types sometimes had the opposite effect. The consultant found that the disclosures from all four issuers emphasized large amounts of text with all capital letters and sometimes boldface. According to the consultant, formatting large blocks of text in capitals makes it harder to read because the shapes of the words disappear, forcing the reader to slow down and study each letter (see figure 10). In a comment letter to the Federal Reserve, an industry group recommended that boldfaced or capitalized text should be used discriminately, because in its experience, excessive use of such font types caused disclosures to lose all effectiveness. 
SEC’s guidelines for producing clear disclosures contain similar suggestions. Selecting text for emphasis. According to the usability consultant, most of the disclosure documents unnecessarily emphasized specific terms. Inappropriate emphasis of such material could distract readers from more important messages. Figure 11 contains a passage from one cardmember agreement that the readability consultant singled out for its emphasis of the term “periodic finance charge,” which is repeated six times in this example. According to the consultant, the use of boldface and capitalized text calls attention to the word, potentially requiring readers to work harder to understand the entire passage’s message. Use of headings. According to the usability consultant, disclosure documents from three of the four issuers analyzed contained headings that were difficult to distinguish from surrounding text. Headings, according to the consultant, provide a visual hierarchy to help readers quickly identify information in a lengthy document. Good headers are easy to identify and use meaningful labels. Figure 12 illustrates two examples of how the credit card disclosure documents failed to use headings effectively. In the first example, the headings contained an unnecessary string of numbers that the consultant found would make locating a specific topic in the text more difficult. As a result, readers would need to actively ignore the string of numbers until the middle of the line to find what they wanted. The consultant noted that such numbers might be useful if this document had a table of contents that referred to the numbers, but it did not. In the second example, the consultant noted that a reader’s ability to locate information using the headings in this document was hindered because the headings were not made more visually distinct, but instead were aligned with other text and printed in the same type size as the text that followed. 
As a result, these headings blended in with the text. Furthermore, the consultant noted that because the term “Annual Percentage Rates” was given the same visual treatment as the two headings in the example, finding headings quickly was made even more difficult. In contrast, figure 12 also shows an example that the consultant identified in one of the disclosure documents that was an effective use of headings. Presentation techniques. According to the usability consultant, the disclosure documents analyzed did not use presentation techniques, such as tables, bulleted lists, and graphics, that could help to simplify the presentation of complicated concepts, especially in the cardmember agreements. Best practices for document design suggest using tables and bulleted lists to simplify the presentation of complex information. Instead, the usability consultant noted that all the cardmember agreements reviewed almost exclusively employed undifferentiated blocks of text, potentially hindering clear communication of complex information, such as the multiple-step procedures issuers use for calculating a cardholder’s minimum required payment. Figure 13 below presents two samples of text from different cardmember agreements describing how minimum payments are calculated. According to the consultant, the sample that used a bulleted list was easier to read than the one formatted as a paragraph. Also, an issuer stated in a letter to the Federal Reserve that their consumers have welcomed the issuer’s use of bullets to format information, emphasizing the concept that the visual layout of information either facilitates or hinders consumer understanding. The content of typical credit card disclosure documents generally was overly complex and presented in too much detail, such as by using unfamiliar or complex terms to describe simple concepts. 
For example, the usability consultant identified one cardmember agreement that used the term “rolling consecutive twelve billing cycle period” instead of saying “over the course of the next 12 billing statements” or “next 12 months”—if that was appropriate. Further, a number of consumers, consumer advocacy groups, and government and private entities that have provided comments to the Federal Reserve agreed that typical credit card disclosures are written in complex language that hinders consumers’ understanding. For example, a consumer wrote that disclosure documents were “loaded with booby traps designed to trip consumers, and written in intentionally impenetrable and confusing language.” One of the consumer advocacy groups stated the disclosures were “full of dense, impenetrable legal jargon that even lawyers and seasoned consumer advocates have difficulty understanding.” In addition, the consultant noted that many of the disclosures, including solicitation letters and cardmember agreements, contained overly long and complex sentences that increase the effort a reader must devote to understanding the text. Figure 14 contains two examples of instances in which the disclosure documents used uncommon words and phrases to express simple concepts. In addition, the disclosure documents regularly presented too much or irrelevant detail. According to the usability consultant’s analysis, the credit card disclosures often contained superfluous information. For example, figure 15 presents an example of text from one cardmember agreement that described the actions the issuer would take if its normal source for the rate information used to set its variable rates—The Wall Street Journal—were to cease publication. Including such an arguably unimportant detail lengthens this disclosure and makes it more complex. According to SEC best practices for creating clear disclosures, disclosure documents are more effective when they adhere to the rule that less is more.
The usability consultant indicated that, by omitting unnecessary details from disclosure documents, issuers would make consumers more likely to read and understand the information the documents contain. Many of the credit cardholders that were tested and interviewed as part of our review exhibited confusion over various fees, practices, and other terms that could affect the cost of using their credit cards. To understand how well consumers could use typical credit card disclosure documents to locate and understand information about card fees and other practices, the usability consultant with whom we contracted used a sample of cardholders to perform a usability assessment of the disclosure documents from the four large issuers. As part of this assessment, the consultant conducted one-on-one sessions with a total of 12 cardholders so that each set of disclosures, which included a solicitation letter and a cardmember agreement, was reviewed by 3 cardholders. Each of these cardholders was asked to locate information about fee levels and rates, the circumstances in which they would be imposed, and information about changes in card terms. The consultant also tested the cardholders’ ability to explain various practices used by the issuer, such as the process for determining the amount of the minimum monthly payment, by reading the disclosure documents. Although the results of the usability testing cannot be used to make generalizations about all cardholders, the consultant selected cardholders according to the age, education level, and income demographics of the U.S. adult population so that the cardholders tested broadly reflected the general population. In addition, as part of this review, we conducted one-on-one interviews with 112 cardholders to learn about consumer behavior and knowledge about various credit card terms and practices. Although we also selected these cardholders to reflect the demographics of the U.S.
adult population, with respect to age, education level, and income, the results of these interviews cannot be generalized to the population of all U.S. cardholders. Based on the work with consumers, specific aspects of credit card terms that apparently were not well understood included: Default interest rates. Although issuers can penalize cardholders that violate the terms of the card, such as by making late payments, by increasing the interest rates in effect on the cardholder’s account to rates as high as 30 percent or more, only about half of the cardholders that the usability consultant tested were able to use the typical credit card disclosure documents to successfully identify the default rate and the circumstances that would trigger rate increases for these cards. In addition, the usability consultant observed that the cardholders could not identify this information easily. Many also were unsure of their answers, especially when rates were expressed as a “prime plus” number, indicating the rate varied based on the prime rate. Locating information in the typical cardmember agreement was especially difficult for cardholders, as only 3 of 12 cardholders were able to use such documents to identify the default interest rate applicable to the card. More importantly, only about half of the cardholders tested using solicitation letters were able to accurately determine what actions could potentially cause the default rate to be imposed on these cards. Other penalty rate increases. Although card issuers generally reserve the right to seek to raise a cardholder’s rate in other situations, such as when a cardholder makes a late payment to another issuer’s credit card (even if the cardholder has not defaulted on the cardmember agreement), about 71 percent of the 112 cardholders we interviewed were unsure or did not believe that issuers could increase their rates in such a case.
In addition, about two-thirds of cardholders we interviewed were unaware or did not believe that a drop in their credit score could cause an issuer to seek to assess higher interest rates on their account. Late payment fees. According to the usability assessment, many of the cardholders had trouble using the disclosure documents to correctly identify what would occur if a payment were to be received after the due date printed in the billing statement. For example, nearly half of the cardholders were unable to use the cardmember agreement to determine whether a payment would be considered late based on the date the issuer receives the payment or the date the payment was mailed or postmarked. Additionally, the majority of the 112 cardholders we interviewed exhibited confusion over late fees: 52 percent indicated that they have been surprised when their card company applied a fee or penalty to their account. Using a credit card to obtain cash. Although the cardholders tested by the consultant generally were able to use the disclosures to identify how a transaction fee for a cash advance would be calculated, most were unable to accurately use this information to determine the transaction fee for withdrawing funds, usually because they neglected to consider the minimum dollar amount, such as $5 or $10, that would be assessed. Grace periods. Almost all 12 cardholders in the usability assessment had trouble using the solicitation letters to locate and define the grace period, the period during which a cardholder is not charged interest on a balance. Instead, many cardholders incorrectly indicated that the grace period was when their lower, promotional interest rates would expire. Others incorrectly indicated that it was the amount of time after the monthly bill’s due date that a cardholder could submit a payment without being charged a late fee. Balance computation method.
Issuers use various methods to calculate interest charges on outstanding balances, but only 1 of the 12 cardholders the usability consultant tested correctly described average daily balance, and none of the cardholders were able to describe two-cycle average daily balance accurately. At least nine letters submitted to the Federal Reserve in connection with its review of credit card disclosures noted that few consumers understand balance computation methods as stated in disclosure documents. Perhaps as a result of the weaknesses previously described, cardholders generally avoid using the documents issuers provide with a new card to improve their understanding of fees and practices. For example, many of the cardholders interviewed as part of this report noted that the length, format, and complexity of disclosures led them to generally disregard the information contained in them. More than half (54 percent) of the 112 cardholders we interviewed indicated they read the disclosures provided with a new card either not very closely or not at all. Instead, many cardholders said they would call the issuer’s customer service representatives for information about their card’s terms and conditions. Cardholders also noted that the ability of issuers to change the terms and conditions of a card at any time led them to generally disregard the information contained in card disclosures. Regulation Z allows card issuers to change the terms of credit cards provided that issuers notify cardholders in writing at least 15 days before the change takes effect. As a result, the usability consultant observed that some participants were dismissive of the information in the disclosure documents because they were aware that issuers could change anything. With liability concerns and outdated regulatory requirements seemingly explaining the weaknesses in card disclosures, the Federal Reserve has begun efforts to review its requirements for credit card disclosures.
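The balance computation methods that confused the tested cardholders can be made concrete with a short sketch. The 30-day cycle, the dollar balances, and the 18 percent rate below are hypothetical assumptions for illustration, not figures from any actual cardmember agreement, and the two-cycle method is rendered here in simplified form; actual issuer methods vary by agreement.

```python
# Illustrative sketch of the balance computation methods discussed above.
# All rates, balances, and cycle lengths are hypothetical assumptions.

def average_daily_balance(daily_balances):
    """Average of the account balance at the end of each day in a cycle."""
    return sum(daily_balances) / len(daily_balances)

def cycle_interest(daily_balances, apr, days_in_year=365):
    """Interest for one cycle: average daily balance x daily rate x days."""
    daily_rate = apr / days_in_year
    return average_daily_balance(daily_balances) * daily_rate * len(daily_balances)

# Hypothetical cycle: a $1,000 balance for 10 days, then a $500 purchase
# raises it to $1,500 for the remaining 20 days of a 30-day cycle.
current = [1000.0] * 10 + [1500.0] * 20
print(round(average_daily_balance(current), 2))   # 1333.33
print(round(cycle_interest(current, 0.18), 2))    # 19.73

# A two-cycle (double-cycle) method, simplified here as averaging over the
# current and previous cycles, bases interest on a larger balance when the
# cardholder carried a balance in the prior cycle.
previous = [2000.0] * 30
print(round(average_daily_balance(previous + current), 2))   # 1666.67
```

The sketch shows why the two-cycle method can produce a higher interest charge than the single-cycle method even when the current cycle's activity is identical.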
Industry participants have advocated various ways in which the Federal Reserve can act to improve these disclosures and otherwise assist cardholders. Several factors may help explain why typical credit card disclosures exhibit weaknesses that reduce their usefulness to cardholders. First, issuers make decisions about the content and format of their disclosures to limit potential legal liability. Issuer representatives told us that the disclosures made in credit card solicitations and cardmember agreements are written for legal purposes and in language that consumers generally could not understand. For example, representatives for one large issuer told us they cannot always state information in disclosures clearly because the increased potential that simpler statements would be misinterpreted would expose them to litigation. Similarly, a participant in a symposium on credit card disclosures said that disclosures typically became lengthier after the issuance of court rulings on consumer credit issues. Issuers can attempt to reduce the risk of civil liability based on their disclosures by closely following the formats that the Federal Reserve has provided in its model forms and other guidance. According to the regulations that govern card disclosures, issuers acting in good faith compliance with any interpretation issued by a duly authorized official or employee of the Federal Reserve are afforded protection from liability. Second, the regulations governing credit card disclosures have become outdated. As noted earlier in this report, TILA and Regulation Z, which implements the act’s provisions, are intended to ensure that consumers have adequate information about potential costs and other applicable terms and conditions to make appropriate choices among competing credit cards. The most recent comprehensive revisions to Regulation Z’s open-end credit rules occurred in 1989 to implement the provisions of the Fair Credit and Charge Card Disclosure Act.
As we have found, the features and cost structures of credit cards have changed considerably since then. An issuer representative told us that current Schumer box requirements are not as useful in presenting the more complicated structures of many current cards. For example, the representative noted that the box does not easily accommodate information about the various cardholder actions that could trigger rate increases, which the representative argued is now important information for consumers to know when shopping for credit. As a result, some of the specific requirements of Regulation Z that are intended to ensure that consumers have accurate information instead may be diminishing the usefulness of these disclosures. Third, the guidance that the Federal Reserve provides issuers may not be consistent with guidelines for producing clear, written documents. Based on our analysis, many issuers appear to adhere to the formats and model forms that the Federal Reserve staff included in the Official Staff Interpretations of Regulation Z, which are prepared to help issuers comply with the regulations. For example, the model forms present text about how rates are determined in footnotes. However, as discussed previously, not grouping related information undermines the usability of documents. The Schumer box format requires a cardholder to look in several places, such as in multiple rows in the table and in notes to the table, for information about related aspects of the card. Similarly, the Federal Reserve’s model form for the Schumer box recommends that the information about the transaction fee and interest rate for cash advances be disclosed in different areas. Finally, the way that issuers have implemented regulatory guidance may have contributed to the weaknesses typical disclosure materials exhibited.
For example, in certain required disclosures, the terms “annual percentage rate” and “finance charge,” when used with a corresponding amount or percentage rate, are required to be more conspicuous than any other required disclosures. Staff guidance suggests that such terms may be made more conspicuous by, for example, capitalizing these terms when other disclosures are printed in lower case, displaying these terms in larger type relative to other disclosures, putting them in boldface print, or underlining them. Our usability consultant’s analysis found that card disclosure documents that followed this guidance were less effective because they placed an inappropriate emphasis on terms. As shown previously in figure 11, the use of bold and capital letters to emphasize the term “finance charge” in the paragraph unnecessarily calls attention to that term, potentially distracting readers from information that is more important. The excerpt shown in figure 11 is from an initial disclosure document, which, according to Regulation Z, is subject to the “more conspicuous” rule requiring emphasis of the terms “finance charge” and “annual percentage rate.” With the intention of improving credit card disclosures, the Federal Reserve has begun efforts to develop new regulations. According to its 2004 notice seeking public comments on Regulation Z, the Federal Reserve hopes to address the length and complexity of disclosures and the superfluous information they contain and to produce new disclosures that will be more useful in helping consumers compare credit products. After the passage of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (Bankruptcy Act) in October of that year, which included amendments to TILA, the Federal Reserve sought additional comments from the public to prepare to implement new disclosure requirements, including disclosures intended to advise consumers of the consequences of making only minimum payments on credit cards.
According to Federal Reserve staff, new credit card disclosure regulations may not be in effect until sometime in 2007 or 2008 because of the time required to conduct consumer testing, modify the existing regulations, and then seek comment on the revised regulation. Industry participants and others have provided input to assist the Federal Reserve in this effort. Based on the interviews we conducted, documents we reviewed, and our analysis of the more than 280 comment letters submitted to the Federal Reserve, issuers, consumer groups, and others provided various suggestions to improve the content and format of credit card disclosures, including: Reduce the amount of information disclosed. Some industry participants said that some of the information currently presented in the Schumer box could be removed because it is too complicated to disclose meaningfully or matters less than other, arguably more important, credit terms when choosing among cards. Such information included the method for computing balances and the amount of the minimum finance charge (the latter because it is typically so small, about 50 cents in 2005). Provide a shorter document that summarizes key information. Some industry participants advocated that all key information that could significantly affect a cardholder’s costs be presented in a short document that consumers could use to readily compare across cards, with all other details included in a longer document. For example, although the Schumer box includes several key pieces of information, it does not include other information that could be as important for consumer decisions, such as what actions could cause the issuer to raise the interest rate to the default rate. Revise disclosure formats to improve readability.
Various suggestions were made to improve the readability of card disclosures, including making more use of tables of contents, making labels and headings more prominent, and presenting more information in tables instead of in text. Disclosure documents also could use consistent wording that would allow for better comparison of terms across cards. Some issuers and others also told us that the new regulations should allow for more flexibility in card disclosure formats. Regulations mandating formats and font sizes were seen as precluding issuers from presenting information in more effective ways. For example, one issuer already has conducted market research and developed new formats for the Schumer box that it says are more readable and contain new information important to choosing cards in today’s credit card environment, such as cardholder actions that would trigger late fees or penalty interest rate increases. In addition to suggestions about content, obtaining the input of consumers, and possibly other professionals, was also seen as an important way to make any new disclosures more useful. For example, participants in a Federal Reserve Bank symposium on credit card disclosures recommended that the Federal Reserve obtain the input of marketers, researchers, and consumers as part of developing new disclosures. OCC staff suggested that the Federal Reserve also employ qualitative research methods such as in-depth interviews with consumers and others and that it conduct usability testing. Consumer testing can validate the effectiveness or measure the comprehension of messages and information, and detect document design problems. Many issuers are using some form of market research to test their disclosure materials and have advocated improving disclosures by seeking the input of marketers, researchers, and consumers. SEC also has recently used consumer focus groups to test the format of new disclosures related to mutual funds.
According to an SEC staff member who participated in this effort, their testing provided them with valuable information on what consumers liked and disliked about some of the initial forms that the regulator had drafted. In some cases, they learned that information that SEC staff had considered necessary to include was not seen as important by consumers. As a result, they revised the formats for these disclosures substantially to make them simpler and may present more information graphically rather than in text. According to Federal Reserve staff, they have begun to involve consumers in the development of new credit card disclosures and have already conducted some consumer focus groups. In addition, they have contracted with a design consultant and a market research firm to help them develop some disclosure formats that they can then use in one-on-one testing with consumers. However, the Federal Reserve staff told us they recognize the challenge of designing disclosures that include all key information in a clear manner, given the complexity of credit card products and the different ways in which consumers use credit cards. The number of consumers filing for bankruptcy has risen more than sixfold over the past 25 years, and various factors have been cited as possible explanations. While some researchers have pointed to increases in total debt or credit card debt in particular, others found that debt burdens and other measures of financial distress had not increased and thus cite other factors, such as a general decline in the stigma of going bankrupt or the potentially increased costs of major life events such as health problems or divorce. Some critics of the credit card industry have cited penalty interest and fees as leading to increased financial distress; however, no comprehensive data existed to determine the extent to which these charges were contributing to consumer bankruptcies.
Data provided by the six largest card issuers indicated that unpaid interest and fees represented a small portion of the amounts owed by cardholders that filed for bankruptcy; however, these data alone were not sufficient to determine any relationship between the charges and bankruptcies filed by cardholders. According to U.S. Department of Justice statistics, consumer bankruptcy filings generally rose steadily from about 287,000 in 1980 to more than 2 million as of December 31, 2005, which represents about a 609 percent increase over the last 25 years. Researchers have cited a number of factors as possible explanations for the long-term trend. The total debt of American households is composed of mortgages on real estate, which account for about 80 percent of the total, and consumer credit debt, which includes revolving credit, such as balances owed on credit cards, and nonrevolving credit, primarily consisting of auto loans. According to Federal Reserve statistics, consumers’ use of debt has expanded over the last 25 years, increasing more than sevenfold from $1.4 trillion in 1980 to about $11.5 trillion in 2005. Some researchers pointed to this rise in overall indebtedness as contributing to the rise in bankruptcies. For example, a 2000 Congressional Budget Office summary of bankruptcy research noted that various academic studies have argued that consumer bankruptcies are either directly or indirectly caused by heavy consumer indebtedness. Rather than total debt, some researchers and others argue that the rise in bankruptcies is related to the rise in credit card debt in particular. According to the Federal Reserve’s survey of consumer debt, the amount of credit card debt reported as outstanding rose from about $237 billion to more than $802 billion—a 238 percent increase between 1990 and 2005.
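The growth figures cited above follow from a standard percentage-change calculation; a minimal check using the rounded dollar amounts from the text:

```python
# Percentage increase between two values, applied to the growth figures
# cited in the text (amounts are the rounded figures given there).

def pct_increase(old, new):
    return (new - old) / old * 100

# Credit card debt: about $237 billion in 1990 to about $802 billion in 2005.
print(round(pct_increase(237, 802)))   # 238 (percent)

# Total household debt: $1.4 trillion in 1980 to about $11.5 trillion in 2005,
# a ratio of more than 8x, consistent with "more than sevenfold" growth.
print(round(11.5 / 1.4, 1))            # 8.2
```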
One academic researcher noted that bankruptcies and charge-offs by banks on credit card accounts grew along with credit card debt during the 1973 to 1996 period he examined. According to some consumer groups, the growth of credit card debt is one of the primary explanations of the increased prevalence of bankruptcies in the United States. For example, one group noted in a 2005 testimony before Congress that growth of credit card debt—particularly among lower- and moderate-income households, consumers with poor credit scores, college students, older Americans, and minorities—was contributing to the rise in bankruptcies. However, other evidence indicates that increased indebtedness has not severely affected the financial condition of U.S. households in general. For example: Some researchers note that the ability of households to make payments on debt appears to be keeping pace. For example, total household debt levels as a percentage of income have remained relatively constant since the 1980s. According to the Federal Reserve, the aggregate debt burden ratio—which covers monthly aggregate required payments of all households on mortgage debt and both revolving and nonrevolving consumer loans relative to the aggregate monthly disposable income of all households—for U.S. households has been above 13 percent in the last few years but generally fluctuated between 11 percent and 14 percent from 1990 to 2005, similar to the levels observed during the 1980s. According to one researcher, although the debt burden ratio has risen since the 1980s, the increase has been gradual and therefore cannot explain the sixfold increase in consumer bankruptcy filings over the same period. Credit card debt remains a small portion of overall household debt, even among households with the lowest income levels.
According to the Federal Reserve, credit card balances have declined from 3.9 percent of total household debt in 1995 to just 3.0 percent as of 2004. The proportion of households that could be considered to be in financial distress does not appear to be increasing significantly. According to the Federal Reserve Board’s Survey of Consumer Finances, the proportion of households that could be considered to be in financial distress—those that report debt-to-income ratios exceeding 40 percent and that have had at least one delinquent payment within the last 60 days—was relatively stable between 1995 and 2004. Further, the proportion of the lowest-income households exhibiting greater levels of distress was lower in 2004 than it was in the 1990s. With the effect of increased debt unclear, some researchers say that other factors may better explain the surge in consumer bankruptcy filings over the past 25 years. For example, the psychological stigma of declaring bankruptcy may have lessened. One academic study examined a range of variables that measured the credit risk (risk of default) of several hundred thousand credit card accounts and found that because the bankruptcy rate for the accounts was higher than the credit-risk variables could explain, the higher rate must be the result of a reduced level of stigma associated with filing. However, others have noted that reliably measuring stigma is difficult. Some credit card issuers and other industry associations also have argued that the pre-2005 bankruptcy code was too debtor-friendly and created an incentive for consumers to borrow beyond their ability to repay and then file for bankruptcy. In addition to the possibly reduced stigma, some academics, consumer advocacy groups, and others noted that the normal life events that reduce incomes or increase expenses for households may have a more serious effect today.
Events that can reduce household incomes include job losses, pay cuts, or having a full-time position converted to part-time work. With increasing health care costs, medical emergencies can affect household expenses and debts more significantly than in the past, and, with more families relying on two incomes, so can divorces. As a result, one researcher explains that while households have always faced these risks, their effect today may be more severe, which could explain higher bankruptcy rates. Researchers who assert that life events are the primary explanation for bankruptcy filings say that the role played by credit cards can vary. They acknowledged that credit card debt can be a contributing factor to a bankruptcy filing if a person’s income is insufficient to meet all financial obligations, including payments to credit card issuers. For example, some individuals experiencing an adverse life event use credit cards to provide additional funds to satisfy their financial obligations temporarily but ultimately exhaust their ability to meet all obligations. However, because the number of people that experience financially troublesome life events likely exceeds the number of people who file for bankruptcy, credit cards in other cases may serve as a critical temporary source of funding that helps a person avert a filing until that person’s income recovers or expenses diminish. (Appendix II provides additional detail about the factors that may have affected the rise in consumer bankruptcy filings and its relationship with credit card debt.) With very little information available on the financial condition of individuals filing for bankruptcy, assessing the role played by credit card debt, including penalty interest and fees, is difficult.
According to Department of Justice officials who oversee bankruptcy trustees in most bankruptcy courts, the documents submitted as part of a bankruptcy filing show the total debt owed to each card issuer but not how much of this total consists of unpaid principal, interest, or fees. Similarly, these Justice officials told us that the information that credit card issuers submit when their customers reaffirm the debts owed to them—known as proofs of claim—also indicates only the total amount owed. Likewise, the amount of any penalty interest or fees owed as part of an outstanding credit card balance is generally not required to be specified when a credit card issuer seeks to obtain a court judgment that would require payment from a customer as part of a collection case. Although little comprehensive data exist, some consumer groups and others have argued that penalty interest and fees materially harm the financial condition of some cardholders, including those that later file for bankruptcy. Some researchers who study credit card issues argue that high interest rates (applicable to standard purchases) for higher-risk cardholders, who are also frequently lower-income households, along with penalty and default interest rates and fees, contribute to more consumer bankruptcy filings. Another researcher who has studied issues relating to credit cards and bankruptcy asserted that consumers focus too much on the introductory purchase interest rates when shopping for credit cards and, as a result, fail to pay close attention to penalty interest rates, default clauses, and other fees that may significantly increase their costs later. According to this researcher, it is doubtful that penalty fees (such as late fees and over-limit fees) significantly affect cardholders’ debt levels, but accrued interest charges—particularly if a cardholder is being assessed a high penalty interest rate—can significantly worsen a cardholder’s financial distress.
Some consumer advocacy groups and academics say that the credit card industry practice of raising cardholder interest rates for default or increasingly risky behavior likely has contributed to some consumer bankruptcy filings. According to these groups, cardholders whose rates are raised under such practices can find it more difficult to reduce their credit card debt and experience more rapid declines in their overall financial conditions as they struggle to make the higher payments that such interest rates may entail. As noted earlier in this report, card issuers have generally ceased practicing universal default, although representatives for four of the six issuers told us that they might increase their cardholder’s rates if they saw indications that the cardholder’s risk has increased, such as how well they were making payments to other creditors. In such cases, the card issuers said they notify the cardholders in advance, by sending a change in terms notice, and provide an option to cancel the account but keep the original terms and conditions while paying off the balance. Some organizations also have criticized the credit card industry for targeting lower-income households that they believe may be more likely to experience financial distress or file for bankruptcy. One of the criticisms these organizations have made is that credit card companies have been engaging in bottom-fishing by providing increasing amounts of credit to riskier lower-income households that, as a result, may incur greater levels of indebtedness than appropriate. For example, an official from one consumer advocacy group testified in 2005 that card issuers target lower-income and minority households and that this democratization of credit has had serious negative consequences for these households, placing them one financial emergency away from having to file for bankruptcy.
Some consumer advocacy group officials and academics noted that card issuers market high-cost cards, with higher interest rates and fees, to customers with poor credit histories—called subprime customers—including some just coming out of bankruptcy. However, as noted earlier, Federal Reserve survey data indicate that the proportion of lower-income households—those with incomes below the fortieth percentile—exhibiting financial distress has not increased since 1995. In addition, in a June 2006 report that the Federal Reserve Board prepared for Congress on the relationship between credit cards and bankruptcy, it stated that credit card issuers do not solicit customers or extend credit to them indiscriminately or without assessing their ability to repay debt; rather, issuers review all received applications for risk factors. Moreover, representatives of credit card issuers argued that they do not offer credit to those likely to become bankrupt because they do not want to experience larger losses from higher-risk borrowers. Because card accounts belonging to cardholders who filed for bankruptcy account for a sizeable portion of issuers' charge-offs, card issuers do not want to acquire new customers with high credit risk who may subsequently file for bankruptcy. However, one academic researcher noted that, if card issuers could increase their revenue and profits by offering cards to more customers, including those with lower creditworthiness, they could reasonably be expected to do so until the expected losses from bankruptcies become larger than the expected additional revenues from the new customers. In examining the relationship between the consumer credit industry and bankruptcy, the Federal Reserve Board's 2006 report came to many of the same conclusions as the studies of other researchers we reviewed.
The Federal Reserve Board's report notes that despite large growth in the proportion of households with credit cards and the rise in overall credit card debt in recent decades, the debt-burden ratio and other potential measures of financial distress have not significantly changed over this period. The report also found that, while data on bankruptcy filings indicate that most filers have accumulated consumer debt and that bankruptcy filings and revolving consumer debt have risen in tandem, the decision to file for bankruptcy is complex and tends to be driven by distress arising from life events such as job loss, divorce, or uninsured illness. While the effect of credit card penalty interest charges and fees on consumer bankruptcies was unclear, such charges do hamper cardholders' ability to reduce their overall indebtedness. Generally, any penalty charges that cardholders pay would consume funds that could have been used to repay principal. Figure 16 below compares two hypothetical cardholders with identical initial outstanding balances of $2,000 who each make monthly payments of $100. The figure shows how much of the total balance each of these two cardholders pays down over the course of 12 months when penalty interest and fees apply. Specifically, cardholder A (1) is assessed a late payment fee in three of those months and (2) has his interest rate increased to a penalty rate of 29 percent after 6 months, while cardholder B does not experience any fees or penalty interest charges. At the end of 12 months, the penalty charges and fees result in cardholder A paying down $260, or 27 percent, less of the total balance owed than cardholder B, who makes on-time payments for the entire period. In reviewing academic literature, hearings, and comment letters to the Federal Reserve, we identified some court cases, including some involving the top six issuers, indicating that cardholders paid large amounts of penalty interest and fees.
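The figure 16 comparison can be sketched as a simple month-by-month simulation. The report specifies only the $2,000 starting balance, the $100 monthly payment, and the 29 percent penalty rate; the 15 percent base APR, the $39 late fee, the timing of the fees, and the simple monthly compounding below are all illustrative assumptions.

```python
# Sketch of the figure 16 scenario: two cardholders start with a
# $2,000 balance and pay $100 a month. Cardholder A is assessed a late
# fee (assumed $39) in three months and a 29 percent penalty APR after
# month 6; cardholder B incurs no penalties.

def simulate(balance, payment, months, base_apr,
             penalty_apr=None, penalty_start=None,
             late_fee_months=(), late_fee=39.0):
    """Return the balance remaining after `months` monthly cycles."""
    for month in range(1, months + 1):
        apr = base_apr
        if penalty_start is not None and month > penalty_start:
            apr = penalty_apr                  # penalty rate kicks in
        balance += balance * apr / 12          # monthly finance charge
        if month in late_fee_months:
            balance += late_fee                # penalty fee assessed
        balance -= payment                     # fixed monthly payment
    return balance

bal_a = simulate(2000, 100, 12, base_apr=0.15, penalty_apr=0.29,
                 penalty_start=6, late_fee_months=(2, 5, 8))
bal_b = simulate(2000, 100, 12, base_apr=0.15)

# Cardholder A ends the year with a higher remaining balance, i.e.,
# less principal paid down, than cardholder B.
print(round(bal_a, 2), round(bal_b, 2))
```

Under these assumed terms the gap is smaller than the $260 shown in figure 16, which reflects the issuers' actual fee schedules; the sketch only illustrates the direction of the effect.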
Among the cases we reviewed:
• In a collections case in Ohio, the $1,963 balance on one cardholder's credit card grew by 183 percent to $5,564 over 6 years, despite the cardholder making few new purchases. According to the court's records, although the cardholder made payments totaling $3,492 over this period, the cardholder's balance grew as the result of fees and interest charges. According to the court's determinations, between 1997 and 2003, the cardholder was assessed a total of $9,056, including $1,518 in over-limit fees, $1,160 in late fees, $369 in credit insurance, and $6,009 in interest charges and other fees. Although the card issuer had sued to collect, the judge rejected the issuer's collection demand, noting that the cardholder was the victim of unreasonable, unconscionable practices.
• In a June 2004 bankruptcy case filed in the U.S. Bankruptcy Court for the Eastern District of Virginia, the debtor objected to the proofs of claim filed by two companies that had been assigned the debt outstanding on two of the debtor's credit cards. One of the assignees submitted monthly statements for the credit card account it had assumed. The court noted that over a 2-year period (during which the balance on the account increased from $4,888 to $5,499), the debtor made only $236 in purchases on the account while making $3,058 in payments, all of which had gone to pay finance charges, late charges, over-limit fees, bad check fees, and phone payment fees.
• In a bankruptcy court case filed in July 2003 in North Carolina, 18 debtors filed objections to one card issuer's claims of the amounts owed on their credit cards. In response to an inquiry by the judge, the card issuer provided data for these accounts showing that, in the aggregate, 57 percent of the amounts owed on these 18 accounts at the time of their bankruptcy filings represented interest charges and fees.
However, the high percentage of interest and fees on these accounts may stem from the small size of the principal balances, as some were as low as $95 and none was larger than $1,200. Regulatory interagency guidance published in 2003 for all depository institutions that issue credit cards may have reduced the potential for cardholders who continue to make minimum payments to experience increasing balances. In this guidance, regulators suggested that card issuers require minimum repayment amounts large enough that a cardholder's current balance would be paid off, or amortized, over a reasonable amount of time. In the past, some issuers' minimum monthly payment formulas were such that a full payment may have resulted in little or no principal being paid down, particularly if the cardholder also was assessed any fees during a billing cycle. In such cases, these cardholders' outstanding balances would increase (or negatively amortize). In response to this guidance, some card issuers we interviewed indicated that they have been changing their minimum monthly payment formulas to ensure that credit card balances will be paid off over a reasonable period by including at least some amount of principal in each payment due. Representatives of card issuers also told us that 2003 regulatory guidance addressing credit card workout programs (which allow a distressed cardholder's account to be closed and repaid on a fixed repayment schedule) and other forbearance practices may help cardholders experiencing financial distress avoid fees. In this guidance, the regulators stated that (1) any workout program offered by an issuer should be designed to have cardholders repay credit card debt within 60 months and (2) to meet this time frame, interest rates and penalty fees may have to be substantially reduced or eliminated so that principal can be repaid.
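The 60-month repayment target can be illustrated with the standard loan-amortization formula. The $6,200 balance (the average charged-off balance reported elsewhere in this report) and the reduced 6 percent APR below are illustrative assumptions; the guidance itself sets only the time frame.

```python
# Sketch of the workout guidance's 60-month repayment target: the
# fixed monthly payment needed to amortize a balance within 60 months.

def workout_payment(balance, annual_rate, months=60):
    """Fixed monthly payment that repays `balance` over `months` cycles."""
    if annual_rate == 0:
        return balance / months            # interest eliminated entirely
    r = annual_rate / 12                   # monthly periodic rate
    return balance * r / (1 - (1 + r) ** -months)

# With interest eliminated, a $6,200 balance amortizes at a little over
# $103 a month over 60 months; at an assumed reduced 6 percent APR the
# required payment rises to roughly $120.
zero_rate_payment = workout_payment(6200, 0.0)
reduced_rate_payment = workout_payment(6200, 0.06)
print(round(zero_rate_payment, 2), round(reduced_rate_payment, 2))
```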
Under this guidance, card issuers are expected to stop imposing penalty fees and interest charges on delinquent or hardship card accounts enrolled in repayment workout programs. According to this guidance, issuers also can negotiate settlement agreements with cardholders by forgiving a portion of the amount owed. In exchange, a cardholder can be expected to pay the remaining balance either in a lump-sum payment or by amortizing it over a period of several months. Staff from OCC and an association of credit counselors told us that, since the issuance of this guidance, they have noticed that card issuers are increasingly reducing and waiving fees for cardholders who get into financial difficulty. OCC officials also indicated that issuers prefer to facilitate repayment of principal when borrowers adopt debt management plans and tend to reduce or waive fees so the accounts can be amortized. On the other hand, FDIC staff indicated that the criteria for waiving fees and penalties are not publicly disclosed to cardholders. These staff noted that most fee waivers occur after cardholders call and complain to the issuer and are handled on a case-by-case basis. Card issuers generally charge off credit card loans that are no longer collectible because the accounts are in default, either for missing a series of payments or because of a bankruptcy filing. According to the data provided by the six largest issuers, the number of accounts that these issuers collectively had to charge off as a result of cardholders filing for bankruptcy ranged from about 1.3 million to 1.6 million annually between 2003 and 2005. Collectively, these represented about 1 percent of the six issuers' active accounts during this period. Also, about 60 percent of the accounts were 2 or more months delinquent at the time of the charge-off.
Most of the cardholders whose accounts were charged off as the result of a bankruptcy owed small amounts of fees and interest charges at the time of their bankruptcy filing. According to the data the six issuers provided, the average account that they charged off in 2005 owed approximately $6,200 at the time that bankruptcy was filed. Of this amount, the issuers reported that on average 8 percent represented unpaid interest charges; 2 percent unpaid fees, including any unpaid penalty charges; and about 90 percent principal. However, these data do not provide complete information about the extent to which the financial condition of the cardholders may have been affected by penalty interest and fee charges. First, the amounts that these issuers reported to us as interest and fees due represent only the unpaid amounts that were owed at the time of bankruptcy. According to representatives of the issuers we contacted, each of their firms allocates the amount of any payment received from their customers first to any outstanding interest charges and fees, then allocates any remainder to the principal balance. As a result, the amounts owed at the time of bankruptcy would not reflect any previously paid fees or interest charges. According to representatives of these issuers, data system and recordkeeping limitations prevented them from providing us the amounts of penalty interest and fees assessed on these accounts in the months prior to the bankruptcy filings. Furthermore, the data do not include information on all of the issuers’ cardholders who went bankrupt, but only those whose accounts the issuers charged off as the result of a bankruptcy filing. The issuers also charge off the amounts owed by customers who are delinquent on their payments by more than 180 days, and some of those cardholders may subsequently file for bankruptcy. 
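The payment-allocation order the issuers described can be sketched as a small function. The dollar amounts below are illustrative, not figures from any issuer's records.

```python
# Sketch of the allocation order the issuers described: each payment
# is applied first to outstanding interest charges and fees, with any
# remainder going to the principal balance.

def allocate_payment(payment, interest_and_fees_due, principal):
    """Return the remaining (interest_and_fees_due, principal) after
    applying one payment in the order the issuers described."""
    to_charges = min(payment, interest_and_fees_due)
    interest_and_fees_due -= to_charges
    remainder = payment - to_charges
    principal -= min(remainder, principal)
    return interest_and_fees_due, principal

# A $100 payment against $84 of accrued interest and fees leaves only
# $16 to reduce a $6,200 principal balance.
print(allocate_payment(100.0, 84.0, 6200.0))  # → (0.0, 6184.0)
```

This ordering is why amounts owed at the time of bankruptcy understate previously paid interest and fees: those charges are retired first out of every payment.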
Accounts charged off after more than 180 days of delinquency may have accrued larger amounts of unpaid penalty interest and fees than the accounts that were charged off for bankruptcy after being delinquent for less than 180 days, because they would have had more time to be assessed such charges. Representatives of the six issuers told us that they do not maintain records on these customers after their accounts are charged off and, in many cases, sell the accounts to collection firms. Determining the extent to which penalty interest charges and fees contribute to issuers' revenues and profits was difficult because issuers' regulatory filings and other public sources do not include such detail. On the basis of information from bank regulators and industry analysts and data reported by the five largest issuers, we estimate that the majority of issuer revenues—around 70 percent in recent years—came from interest charges, and the portion attributable to penalty rates appears to be growing. Of the remaining issuer revenues, penalty fees had increased and were estimated to represent around 10 percent of total issuer revenues. The remainder of issuer revenues came from fees that issuers receive for processing merchants' card transactions and from other types of consumer fees. The largest credit card-issuing banks, which are generally the most profitable group of lenders, have not greatly increased their profitability over the last 20 years. Determining the extent to which penalty interest and fee charges are contributing to card issuer revenues and profits is difficult because limited information is available from publicly disclosed financial information. Credit card-issuing banks are subject to various regulations that require them to publicly disclose information about their revenues and expenses. As insured commercial banks, these institutions must file reports of their financial condition, known as call reports, each quarter with their respective federal regulatory agency.
In call reports, the banks provide comprehensive balance sheets and income statements disclosing their earnings, including those from their credit card operations. Although the call reports include separate lines for interest income earned, this amount is not further segregated to show, for example, income from the application of penalty interest rates. Similarly, banks report their fee income on the call reports, but this amount includes income from all types of fees (including those related to fiduciary activities and to trading assets and liabilities) and is not further segregated to show how much a particular bank has earned from credit card late fees, over-limit fees, or insufficient payment fees. Another limitation of using call reports to assess the effect of penalty charges on bank revenues is that these reports do not include detailed information on credit card balances that a bank may have sold to other investors through a securitization. As a way of raising additional funds to lend to cardholders, many issuers combine the balances owed on large groups of their accounts and sell these receivables as part of pools of securitized assets to investors. Revenue received from cardholders whose balances have been sold is not reported in the banks' call report credit card interest and fee income categories. Instead, the banks report any gains or losses from the sale of these pooled credit card balances on their call reports as part of noninterest income. Credit card-issuing banks generally securitize more than 50 percent of their credit card balances. Although many card issuers, including most of the top 10 banks, are public companies that must file various publicly available financial disclosures on an ongoing basis with securities regulators, these filings also do not disclose detailed information about penalty interest and fees.
We reviewed the public filings of the top five issuers and found that none of their financial statements disaggregated interest income into standard interest and penalty interest charges. In addition, we found that the five banks' public financial statements also had not disaggregated their fee income into penalty fees, service fees, and interchange fees. Instead, most of these card issuers disaggregated their sources of revenue into two broad categories—interest and noninterest income. Although limited information is publicly disclosed, the majority of credit card revenue appears to have come from interest charges. According to regulators, information collected by firms that analyze the credit card industry, and data reported to us by five of the six largest issuers, net interest revenues make up as much as 71 percent of card issuers' total revenues. For example, five of the six largest issuers that provided data to us reported that the proportion of their total U.S. card operations income derived from interest charges ranged from 69 to 71 percent between 2003 and 2005. We could not precisely determine the extent to which penalty interest charges contribute to this revenue, although the amount of penalty interest that issuers have been assessing has increased. In response to our request, the six largest issuers reported the proportions of their total cardholder accounts that were assessed various rates of interest from 2003 to 2005. On the basis of our analysis of the popular cards issued by these largest issuers, all were charging, on average, default interest rates of around 27 percent. According to the data these issuers provided, the majority of cardholders paid interest rates below 20 percent, but the proportion of their cardholders who paid interest rates at or above 25 percent—which likely represent default rates—has risen from 5 percent in 2003 to 11 percent in 2005.
As shown in Figure 18, the proportion of cardholders paying between 15 and 20 percent has also increased, but an issuer representative told us that this likely was due to variable interest rates on cards rising as a result of increases in U.S. market interest rates over the last 3 years. Although we could not determine the amounts of penalty interest the card issuers received, the increasing proportion of accounts assessed rates of 25 percent or more suggests a significant increase in interest revenues. For example, a cardholder carrying a stable balance of $1,000 and paying 10 percent interest would pay approximately $100 annually, while a cardholder carrying the same stable balance but paying 25 percent would pay $250 to the card issuer annually. Although we did not obtain any information on the size of balances owed by the cardholders of the largest issuers, the proportion of the revenues these issuers received from cardholders paying penalty interest rates may also be greater than 11 percent because such cardholders may have balances larger than the $2,500 average for 2005 that the issuers reported to us. The remaining card issuer revenues largely come from noninterest sources, including merchant and consumer fees. Among these are penalty fees and other consumer fees, as well as fees that issuers receive as part of processing card transactions for merchants. Although no comprehensive data exist publicly, various sources we identified indicated that penalty fees represented around 10 percent of issuers' total revenues and had generally increased. The estimates of penalty fee income as a percentage of card issuers' total revenues ranged from 9 to 13 percent:
• Analysis of the data the top six issuers provided to us indicated that each of these issuers assessed an average of about $1.2 billion in penalty fees in 2005 on cardholders who made late payments or exceeded their credit limits. In total, these six issuers reported assessing $7.4 billion for these two penalty fees that year, about 12 percent of the $60.3 billion in total interest and consumer fees (penalty fees and fees for other cardholder services).
• According to a private firm that assists credit card banks with buying and selling portfolios of credit card balance receivables, penalty fees likely represented about 13 percent of total card issuer revenues. An official with this firm said it calculated this estimate using information from 15 of the top 20 issuers, as well as many smaller banks, that together represent up to 80 percent of the total credit card industry.
• An estimate from an industry research firm that publishes data on credit card issuer activities indicated that penalty fees represented about 9 percent of issuers' total revenues.
When a consumer makes a purchase with a credit card, the merchant selling the goods does not receive the full purchase price. When the cardholder presents the credit card to make a purchase, the merchant transmits the cardholder's account number and the amount of the transaction to the merchant's bank. The merchant's bank forwards this information to the card association, such as Visa or MasterCard, requesting authorization for the transaction. The card association forwards the authorization request to the bank that issued the card to the cardholder. The issuing bank then responds with its authorization or denial, which flows back through the merchant's bank to the merchant. After the transaction is approved, the issuing bank sends the purchase amount, less an interchange fee set by the card association, to the merchant's bank. Before crediting the merchant's account, the merchant's bank subtracts a servicing fee of its own. Interchange fees are commonly about 2 percent of the total purchase price.
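The settlement flow just described can be sketched numerically. The roughly 2 percent interchange rate comes from the text; the 0.5 percent servicing fee retained by the merchant's bank is an illustrative assumption, since the report does not give its size.

```python
# Sketch of how one card purchase settles: the issuer keeps the
# interchange fee, the merchant's bank keeps a servicing fee, and the
# merchant receives the remainder.

INTERCHANGE_RATE = 0.02   # set by the card association (per the text)
SERVICING_RATE = 0.005    # assumed merchant-bank servicing fee

def settle_purchase(amount):
    """Return (issuer interchange revenue, merchant-bank servicing fee,
    net proceeds credited to the merchant) for one card purchase."""
    interchange = amount * INTERCHANGE_RATE
    servicing = amount * SERVICING_RATE
    merchant_net = amount - interchange - servicing
    return interchange, servicing, merchant_net

# On a $100 purchase, the issuer keeps about $2.00 in interchange and
# the merchant receives the remainder after both fees are deducted.
print(settle_purchase(100.0))  # → (2.0, 0.5, 97.5)
```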
As shown in figure 19, the issuing banks generally earn about $2.00 for every $100 purchased as interchange fee revenue. In addition, the card association receives a transaction processing fee. The card associations, such as Visa or MasterCard, set the amount of these fees and also conduct other important activities, including imposing rules for issuing cards; authorizing, clearing, and settling transactions; advertising and promoting the network brand; and allocating revenues among the merchant, the merchant's bank, and the card issuer. In addition to penalty fees and interchange fees, card issuers' remaining noninterest revenues come from other consumer fees, including annual fees, cash advance fees, and balance transfer fees, as well as other sources such as credit insurance. According to estimates by industry analyst firms, such revenues likely represented about 8 to 9 percent of total issuer revenues. The profits of credit card-issuing banks, which are generally the most profitable group of lenders, have been stable over the last 7 years. A commonly used indicator of profitability is the return on assets (ROA) ratio. This ratio, which is calculated by dividing a company's income by its total assets, shows how effectively a business uses its assets to generate profits. In annual reports to Congress, the Federal Reserve provides data on the profitability of larger credit card issuers—which included 17 banks in 2004. Figure 20 shows the average ROA, using pretax income, for these large credit card issuers compared with the pretax ROA of all commercial banks during the period 1986 to 2004. In general, the large credit card issuers earned an average return of 3.12 percent over this period, more than twice the 1.49 percent average return earned by all commercial banks.
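The ROA calculation itself is simple division. The $100 asset base below is an illustrative scale; the income figures reproduce the average percentages cited for 1986 to 2004.

```python
# Sketch of the return-on-assets (ROA) ratio described above:
# pretax income divided by total assets.

def return_on_assets(pretax_income, total_assets):
    """ROA expressed as a fraction of assets."""
    return pretax_income / total_assets

card_bank_roa = return_on_assets(3.12, 100.0)   # large card issuers
all_banks_roa = return_on_assets(1.49, 100.0)   # all commercial banks

# The card issuers' average return works out to more than twice that
# of commercial banks overall, consistent with the comparison above.
print(card_bank_roa, all_banks_roa, card_bank_roa / all_banks_roa)
```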
As figure 20 shows, the ROA for larger credit card banks, although fluctuating more widely during the 1990s, has generally been stable since 1999, with returns in the 3.0 to 3.5 percent range. The return on assets for the large card issuers peaked in 1993 at 4.1 percent and had declined to 3.55 percent by 2004. In contrast, the profitability of all commercial banks generally increased over this period, rising more than 140 percent between 1986 and 2004. Similar to the data for all larger credit card issuers, data that five of the six largest issuers provided to us indicated that their profitability also was stable in the 3 years between 2003 and 2005. These five issuers reported that the ratio of their pretax earnings to their credit card balances over this 3-year period ranged from about 3.6 percent to 4.1 percent. Because of the high interest rates that issuers charge and variable rate pricing, credit card lending generally is the most profitable type of consumer lending, despite the higher rate of loan losses that issuers incur on cards. Rates charged on credit cards generally are the highest of any consumer lending category because the cards extend credit that is not secured by any collateral from the borrower. In contrast, other common types of consumer lending, such as automobile loans or home mortgages, involve the extension of a fixed amount of credit under fixed terms of repayment that is secured by the underlying asset—the car or the house—which the lender can repossess in the event of nonpayment by the borrower. Collateral and fixed repayment terms reduce lenders' risk of loss, enabling them to charge lower interest rates on such loans. In contrast, credit card loans, which are unsecured, available to large and heterogeneous populations, and repayable on flexible terms at the cardholder's convenience, present greater risks and carry commensurately higher interest rates.
For example, according to Federal Reserve statistics, the interest rate lenders charge on cards generally has averaged above 16 percent since 1980, while the rate charged on car loans has averaged around 10 percent over the same period. Borrowers may be more likely to cease making payments on their credit cards if they become financially distressed than they would be on other loans that are secured by an asset they could lose. For example, the percentage of credit card loans that banks have had to charge off averaged above 4 percent between 2003 and 2005; in contrast, charge-offs for other types of consumer loans averaged about 2 percent, and charge-offs for mortgage loans averaged less than 1 percent, during those 3 years. (App. III provides additional detail about the factors that affect the profitability of credit card issuers.) Credit cards provide various benefits to their cardholders, including serving as a convenient way to pay for goods and services and providing additional funds at rates of interest generally lower than those consumers would have paid to borrow on cards in the past. However, the penalties for late payments or other behaviors involving card use have risen significantly in recent years. Card issuers note that their use of risk-based pricing structures with multiple interest rates and fees has allowed them to offer credit cards to cardholders at costs that are commensurate with the risks presented by different types of customers, including those who previously might not have been able to obtain credit cards. On the whole, a large number of cardholders experience greater benefits from using credit cards than was previously possible—either by using their cards for transactions without incurring any direct expense or by enjoying generally lower costs for borrowing than prevailed in the past—but the habits or financial circumstances of other cardholders could result in these consumers facing greater costs than they did in the past.
The expansion and increased complexity of card rates, fees, and issuer practices have heightened the need for consumers to receive clear disclosures that allow them to more easily understand the costs of using cards. In the absence of any regulatory or legal limits on the interest or fees that cards can impose, providing consumers with adequate information on credit card costs and practices is critical to ensuring that vigorous competition among card issuers produces a market that provides the best possible rates and terms for U.S. consumers. Our work indicates that the disclosure materials that the largest card issuers typically provided under the existing regulations governing credit cards had many serious weaknesses that reduced their usefulness to the consumers they are intended to help. Although these regulations likely were adequate when card rates and terms were less complex, the disclosure materials produced under them for today's cards, which carry a multitude of terms and conditions that can affect cardholders' costs, have proven difficult for consumers to use in finding and understanding important information about their cards. Although providing some key information, current disclosures also give prominence to terms, such as the minimum finance charge or the balance computation method, that are less significant to consumers' costs and do not adequately emphasize terms such as the cardholder actions that could cause an issuer to raise a cardholder's interest rate to a high default rate. Because current disclosure materials may be less effective partly because they were designed in an era when card rates and terms were less complex, the Federal Reserve also faces the challenge of creating disclosure requirements that are flexible enough to be adjusted quickly as new card features are introduced and others become less common.
The Federal Reserve, which has adopted these regulations, has recognized these problems, and its current review of the open-end credit rules of Regulation Z presents an opportunity to improve the disclosures applicable to credit cards. Based on our work, we believe that disclosures that are simpler, better organized, and use designs and formats that comply with best practices and industry standards for readability and usability would be more effective. Our work and the experiences of other regulators also confirmed that involving experts in readability and testing documents with actual consumers can further improve any resulting disclosures. The Federal Reserve has indicated that it has begun to involve consumers in the design of new model disclosures, but it has not completed these efforts to date, and new model disclosures are not expected to be issued until 2007 or 2008. Federal Reserve staff noted that they recognize the challenge of how best to incorporate the variety of information that consumers may need to understand the costs of their cards in clear and concise disclosure materials. Until such efforts are complete, consumers will continue to face difficulties in using disclosure materials to better understand and compare costs of credit cards. In addition, until more understandable disclosures are issued, the ability of well-informed consumers to spur additional competition among issuers in credit card pricing is hampered. Definitively determining the extent to which credit card penalty interest and fees contribute to personal bankruptcies and the profits and revenues of card issuers is difficult given the lack of comprehensive, publicly available data. Penalty interest and fees can contribute to the total debt owed by cardholders and decrease the funds that a cardholder could have used to reduce debt and possibly avoid bankruptcy. 
However, many consumers file for bankruptcy as the result of significant negative life events, such as divorces, job losses, or health problems, and the role that credit cards play in avoiding or accelerating such filings is not known. Similarly, the limited available information on card issuer operations indicates that penalty fees and interest are a small but growing part of such firms’ revenues. With the profitability of the largest card issuers generally being stable over recent years, the increased revenues gained from penalty interest and fees may be offsetting the generally lower amounts of interest that card issuers collect from the majority of their cardholders. These results appear to indicate that while most cardholders likely are better off, a smaller number of cardholders paying penalty interest and fees are accounting for more of issuer revenues than they did in the past. This further emphasizes the importance of taking steps to ensure that all cardholders receive disclosures that help them clearly understand their card costs and how their own behavior can affect those costs. As part of its effort to increase the effectiveness of disclosure materials used to inform consumers of rates, fees, and other terms that affect the costs of using credit cards, the Chairman, Federal Reserve should ensure that such disclosures, including model forms and formatting requirements, more clearly emphasize those terms that can significantly affect cardholder costs, such as the actions that can cause default or other penalty pricing rates to be imposed. We provided a draft of this report to the Federal Reserve, OCC, FDIC, the Federal Trade Commission, the National Credit Union Administration, and the Office of Thrift Supervision for their review and comment. 
In a letter from the Federal Reserve, the Director of the Division of Consumer and Community Affairs agreed with the findings of our report that credit card pricing has become more complex and that the disclosures required under Regulation Z could be improved with the input of consumers. To this end, the Director stated that the Board is conducting extensive consumer testing to identify the most important information to consumers and how disclosures can be simplified to reduce current complexity. Using this information, the Director said that the Board would develop new model disclosure forms with the assistance of design consultants. If appropriate, the Director said the Board may develop suggestions for statutory changes for congressional consideration. We also received technical comments from the Federal Reserve and OCC, which we have incorporated in this report as appropriate. FDIC, the Federal Trade Commission, the National Credit Union Administration, and the Office of Thrift Supervision did not provide comments. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to the Chairman, Permanent Subcommittee on Investigations, Senate Committee on Homeland Security and Governmental Affairs; the Chairman, FDIC; the Chairman, Federal Reserve; the Chairman, Federal Trade Commission; the Chairman, National Credit Union Administration; the Comptroller of the Currency; and the Director, Office of Thrift Supervision and to interested congressional committees. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

Our objectives were to determine (1) how the interest, fees, and other practices that affect the pricing structure of cards from the largest U.S. issuers have evolved, and cardholders’ experiences under these pricing structures in recent years; (2) how effectively the issuers disclose the pricing structures of cards to their cardholders; (3) whether credit card debt and penalty interest and fees contribute to cardholder bankruptcies; and (4) the extent to which penalty interest and fees contribute to the revenues and profitability of issuers’ credit card operations. To identify how the pricing structure of cards from the largest U.S. issuers has evolved, we analyzed disclosure documents from 2003 to 2005 for 28 popular cards that were issued by the six largest U.S. card issuers, as measured by total outstanding receivables as of December 31, 2004 (see fig. 2 in the body of this report). These issuers were Bank of America; Capital One Bank; Chase Bank USA, N.A.; Citibank (South Dakota), N.A.; Discover Financial Services; and MBNA America Bank, N.A. Representatives for these six issuers identified up to five of their most popular cards and provided us with actual disclosure materials, including cardmember agreements and direct mail applications and solicitations used for opening an account for each card. We calculated descriptive statistics for various interest rates and fees and the frequency with which cards featured other practices, such as methods for calculating finance charges. We determined that these cards likely represented the pricing and terms that applied to the majority of U.S. cardholders because the top six issuers held almost 80 percent of consumer credit card debt and as much as 61 percent of total U.S. credit card accounts.
We did not include in our analysis of popular cards any cards offered by credit card issuers that engage primarily in subprime lending. Subprime lending generally refers to extending credit to borrowers who exhibit characteristics indicating a significantly higher risk of default than traditional bank lending customers. Such issuers could have pricing structures and other terms significantly different from those of the popular cards offered by the top issuers. As a result, our analysis may underestimate the range of interest rate and fee levels charged on the entire universe of cards. To identify historical rate and fee levels, we primarily evaluated the Federal Reserve Board’s G.19 Consumer Credit statistical release for 1972 to 2005 and a paper written by Federal Reserve Bank staff that examined more than 150 cardmember agreements from 15 of the largest U.S. issuers from 1997 to 2002. To evaluate cardholders’ experiences with credit card pricing structures in recent years, we obtained proprietary data on the extent to which issuers assessed various interest rate levels and fees for active accounts from the six largest U.S. issuers listed above for 2003, 2004, and 2005. We obtained data directly from issuers because no comprehensive sources existed to show the extent to which U.S. cardholders were paying penalty interest rates. Combined, these issuers reported more than 180 million active accounts, or about 60 percent of total active accounts reported by CardWeb.com, Inc. These accounts also represented almost $900 billion in credit card purchases in 2005, according to these issuers. To preserve the anonymity of the data, these issuers engaged legal counsel at the law firm Latham & Watkins, LLP, which received the issuers’ data on interest rate and fee assessments, engaged Argus Information and Advisory Services, LLC, a third-party analytics firm, to aggregate the data, and then supplied the aggregated data to us.
Although we originally provided a more comprehensive data request to these issuers, we agreed with issuer representatives to a more limited request as a result of these firms’ data availability and processing limitations. With representatives of these issuers and the third-party analytics firm, we discussed the steps taken to ensure that the data provided to us were complete and accurate. We also shared a draft of this report with the supervisory agencies of these issuers. However, we did not have access to the issuers’ data systems to fully assess the reliability of the data or the systems that housed them. Therefore, we present these data in our report only as representations made to us by the six largest issuers.

To determine how effectively card issuers disclose to cardholders the rates, fees, and other terms related to their credit cards, we contracted with UserWorks, Inc., a private usability consulting firm, which conducted three separate evaluations of a sample of disclosure materials. We provided the usability consultant with a cardmember agreement and solicitation letter for one card from each of four representative credit card issuers—a total of four cards and eight disclosure documents. The first evaluation, a readability assessment, used computer-facilitated formulas to predict the grade level required to understand the materials. Readability formulas measure the elements of writing that can be subjected to mathematical calculation, such as the average number of syllables in words or the number of words in sentences in the text. The consultant applied the following industry-standard formulas to the documents: Flesch Grade Level, Frequency of Gobbledygook (FOG), and the Simplified Measure of Gobbledygook (SMOG). Using these formulas, the consultant measured the grade levels at which the disclosure documents were written overall, as well as for selected sections.
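The report does not reproduce the consultant’s implementation, but the grade-level formulas named above are published arithmetic on word, sentence, and syllable counts. A minimal sketch in Python (function names are illustrative; the constants are the commonly published Flesch-Kincaid and SMOG coefficients, not figures taken from the consultant’s work):

```python
import math

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Flesch-Kincaid grade level: longer sentences and more
    # syllables per word push the estimated grade level up.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables: int, sentences: int) -> float:
    # SMOG: based on the count of words with three or more
    # syllables, normalized to a 30-sentence sample.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
```

For example, a 100-word, 5-sentence passage containing 150 syllables scores roughly a tenth-grade reading level under the Flesch-Kincaid formula.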
Second, the usability consultant conducted a heuristic evaluation that assessed how well these card disclosure documents adhered to a recognized set of principles or industry best practices. In the absence of best practices specifically applicable to credit card disclosures, the consultant used guidelines from the U.S. Securities and Exchange Commission’s 1998 guidebook Plain English Handbook: How to Create Clear SEC Disclosure Documents. Finally, the usability consultant tested how well actual consumers were able to use the documents to identify and understand information about card fees and other practices and used the results to identify problem areas. The consultant conducted these tests with 12 consumers. To ensure sample diversity, the participants were selected to represent the demographics of the U.S. adult population in terms of education, income, and age. While the materials used for the readability and usability assessments appeared to be typical of the large issuers’ disclosures, the results cannot be generalized to materials that were not reviewed.

To obtain additional information on consumers’ level of awareness and understanding of their key credit card terms, we also conducted in-depth, structured interviews in December 2005 with a total of 112 adult cardholders in three locations: Boston, Chicago, and San Francisco. We contracted with OneWorld Communications, Inc., a market research organization, to recruit a sample of cardholders that generally resembled the demographic makeup of the U.S. population in terms of age, education levels, and income. However, the cardholders recruited for the interviews did not form a random, statistically representative sample of the U.S. population, and the results therefore cannot be generalized to the population of all U.S. cardholders.
Cardholders had to speak English, have owned at least one general-purpose credit card for a minimum of 12 months, and not have participated in more than one focus group or similar in-person study in the 12 months prior to the interview. We gathered information about the cardholders’ knowledge of credit card terms and conditions and assessed cardholders’ use of card disclosure materials by asking them a number of open- and closed-ended questions.

To determine whether credit card debt and penalty interest and fees contribute to cardholder bankruptcies, we interviewed Department of Justice staff responsible for overseeing bankruptcy courts and trustees about the availability of data on credit card penalty charges in materials submitted by consumers or issuers as part of bankruptcy filings or collections cases. We also interviewed two attorneys who assist consumers with bankruptcy filings. In addition, we reviewed studies that analyzed credit card and bankruptcy issues published by various academic researchers, the Congressional Research Service, and the Congressional Budget Office. We did not fully assess the reliability of all of these studies. However, because of the prominence of some of these data sources, the frequency with which other researchers have used these data, and the fact that much of the evidence is corroborated by other evidence, we determined that citing these studies was appropriate. We also analyzed aggregated card account data provided by the six largest issuers (as previously discussed) to measure the amount of credit card interest charges and fees owed at the time these accounts were charged off as a result of a bankruptcy filing. We also spoke with representatives of the largest U.S. credit card issuers, as well as representatives of consumer groups and industry associations, and with academic researchers who conduct analysis on the credit card industry.
To determine the extent to which penalty interest and fees contributed to the revenues and profitability of issuers’ credit card operations, we reviewed the extent to which penalty charges are disclosed in bank regulatory reports—the call reports—and in public disclosures—such as annual reports (10-Ks) and quarterly reports (10-Qs)—made by publicly traded card issuers. We analyzed data reported by the Federal Reserve on the profitability of commercial bank card issuers with at least $200 million in yearly average assets (loans to individuals plus securitizations) and at least 50 percent of assets in consumer lending, of which 90 percent must be in the form of revolving credit. The Federal Reserve reported that 17 banks had card operations with at least this level of activity in 2004. We also analyzed information from the Federal Deposit Insurance Corporation, which analyzes data for all federally insured banks and savings institutions and publishes aggregated data on those with various lending activity concentrations, including a group of 33 banks that, as of December 2005, had credit card operations that exceeded 50 percent of their total assets and securitized receivables. We also analyzed data reported to us by the six largest card issuers on the revenues and profitability of their credit card operations for 2003, 2004, and 2005. We also reviewed data on revenues compiled by industry analysis firms, including the Card Industry Directory, published by Sourcemedia, and R.K. Hammer. Because of the proprietary nature of their data, representatives for Sourcemedia and R.K. Hammer were not able to provide us with information sufficient for us to assess the reliability of their data. However, we analyzed and presented some information from these sources because we were able to corroborate their information with each other and with data from sources of known reliability, such as regulatory data, and we attribute their data to them.
We also interviewed broker-dealer financial analysts who monitor activities by credit card issuers to identify the extent to which various sources of income contribute to card issuers’ revenues and profitability. We attempted to obtain the latest in a series of studies of card issuer profitability that Visa, Inc. traditionally has compiled. However, staff from this organization said that this report is no longer being made publicly available. We discussed issues relevant to this report with various organizations, including representatives of 13 U.S. credit card issuers and card networks, 2 trade associations, 4 academics, 4 federal bank agencies, 4 national consumer interest groups, 2 broker-dealer analysts that study credit card issuers for large investors, and a commercial credit-rating agency. We also obtained technical comments on a draft of this report from representatives of the issuers that supplied data for this study.

Consumer bankruptcies have increased significantly over the past 25 years. As shown in figure 21 below, consumer bankruptcy filings rose from about 287,000 in 1980 to more than 2 million as of December 31, 2005, about a 609 percent increase over the last 25 years. The expansion of consumers’ overall indebtedness is one of the explanations cited for the significant increase in bankruptcy filings. As shown in figure 22, consumers’ use of debt has expanded over the last 25 years, increasing more than 720 percent from about $1.4 trillion in 1980 to about $11.5 trillion in 2005. Some researchers have been commenting on the rise in overall indebtedness as a contributor to the rise in bankruptcies for some time. For example, in 1997 congressional testimony, a Congressional Budget Office official noted that the increase in consumer bankruptcy filings and the increase in household indebtedness appeared to be correlated.
Also, an academic paper that summarized existing literature on bankruptcy found that some consumer bankruptcies were either directly or indirectly caused by heavy consumer indebtedness, specifically pointing to the high correlation between consumer bankruptcies and consumer debt-to-income ratios. Beyond total debt, some researchers and others argue that the rise in bankruptcies also was related to the rise in credit card debt, in particular. As shown in figure 23, the amount of credit card debt reported also has risen from $237 billion to about $802 billion—a 238 percent increase between 1990 and 2005. Rather than total credit card debt alone, some researchers argued that growth in credit card use and indebtedness by lower-income households has contributed to the rise in bankruptcies. In the Survey of Consumer Finances, conducted every 3 years, the Federal Reserve reports on the use of and indebtedness on credit cards by households overall and also by income percentiles. As shown in figure 24 below, the latest Federal Reserve survey results indicated that the greatest increase in families reporting credit card debt occurred among those in the lowest 20 percent of household income between 1998 and 2001. In the last 15 years, credit card companies have greatly expanded the marketing of credit cards, including to households with lower incomes than those previously offered cards. Efforts by credit card issuers to expand their customer bases in an increasingly competitive market dramatically increased credit card solicitations. According to one study, more than half of credit cards held by consumers are the result of mail solicitations. According to another academic research paper, credit card issuers increased the number of mail solicitations they send to consumers almost fivefold after 1990, from 1.1 billion to 5.23 billion in 2004, or a little over 47 solicitations per household.
The research paper also found that wealthier families received the highest number of solicitations but that low-income families were more likely to open them. As shown in figure 25 above, the Federal Reserve’s survey results indicated that the number of lower-income households with credit cards also grew the most during 1998 to 2001, reflecting issuers’ willingness to grant greater access to credit cards to such households than in the past. The ability of households to make the payments on their debt appeared to be keeping pace with their incomes, as their total household debt burden levels—which measure the payments required on their debts as a percentage of household incomes—have remained relatively constant since the 1980s. As shown below in figure 25, Federal Reserve statistics show that the aggregate debt burden ratio for U.S. households generally fluctuated between 10.77 and 13.89 percent from 1990 to 2005, levels similar to those observed during the 1980s. Also shown in figure 25 are the Federal Reserve’s statistics on the household financial obligations ratio, which compares the total payments that a household must make for mortgages, consumer debt, auto leases, rent, homeowners insurance, and real estate taxes to its after-tax income. Although this ratio rose from around 16 percent in 1980 to over 18 percent in 2005—an increase of approximately 13 percent—Federal Reserve staff researchers indicated that it does not necessarily indicate an increase in household financial stress because much of this increase appeared to be the result of increased use of credit cards for transactions and more households with cards. In addition, credit card debt remains a small portion of overall household debt, including for households with the lowest income levels. As shown in table 2, credit card balances as a percentage of total household debt actually have been declining since the 1990s.
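Both Federal Reserve measures discussed above—the debt burden ratio and the household financial obligations ratio—are simple ratios of required payments to after-tax income. A minimal sketch (function names and the sample figures are illustrative, not drawn from the report’s data):

```python
def debt_burden_ratio(debt_payments: float, after_tax_income: float) -> float:
    # Debt burden: required debt payments as a percentage
    # of after-tax income.
    return 100 * debt_payments / after_tax_income

def financial_obligations_ratio(debt_payments: float,
                                other_obligations: float,
                                after_tax_income: float) -> float:
    # Broader measure: adds recurring obligations such as rent,
    # auto leases, homeowners insurance, and real estate taxes.
    return 100 * (debt_payments + other_obligations) / after_tax_income
```

For example, a household making $500 in monthly debt payments out of $4,000 in after-tax income carries a 12.5 percent debt burden, within the 10.77 to 13.89 percent aggregate range cited above.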
Board of Governors of the Federal Reserve System, Report to the Congress on Practices of the Consumer Credit Industry in Soliciting and Extending Credit and their Effects on Consumer Debt and Insolvency (Washington, D.C.: June 2006). Also, as shown in table 3, median credit card balances for the lowest-income households have remained stable from 1998 through 2004. As shown in figure 26 below, the number of households in the twentieth percentile of income or less that reportedly were in financial distress has remained relatively stable. As shown in figure 26 above, more lower-income households generally reported being in financial distress than did households in most of the other higher-income groups. In addition, the lowest-income households in the aggregate generally did not exhibit greater levels of distress over the last 20 years, as the proportion of households that reported distress was higher in the 1990s than in 2004.

Some academics, consumer advocacy groups, and others have indicated that the rise in consumer bankruptcy filings has occurred because the normal life events that reduce incomes or increase expenses for households have more serious effects today. Events that can reduce household incomes include job losses, pay cuts, or conversion of full-time positions to part-time work. Medical emergencies can result in increased household expenses and debts. Divorces can both reduce income and increase expenses. One researcher explained that, while households have faced the same kinds of risks for generations, the likelihood of these types of life events occurring has increased. This researcher’s studies noted that the likelihood of job loss or financial distress arising from medical problems and the risk of divorce have all increased. Furthermore, more households send all adults into the workforce, and, while this increases their income, it also doubles their total risk exposure, which increases their likelihood of having to file for bankruptcy.
According to this researcher, about 94 percent of families who filed for bankruptcy would qualify as middle class. Although many of the people who file for bankruptcy have considerable credit card debt, those researchers who asserted that life events were the primary explanation for filings noted that the role played by credit cards varied. According to one of these researchers, individuals who have filed for bankruptcy with outstanding credit card debt could be classified into three groups: those who had built up household debts, including substantial credit card balances, but filed for bankruptcy after experiencing a life event that adversely affected their expenses or incomes such that they could not meet their obligations; those who experienced a life event that adversely affected their expenses or incomes and increased their usage of credit cards to avoid falling behind on other secured debt payments (such as mortgage debt), but who ultimately failed to recover and filed for bankruptcy; and those with very little credit card debt who filed for bankruptcy when they could no longer make payments on their secured debt, who represented the smallest category of people filing for bankruptcy.

Various factors help to explain why banks that focus on credit card lending generally have higher profitability than other lenders. The major source of income for credit card issuers is the interest they earn from cardholders who carry balances—that is, who do not pay off the entire outstanding balance when due. One factor that contributes to the high profitability of credit card operations is that the average interest rates charged on credit cards are generally higher than rates charged on other types of lending. Rates charged on credit cards are generally the highest because they are extensions of credit that are not secured by any collateral from the borrower.
Unlike credit cards, most other types of consumer lending involve the extension of a fixed amount of credit under fixed terms of repayment (i.e., the borrower must repay an established amount of principal, plus interest, each month) and are collateralized—such as loans for cars, under which the lender can repossess the car in the event the borrower does not make the scheduled loan payments. Similarly, mortgage loans that allow borrowers to purchase homes are secured by the underlying house. Loans with collateral and fixed repayment terms pose less risk of loss, and thus lenders can charge less interest on such loans. In contrast, credit card loans, which are unsecured, available to large and heterogeneous populations, and repayable on flexible terms at the cardholders’ convenience, present greater risks and carry commensurately higher interest rates. As shown in figure 27, data from the Federal Reserve show that average interest rates charged on credit cards were generally higher than interest rates charged on car loans and personal loans. Similarly, average interest rates charged on corporate loans are also generally lower than those charged on credit cards, with the best business customers often paying the prime rate, which averaged 6.19 percent during 2005. Moreover, many card issuers have increasingly begun setting the interest rates they charge their cardholders using variable rates that change as a specified market index rate, such as the prime rate, changes. This allows credit card issuers’ interest revenues to rise as their cost of funding rises during times when market interest rates are increasing. Of the most popular cards issued by the largest card issuers between 2004 and 2005 that we analyzed, more than 90 percent had variable rates that changed according to an index rate.
For example, the rate that the cardholder would pay on these large issuer cards was determined by adding between 6 and 8 percent to the current prime rate, with a new rate being calculated monthly. As a result of the higher interest charges assessed on cards and variable rate pricing, banks that focus on credit card lending had the highest net interest margin compared with other types of lenders. The net interest income of a bank is the difference between what it has earned on its interest-bearing assets, including the balances on credit cards it has issued and the amounts loaned out as part of any other lending activities, and its interest expenses. To compare across banks, analysts calculate net interest margins, which express each bank’s net interest income as a percentage of its interest-bearing assets. The Federal Deposit Insurance Corporation (FDIC) aggregates data for a group of all federally insured banks that focus on credit card lending, which it defines as those with more than 50 percent of managed assets engaged in credit card operations; in 2005, FDIC identified 33 banks with at least this much credit card lending activity. As shown in figure 28, the net interest margin of all credit card banks, which averaged more than 8 percent, was about two to three times as high as that of other consumer and mortgage lenders in 2005. Five of the six largest issuers reported to us that their average net interest margin in 2005 was even higher, at 9 percent. Although profitable, credit card operations generally experience higher charge-off rates and operating expenses than other types of lending. Because these loans are generally unsecured, meaning the borrower will not immediately lose an asset—such as a car or house—if payments are not made, borrowers may be more likely to cease making payments on their credit cards if they become financially distressed than they would for other types of credit.
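The variable-rate pricing and net interest margin calculations described above reduce to simple arithmetic. A minimal sketch (the figures are illustrative; the 7-point margin merely falls within the 6 to 8 percent range noted above):

```python
def variable_apr(prime_rate: float, margin: float) -> float:
    # Variable card rate: index rate (e.g., prime) plus a fixed
    # margin, typically recalculated monthly.
    return prime_rate + margin

def net_interest_margin(interest_income: float, interest_expense: float,
                        earning_assets: float) -> float:
    # Net interest income as a percentage of interest-bearing assets.
    return 100 * (interest_income - interest_expense) / earning_assets
```

With prime averaging 6.19 percent in 2005 and a 7-point margin, such a card would carry a 13.19 percent APR; a bank earning $85 in interest on $750 of card balances while paying $25 in funding costs would post an 8 percent net interest margin, in line with the figure 28 average.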
As a result, the rate of losses that credit card issuers experience on credit cards is higher than that incurred on other types of credit. Under bank regulatory accounting practices, banks must write off the principal balance outstanding on any loan when it is determined that the bank is unlikely to collect on the debt. For credit cards, this means that banks must deduct, as a loan loss from their income, the amount of the balance outstanding on any credit card account for which either no payments have been made within the last 180 days or the bank has received notice that the cardholder has filed for bankruptcy. This procedure is called charging the debt off. Card issuers have much higher charge-off rates than other consumer lending businesses, as shown in figure 29. The largest credit card issuers also reported similarly high charge-off rates for their credit card operations. As shown in figure 30, five of the top six credit card issuers from which we obtained data reported that their average charge-off rate was higher than 5.5 percent between 2003 and 2005, well above other consumer lenders’ average net charge-off rate of 1.44 percent. Credit card issuers also incur higher operating expenses compared with other consumer lenders. Operating expense is another of the largest cost items for card issuers and, according to a credit card industry research firm, accounted for approximately 37 percent of total expenses in 2005. The operating expenses of a credit card issuer include staffing and the information technology costs that are incurred to maintain cardholders’ accounts. Operating expense as a proportion of total assets is higher for credit card lending because offering credit cards often involves activities that other types of lending do not. For example, issuers often incur significant expenses in postage and other marketing costs as part of soliciting new customers.
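The charge-off rule described earlier in this section can be expressed directly in code. A minimal sketch of how a portfolio charge-off rate might be computed (the account data are invented for illustration):

```python
def must_charge_off(days_delinquent: int, bankruptcy_notice: bool) -> bool:
    # Regulatory trigger: no payment within 180 days, or notice
    # that the cardholder has filed for bankruptcy.
    return days_delinquent >= 180 or bankruptcy_notice

def charge_off_rate(accounts):
    # accounts: list of (balance, days_delinquent, bankruptcy_notice).
    # Rate = charged-off balances as a percentage of all balances.
    total = sum(balance for balance, _, _ in accounts)
    losses = sum(balance for balance, days, bankrupt in accounts
                 if must_charge_off(days, bankrupt))
    return 100 * losses / total
```

A portfolio with $10,000 outstanding, of which $450 sits at 200 days delinquent and $100 belongs to a bankrupt cardholder, would show a 5.5 percent charge-off rate, comparable to the top issuers’ reported average.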
In addition, some credit cards now provide rewards and loyalty programs that allow cardholders to earn rewards such as free airline tickets, discounts on merchandise, or cash back on their accounts, expenses not generally associated with other types of lending. Credit card operating expense burdens also may be higher because issuers must service a large number of relatively small accounts. For example, the six large card issuers that we surveyed reported that they each had an average of 30 million credit card accounts, that the average outstanding balance on these accounts was about $2,500, and that 48 percent of accounts did not revolve balances in 2005. As a result, operating expenses as a percentage of total assets for banks that focus on credit card lending averaged over 9 percent in 2005, as shown in figure 31, well above the 3.44 percent average for other consumer lenders. The largest issuers’ operating expenses may not be as high as those of all banks that focus on credit card lending because their larger operations give them some cost advantages from economies of scale. For example, they may be able to pay lower postage rates by segregating the mailings of account statements to their cardholders by zip code, thus qualifying for bulk-rate discounts. Another reason that the banks that issue credit cards are more profitable than other types of lenders is that they earn a greater percentage of revenues from noninterest sources, including fees, than lenders that focus more on other types of consumer lending. As shown in figure 32, FDIC data indicate that the ratio of noninterest revenues to assets—an indicator of noninterest income generated from outstanding credit card loans—is about 10 percent for the banks that focus on credit card lending, compared with less than 2.8 percent for other lenders. Although penalty interest and fees apparently have increased, their effect on issuer profitability may not be as great as that of other factors.
For example, while more cardholders appeared to be paying default rates of interest on their cards, issuers have not been experiencing greater profitability from interest revenues. According to our analysis of FDIC Quarterly Banking Profile data, the revenues that credit card issuers earn from interest generally have been stable over the last 18 years. As shown in figure 33, the net interest margin for all banks that focused on credit card lending has ranged between 7.4 percent and 9.6 percent since 1987. Similarly, according to the data that five of the top six issuers provided to us, their net interest margins were relatively stable between 2003 and 2005, ranging from 9.2 percent to 9.6 percent during this period. These data suggest that increases in penalty interest assessments could be offsetting decreases in interest revenues from other cardholders. During the last few years, card issuers have competed vigorously for market share. In doing so, they frequently have offered cards to new cardholders that feature low interest rates—including zero percent for temporary introductory periods, usually 8 months—either for purchases or sometimes for balances transferred from other cards. The extent to which cardholders now are paying such rates is not known, but the six largest issuers reported to us that cardholders paying interest rates below 5 percent—who could be cardholders enjoying temporarily low introductory rates—represented about 7 percent of their cardholders between 2003 and 2005. To the extent that card issuers have been receiving lower interest as the result of these marketing efforts, such declines could be masking the effect of increasing amounts of penalty interest on their overall interest revenues. Although revenues from penalty fees have grown, their effect on overall issuer profitability is less than the effect of income from interest or other factors.
For example, we obtained from a Federal Reserve Bank researcher data from one of the credit card industry surveys illustrating that the issuers’ cost of funds may have been a more significant factor in their profitability lately. Banks generally obtain the funds they lend to others from various sources, such as checking or savings deposits, income on other investments, or borrowing from other banks or creditors. The average rate of interest they pay on these funding sources represents their cost of funds. As shown in table 4 below, the total cost of funds (for $100 in credit card balances outstanding) for the credit card banks included in this survey declined from $8.98 in 1990 to a low of $2.00 in 2004—a decrease of 78 percent. Because card issuers’ net interest income generally represents a much higher percentage of revenues than does income from penalty fees, its impact on issuers’ overall profitability is greater; thus, the reduction in the cost of funds likely contributed significantly to the general rise in credit card banks’ profitability over this time. Although card issuer revenues from penalty fees have been increasing since the 1980s, they remain a small portion of overall revenues. As shown in table 4 above, our analysis of the card issuer data obtained from the Federal Reserve indicated that the amount of revenues that issuers collected from penalty fees for every $100 in credit card balances outstanding climbed from 69 cents to $1.40 between 1990 and 2004—an increase of 103 percent. During this same period, net interest income collected per $100 in card balances outstanding grew from $7.44 to $10.45—an increase of about 41 percent. However, the relative size of these two sources of income indicates that, in 2004, interest income was between 7 and 8 times more important to issuer revenues than penalty fee income. 
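The percentage changes and the interest-to-fee ratio quoted above from table 4 follow from simple arithmetic. The sketch below (a minimal check, using only the per-$100 figures quoted in the text) reproduces them:

```python
# Quick check of the percentage changes quoted from table 4
# (all figures are revenues or costs per $100 in card balances outstanding).

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

cost_of_funds = (8.98, 2.00)    # 1990, 2004
penalty_fees = (0.69, 1.40)     # 1990, 2004
net_interest = (7.44, 10.45)    # 1990, 2004

print(f"Cost of funds: {pct_change(*cost_of_funds):.0f}%")   # roughly -78%
print(f"Penalty fees:  {pct_change(*penalty_fees):.0f}%")    # roughly +103%
print(f"Net interest:  {pct_change(*net_interest):.1f}%")    # roughly +40.5%

# Relative importance of interest income vs. penalty fee income in 2004:
print(f"Interest-to-penalty-fee ratio in 2004: {net_interest[1] / penalty_fees[1]:.1f}")
```

The computed ratio of roughly 7.5 is consistent with the statement that interest income was between 7 and 8 times more important to issuer revenues than penalty fee income in 2004, and the net interest growth of about 40.5 percent matches the "about 41 percent" quoted in the text.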
Furthermore, during this same time, collections of annual fees from cardholders declined from $1.25 to 42 cents for every $100 in card balances—which means that the total of annual and penalty fees in 2004 was about the same as in 1990 and that this decline may also be offsetting the increased revenues from penalty fees.

In addition to those named above, Cody Goebel, Assistant Director; Jon Altshul; Rachel DeMarcus; Kate Magdelena Gonzalez; Christine Houle; Christine Kuduk; Marc Molino; Akiko Ohnuma; Carl Ramirez; Omyra Ramsingh; Barbara Roesmann; Kathryn Supinski; Richard Vagnoni; Anita Visser; and Monica Wolford made key contributions to this report.
|
With credit card penalty rates and fees now common, the Federal Reserve has begun efforts to revise disclosures to better inform consumers of these costs. Questions have also been raised about the relationship among penalty charges, consumer bankruptcies, and issuer profits. GAO examined (1) how card fees and other practices have evolved and how cardholders have been affected, (2) how effectively these pricing practices are disclosed to cardholders, (3) the extent to which penalty charges contribute to cardholder bankruptcies, and (4) card issuers' revenues and profitability. Among other things, GAO analyzed disclosures from popular cards; obtained data on rates and fees paid on cardholder accounts from 6 large issuers; employed a usability consultant to analyze and test disclosures; interviewed a sample of consumers selected to represent a range of education and income levels; and analyzed academic and regulatory studies on bankruptcy and card issuer revenues. Originally having fixed interest rates around 20 percent and few fees, popular credit cards now feature a variety of interest rates and other fees, including penalties for making late payments that have increased to as high as $39 per occurrence and interest rates of over 30 percent for cardholders who pay late or exceed a credit limit. Issuers explained that these practices represent risk-based pricing that allows them to offer cards with lower costs to less risky cardholders while providing cards to riskier consumers who might otherwise be unable to obtain such credit. Although costs can vary significantly, many cardholders now appear to have cards with lower interest rates than those offered in the past; data from the top six issuers reported to GAO indicate that, in 2005, about 80 percent of their accounts were assessed interest rates of less than 20 percent, with over 40 percent having rates below 15 percent. The issuers also reported that 35 percent of their active U.S. 
accounts were assessed late fees and 13 percent were assessed over-limit fees in 2005. Although issuers must disclose information intended to help consumers compare card costs, disclosures by the largest issuers have various weaknesses that reduced consumers' ability to use and understand them. According to a usability expert's review, disclosures from the largest credit card issuers were often written well above the eighth-grade level at which about half of U.S. adults read. Contrary to usability and readability best practices, the disclosures buried important information in text, failed to group and label related material, and used small typefaces. Perhaps as a result, cardholders that the expert tested often had difficulty using the disclosures to find and understand key rates or terms applicable to the cards. Similarly, GAO's interviews with 112 cardholders indicated that many failed to understand key aspects of their cards, including when they would be charged for late payments or what actions could cause issuers to raise rates. These weaknesses may arise from issuers drafting disclosures to avoid lawsuits, and from federal regulations that highlight less relevant information and are not well suited for presenting the complex rates or terms that cards currently feature. Although the Federal Reserve has started to obtain consumer input, its staff recognizes the challenge of designing disclosures that include all key information in a clear manner. Although penalty charges reduce the funds available to repay cardholders' debts, their role in contributing to bankruptcies was not clear. The six largest issuers reported that unpaid interest and fees represented about 10 percent of the balances owed by bankrupt cardholders, but were unable to provide data on penalty charges these cardholders paid prior to filing for bankruptcy. Although revenues from penalty interest and fees have increased, profits of the largest issuers have been stable in recent years. 
GAO analysis indicates that while the majority of issuer revenues came from interest charges, the portion attributable to penalty rates has grown.
|
The concept of the COO/CMO position largely came out of the creation of performance-based organizations (PBO) in the federal government in the late 1990s and early in this decade. During that time, the administration and Congress renewed their focus on the need to restructure federal agencies and hold them accountable for achieving program results. To this end, three PBOs were established, which were modeled after the United Kingdom’s executive agencies. A PBO is a discrete departmental unit that is intended to transform the delivery of public services by having the organization commit to achieving specific measurable goals with targets for improvement in exchange for being allowed to operate without the constraints of certain rules and regulations to achieve these targets. The clearly defined performance goals are to be coupled with direct ties between the achievement of the goals and the pay and tenure of the head of the PBO, often referred to as the COO. The COO is appointed for a set term of typically 3 to 5 years, subject to an annual performance agreement, and is eligible for bonuses for improved organizational performance. With the backdrop of these PBOs and an ongoing focus on transforming organizational cultures in the federal government, we convened a roundtable of government leaders and management experts on September 9, 2002, to discuss the COO concept and how it might apply within selected federal departments and agencies. The intent of the roundtable was to generate ideas and to engage in an open dialogue on the possible application of the COO concept to selected federal departments and agencies. It was generally agreed at this roundtable discussion that the implementation of any approach should be determined within the context of the specific facts and circumstances that relate to each individual agency. 
Nonetheless, there was general agreement on the importance of the following actions for organizational transformation and management reform: Elevate attention on management issues and transformational change. Top leadership attention is essential to overcome organizations’ natural resistance to change, marshal the resources needed to implement change, and build and maintain the organizationwide commitment to new ways of doing business. Integrate various key management and transformation efforts. There needs to be a single point within agencies with the perspective and responsibility—as well as authority—to ensure the successful implementation of functional management and, if appropriate, transformational change efforts. Institutionalize accountability for addressing management issues and leading transformational change. The management weaknesses in some agencies are deeply entrenched and long-standing, and it can take at least 5 to 7 years of sustained attention and continuity to fully implement transformations and change management initiatives. In the time since the 2002 roundtable, legislative proposals have been introduced and are still pending in this Congress to establish CMO positions at DOD and DHS to help address transformation efforts at the two departments, both of which are responsible for various areas identified on our biennial update of high-risk programs. These legislative proposals differ somewhat in content but would essentially create a senior-level position to serve as a principal advisor to the secretary on matters related to the management of the department, including management integration and business transformation. Some of these legislative proposals also include specific provisions that spell out qualifications for the position, require a performance contract, and provide for a term appointment of 5 or 7 years. At the present time, no federal department has a COO/CMO-type position with all these characteristics. 
In August 2007, the proposal for the Undersecretary for Management position at DHS to become the CMO, at Executive Level II but without a term appointment, was enacted into law. DOD issued a directive on September 18, 2007, that assigned CMO responsibilities to the current Deputy Secretary of Defense in addition to his other responsibilities. However, as I will discuss later in this statement, we do not believe that these actions go far enough. The heads of federal departments and selected agencies designate a COO, who is usually the deputy or another official with agencywide authority, to sit on the President’s Management Council. However, deputy secretaries and the other senior officials designated as COOs do not have all of the characteristics of a COO/CMO that I just described, including a term appointment and performance agreement. The council was created by President Clinton in 1993 to advise and assist the President and Vice President in ensuring that management reforms are implemented throughout the executive branch. The Deputy Director for Management of OMB chairs the council, and the council is responsible for improving overall executive branch management, including implementation of the President’s Management Agenda (PMA); coordinating management-related efforts to improve government throughout the executive branch and, as necessary, resolving specific interagency management issues; ensuring the adoption of new management practices in agencies throughout the executive branch; and identifying examples of, and providing mechanisms for, interagency exchange of information about best management practices. Because each agency has its own set of characteristics, challenges, and opportunities, the type of COO/CMO to be established in a federal agency should be determined within the context of the specific facts and circumstances surrounding that agency. 
Nevertheless, a number of criteria can be used to determine the type of COO/CMO position for an agency. These criteria are the agency’s history of organizational performance, such as the existence of long-standing management weaknesses and the failure rates of major projects or initiatives; degree of organizational change needed, such as the status of ongoing major transformational efforts and the challenge of reorganizing and integrating disparate organizational units or cultures; nature and complexity of mission, such as the range, risk, and scope of the agency’s mission; organizational size and structure, such as the number of employees, geographic dispersion of field offices, number of management layers, types of reporting relationships, and degree of centralization of decision making; and current leadership talent and focus, such as the extent of knowledge and the level of focus of the agency’s managers on management functions and change initiatives, and the number of political appointees in key positions. These five criteria are important for determining the appropriate type of COO/CMO position, which in turn can inform many other elements of the position, including roles and responsibilities, job qualifications, reporting relationships, and decision-making structures and processes. Based on these criteria, there could be several types of COO/CMO positions, including the following: The existing deputy position could carry out the integration and business transformation role. This type of COO/CMO might be appropriate in a relatively stable or small organization. A senior-level executive who reports to the deputy, such as a principal undersecretary for management, could be designated to integrate key management functions and lead business transformation efforts in the agency. This type of COO/CMO might be appropriate for a larger organization. 
A second deputy position could be created to bring strong focus to the integration and business transformation of the agency, while the other deputy position would be responsible for leading the operational policy and mission-related functions of the agency. For a large and complex organization undergoing a significant transformation to reform long-standing management problems, this might be the most appropriate type of COO/CMO. To address long-standing management and business transformation problems, we have long advocated that DOD and DHS could benefit from a senior-level COO/CMO position, with a term appointment of at least 5 to 7 years, and a performance agreement. We continue to identify DOD’s approach to business transformation and implementing and transforming DHS on GAO’s biennial high-risk list of programs. DOD dominates our list of agencies with high-risk programs designated as vulnerable to waste, fraud, and abuse of funds, bearing responsibility, in whole or in part, for 15 of 27 high-risk areas. While DOD has recently designated the current DOD Deputy Secretary as the CMO in addition to his other responsibilities, we believe this action does not go far enough to change the status quo and ensure sustainable success of the overall business transformation effort within the department. We recognize the commitment and elevated attention that the current Deputy Secretary of Defense and other senior leaders have clearly shown in addressing deficiencies in the department’s business operations. For example, the Deputy Secretary has overseen the creation of various business-related entities, such as the Defense Business Systems Management Committee and the Business Transformation Agency, and has been closely involved in monthly meetings of both the Defense Business Systems Management Committee and the Deputy’s Advisory Working Group, a group that provides departmentwide strategic direction on various issues. 
In our view, subsuming the duties within the responsibilities of the individual currently serving as the Deputy Secretary largely represents a continuation of the status quo and will not provide full-time attention or continuity as administrations change. While the Deputy Secretary may be at the right level, the substantial demands of the position make it exceedingly difficult for the incumbent to maintain the focus, oversight, and momentum needed to resolve business operational weaknesses, including the many high-risk areas within DOD. Furthermore, the assignment of CMO duties to an individual with a limited term in the position does not ensure continuity of effort or sustained success within and across administrations. We continue to believe a CMO position should be codified in statute as a separate position, at the right level, and with the appropriate term in office. In fact, GAO’s work and other studies (e.g., by the Defense Business Board and the Institute for Defense Analyses) agree that DOD needs a full-time senior management official with a term appointment to provide focused and sustained leadership over business transformation efforts. Additionally, DHS is experiencing particularly significant challenges in integrating its disparate organizational cultures and multiple management processes and systems, which make it an appropriate candidate for a COO/CMO as a second deputy position or alternatively as a principal undersecretary for management position. Designating the Undersecretary for Management at DHS as the CMO at Executive Level II is a step in the right direction, but this change does not go far enough. A COO/CMO for DHS with a limited term that does not transition across administrations will not help to ensure the continuity of focus and attention needed to protect the security of our nation. 
DHS faces significant management and organizational transformation challenges as it works to protect the nation from terrorism and simultaneously establish itself. DHS must integrate approximately 180,000 employees from 22 originating agencies, consolidate multiple management systems and processes, and transform into a more effective organization with robust planning, management, and operations. However, DHS continues to lack not only a comprehensive management integration strategy with overall goals and a timeline, but also a dedicated team with the authority and responsibility to help develop and implement this strategy. A COO/CMO at the appropriate organizational level at DHS, with a term appointment, would provide the elevated senior leadership and concerted and long-term attention required to marshal this effort. Once the type of COO/CMO is determined, the following six key strategies can be useful in implementing COO/CMO positions in federal agencies, including making sure that the COO/CMO has a sufficiently high level of authority and continuity in the position: Define the specific roles and responsibilities of the COO/CMO position. For carrying out the role of management integration, it should be clear which of the agency’s key management functions are under the direct purview of the COO/CMO. Depending on the agency, the COO/CMO might have responsibility for human capital, financial management, information resources management, and acquisition management as well as other management functions in the agency, such as strategic planning, program evaluation, facilities and installations, or safety and security, as was the case with the four organizations we reviewed. As the COO/CMO is a leader of business transformation in the organization, it should likewise be clear which major change efforts are the direct responsibility of the COO/CMO. Once clearly defined, these specific roles and responsibilities should be communicated throughout the organization. 
Ensure that the COO/CMO has a high level of authority and clearly delineated reporting relationships. The COO/CMO concept is consistent with the governance principle that there needs to be a single point within agencies with the perspective and responsibility to ensure the successful implementation of functional management and business transformation. The organizational level and span of control of the COO/CMO position are crucial in affecting the incumbent’s authority and status within the organization. At both IRS and MIT, the COO/CMO reports to the head of the organization (i.e., second-level reporting position), and at Justice and Treasury, the COO/CMO reports through the deputy secretary (i.e., third-level reporting position). Although our interviews and the forum discussion uncovered differing views about the appropriate level and reporting relationships for a COO/CMO position, it was broadly recognized that any COO/CMO should have the high level of authority needed to ensure the successful implementation of functional management and business transformation efforts in the agency. Foster good executive-level working relationships for maximum effectiveness. Effective working relationships of the COO/CMO with the agency head and his or her peers can help greatly to ensure that the people, processes, and technology are well-aligned in support of the agency’s mission. For example, officials at IRS stressed the importance of the working relationship between the agency’s two deputy commissioners—one serving as the COO/CMO—in carrying out their respective roles and responsibilities in leading the mission and mission support offices of the agency. Establish integration and transformation structures and processes in addition to the COO/CMO position. 
While the position of COO/CMO can be a critical means for transforming and integrating business and management functions, other structures and processes need to be in place to support the COO/CMO in business transformation and management integration efforts across the organization. These structures and processes can include business transformation offices, senior executive committees, functional councils, and crosscutting teams that are actively involved in strategic planning, budgeting, performance monitoring, information sharing, and decision making. To bring focus and direction and help enforce decisions in the agency, the COO/CMO should be a key player in actively leading or supporting these integration structures and processes. Promote individual accountability and performance through specific job qualifications and effective performance management. A specific set of job qualification standards could aid in ensuring that the incumbent has the necessary knowledge and experience. Our interviews at the four organizations revealed that essential qualifications for a COO/CMO position include having broad management experience and a proven track record of making decisions in complex settings as well as having direct experience in, or solid knowledge of, the respective department or agency, but there were varying views as to whether qualifications should be statutory. To further clarify expectations and reinforce accountability, a clearly defined performance agreement with measurable organizational and individual goals would be warranted as well. Such agreements should contain clear expectations as well as appropriate incentives and rewards for outstanding performance and consequences for those who do not perform. Provide for continuity of leadership in the COO/CMO position. 
Because organizational results and transformational efforts can take years to achieve, maintaining leadership continuity in the COO/CMO position is essential. I share your concern about leadership continuity, particularly for those DOD and DHS programs that we consider to be high risk, as the administration heads toward a presidential transition in early 2009. Foremost, an agency needs to have an executive succession and transition planning strategy that ensures a sustained commitment and continuity of leadership as individual leaders arrive, depart, or serve in acting capacities. The administration and Congress could also consider other possible mechanisms, such as term and career appointments, to help agencies maintain leadership continuity for the position. For example, the benefits of a 5- to 7-year term appointment for the position, such as instilling a long-term focus, need to be weighed against the potential challenges of a term appointment, such as a lack of rapport between members of a new senior leadership team with any change in administration. Term appointments for key leadership positions already exist at a number of agencies. (Attachment II provides a list of term appointments at a variety of U.S. agencies.) Moreover, as emphasized in our interviews and in the forum discussion, the appointment of career civil servants to the COO/CMO position could be considered when assessing the position’s roles, responsibilities, and reporting relationships. High turnover among politically appointed leaders in federal agencies can make it difficult to follow through with organizational transformation because of the length of time often needed to provide meaningful and sustainable results. 
As Congress considers COO/CMO positions for federal agencies, the criteria and strategies we identified should help to highlight key issues that need to be considered, both in design of the positions and in implementation. While Congress is currently focused on two of the most challenging agencies—DOD and DHS—the problems they face are, to varying degrees, shared by the rest of the federal government. Each agency, therefore, should consider the type of COO/CMO that would be appropriate for its organization, either by designating an existing position as the COO/CMO or creating a new position, and adopt the strategies we outline to implement such a position. Because it is composed of the senior management officials in each department and agency, we recommend in the report being released today that the President’s Management Council, working closely with OMB, play a role in leading such an assessment and helping to ensure that due consideration is given to how each agency can improve its leadership structure for management. Moreover, given the council’s charter to oversee government management reforms, it can help institutionalize a leadership position that will be essential to overseeing current and future reform efforts. Recent legislative proposals have called for certain features of the COO/CMO position that we have endorsed, including a direct reporting relationship to the departmental secretary, responsibility for integrating key management functions and overseeing overall business transformation efforts, the requirement for a performance agreement, and the designation of a term appointment. 
We are suggesting that Congress consider the criteria and strategies that I have discussed today as it continues to develop and review legislative proposals for the appropriate type of COO/CMO positions for all major federal agencies, recognizing that the implementation of any approach should be determined within the context of the specific facts and circumstances that relate to each agency. Mr. Chairman and members of the subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For further information on this testimony, please contact Bernice Steinhardt, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Sarah Veale, Assistant Director; K. Scott Derrick; and Katherine Wulff.

Appointed by the FAA Administrator, with the approval of the Air Traffic Services Committee. There is no statutory provision on reappointment of the officeholder. The COO is to serve at the pleasure of the Administrator, and the Administrator is to make every effort to ensure stability and continuity in the leadership of the air traffic control system. Appointments to fill a vacancy occurring before the expiration of term shall be only for the remainder of that term.

Appointed by the President, following recommendations from a special congressional commission, and confirmed by the Senate. There are no statutory conditions on the authority of the President to remove the officeholder. No statutory provision. May be appointed to more than one 10-year term.

Appointed by the President with the advice and consent of the Senate. There are no statutory conditions on the authority of the President to remove the officeholder. The officeholder may not be reappointed. No statutory provision. 
Appointed by the President with the advice and consent of the Senate. President may remove members for cause. There is no statutory limitation on a Chairman serving more than one 4-year term. An individual appointed to fill a vacancy among the seven members of the board shall hold office only for the unexpired term of his or her predecessor.

Appointed by the Secretary of Education. May be reappointed by the Secretary to subsequent terms of 3 to 5 years as long as the incumbent’s performance is satisfactory per required annual performance agreement. The COO may be removed by the President or by the Secretary for misconduct or failure to meet performance goals set forth in the performance agreement. The President or the Secretary must communicate the reasons for any such removal to the appropriate committees of Congress. No statutory provision.

Appointed by the President, following recommendations from a special congressional commission, and confirmed by the Senate. The officeholder is limited to a single 15-year term. The Comptroller General may be removed by impeachment or by adoption of a joint resolution of Congress. Removal by joint resolution can occur only after notice and an opportunity for a hearing and only for certain specified reasons: permanent disability, inefficiency, neglect of duty, malfeasance, felony, or conduct involving moral turpitude. No statutory provision.

Appointed by the President with the advice and consent of the Senate. There are no statutory conditions on the authority of the President to remove the officeholder. May be appointed to more than one 5-year term. Appointments to fill a vacancy occurring before the expiration of term shall be only for the remainder of that term.

Appointed by the President with the advice and consent of the Senate. There are no statutory conditions on the authority of the President to remove the officeholder. There is no statutory provision on reappointment of the officeholder. No statutory provision. 
Appointed by the President with the advice and consent of the Senate. May be removed by the President for reasons to be communicated by him or her to the Senate. There is no statutory provision on reappointment of the officeholder. No statutory provision.

Appointed by the President with the advice and consent of the Senate. There are no statutory conditions on the authority of the President to remove the officeholder. There is no statutory provision on reappointment of the officeholder. Appointments to fill a vacancy occurring before the expiration of a term shall be only for the remainder of that term.

Appointed by the President with the advice and consent of the Senate. The officeholder may be removed only pursuant to a finding by the President of neglect of duty or malfeasance in office. There is no statutory provision on reappointment of the officeholder. Appointments to fill a vacancy occurring before the expiration of a term shall be only for the remainder of that term.

Appointed by the Secretary of Commerce. May be reappointed to subsequent terms by the Secretary as long as the incumbent’s performance is satisfactory per required annual performance agreement. The Secretary may remove the Commissioner for misconduct or unsatisfactory performance under the required performance agreement. The Secretary must provide notification of any such removal to both Houses of Congress. No statutory provision.

Appointed by the Secretary of Commerce. May be reappointed to subsequent terms by the Secretary as long as the incumbent’s performance is satisfactory per required annual performance agreement. The Secretary may remove the Commissioner for misconduct or unsatisfactory performance under the required performance agreement. The Secretary must provide notification of any such removal to both Houses of Congress. No statutory provision.

Executive Order No. 13180 (Dec. 7, 2000) established the Air Traffic Organization within FAA and gave responsibility to head the Air Traffic Organization to the Chief Operating Officer for the Air Traffic Control System of FAA, a position created pursuant to Pub. L. No. 106-181 (Apr. 5, 2000).

Members of the Federal Reserve Board, including the Chairman, serve terms of 14 years from the expiration of the terms of their predecessors. The Chairman’s term is 4 years. The 4-year term does not have to coincide with the President’s term in office. An individual may continue to serve after the expiration of his or her term until a successor is appointed. An individual may continue to serve after the expiration of his or her term until a successor enters office.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
As agencies across the federal government embark on large-scale organizational change needed to address 21st century challenges, there is a compelling need for leadership to provide the continuing, focused attention essential to completing these multiyear business-related transformations. At the same time, many agencies are suffering from a range of long-standing management problems that are undermining their ability to accomplish their missions and achieve results. One proposed approach to address these challenges is to have COO/CMO positions in federal agencies. This statement is largely drawn from GAO's report released today (GAO-08-34), which discusses criteria that can be used to determine the type of COO/CMO that ought to be established in federal agencies and strategies for implementing these positions. To do this, GAO reviewed four organizations with COO/CMO-type positions and convened a forum. GAO also discusses previous GAO work on DOD and DHS. GAO's report recommends that the Office of Management and Budget (OMB), working with the President's Management Council, use the identified criteria when assessing the type of COO/CMO positions appropriate for federal agencies and the strategies for implementing these positions. Also, GAO suggests that Congress consider these criteria and strategies as it develops and reviews legislative proposals for these positions. GAO has long advocated that the Department of Defense (DOD) and the Department of Homeland Security (DHS) could benefit from a full-time and senior-level chief operating officer (COO)/chief management officer (CMO) position, with a term appointment of at least 5 to 7 years, and a performance agreement. In fact, every federal agency can benefit from a senior leader acting as a COO/CMO. While the type of COO/CMO may vary depending on the characteristics of the organization, a number of criteria can be used to determine the appropriate type of COO/CMO position in a federal agency.
These criteria include the history of organizational performance, degree of organizational change needed, nature and complexity of mission, organizational size and structure, and current leadership talent and focus. For example, the existing deputy position could carry out the integration and business transformation role--this type of COO/CMO might be appropriate in a relatively stable or small organization. Or, a second deputy position could be created to bring strong focus to the integration and business transformation of the agency. This might be the most appropriate type of COO/CMO for a large and complex organization undergoing a significant transformation to reform long-standing management problems. GAO identified six key strategies that agencies can follow in implementing COO/CMO positions in federal agencies. However, the implementation of any one approach should be determined within the context of the agency's specific facts and circumstances.
|
JWST is a large deployable, infrared-optimized space telescope intended to be the scientific successor to the aging Hubble Space Telescope. JWST is designed for a 5-year mission to find the first stars and trace the evolution of galaxies from their beginning to their current formation, and is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. With its 6.5-meter primary mirror, JWST will be able to operate at 100 times the sensitivity of the Hubble Space Telescope. A tennis-court-sized sunshield will protect the mirrors and instruments from the sun's heat to allow the JWST to look at very faint infrared sources. The Hubble Telescope operates primarily in the visible and ultraviolet regions of the electromagnetic spectrum. The observatory segment of JWST includes several major subsystems. These subsystems are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1. The Mid-Infrared Instrument (MIRI)—one of JWST's four instruments in the Integrated Science Instrument Module (ISIM)—requires a dedicated, interdependent two-stage cooler system designed to bring the optics to the required temperature of 6.7 Kelvin (K), just above absolute zero. This system is referred to as a cryocooler. See figure 2 for a depiction of the cooling system on JWST. The cryocooler moves helium gas through 10 meters (approximately 33 feet) of refrigerant lines from the sun-facing surface of the JWST observatory to the colder shaded side where the ISIM is located. According to NASA officials, a cooler system of this configuration, with so much separation between the beginning and final cooling components, has never been developed or flown in space before.
Project officials stated that the MIRI cryocooler is particularly complex and challenging because of this relatively great distance between cooling components located in different temperature regions of the observatory and the need to overcome multiple sources of unwanted heat through the regions before the system can cool MIRI. Specifically, the cooling components span temperatures ranging from approximately 300K (about 80 degrees Fahrenheit, or room temperature) where the spacecraft is located on the sun-facing surface of the telescope to approximately 40K (about -388 degrees Fahrenheit) within the ISIM. Since entering development in 1999, JWST has experienced significant schedule delays and increases to project costs. Before the project was approved for development, its cost estimates ranged from $1 billion to $3.5 billion, with expected launch dates ranging from 2007 to 2011. In March 2005, NASA increased the JWST's life-cycle cost estimate to $4.5 billion and delayed the launch date to 2013. We reported in 2006 that the cost growth was due to a delay in launch vehicle selection, budget limitations in fiscal years 2006 and 2007, requirements changes, and an increase in the project's reserve funding—funding used to mitigate issues that arise but were previously unknown. In April 2006, an Independent Review Team confirmed that the project's technical content was complete and sound, but expressed concern over the project's reserve funding, reporting that it was too low and phased in too late in the development lifecycle. The review team reported that for a project as complex as JWST, a 25 to 30 percent total reserve funding was appropriate. The team cautioned that low reserve funding compromised the project's ability to resolve issues, address risk areas, and accommodate unknown problems. The project was baselined in April 2009 with a life-cycle cost estimate of $4.964 billion—including additional cost reserves—and a launch date in June 2014.
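The temperature figures quoted above follow from the standard Kelvin-to-Fahrenheit formula, F = (K - 273.15) x 9/5 + 32. The short sketch below simply verifies the report's conversions; the function name is illustrative, not anything from the report.

```python
def kelvin_to_fahrenheit(kelvin: float) -> float:
    """Convert a temperature in Kelvin to degrees Fahrenheit."""
    return (kelvin - 273.15) * 9.0 / 5.0 + 32.0

# Figures quoted in the report:
print(round(kelvin_to_fahrenheit(300)))  # spacecraft side: 80 (room temperature)
print(round(kelvin_to_fahrenheit(40)))   # ISIM region: -388
print(round(kelvin_to_fahrenheit(6.7)))  # MIRI optics requirement: about -448
```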
Shortly after JWST was approved for development and its cost and schedule estimates were baselined, project costs continued to increase and the schedule was extended. In response to a request from the Chair of the Senate Subcommittee on Commerce, Justice, Science, and Related Agencies to the NASA Administrator for an independent review of JWST—stemming from the project's cost increases and reports that the June 2014 launch date was in jeopardy—NASA commissioned the Independent Comprehensive Review Panel (ICRP). In October 2010, the ICRP issued its report and cited several reasons for the project's problems, including management, budgeting, oversight, governance and accountability, and communication issues. The panel concluded JWST was executing well from a technical standpoint, but that the baseline funding did not reflect the most probable cost with adequate reserves in each year of project execution, resulting in an unexecutable project. Following this review, the JWST program underwent a replan in September 2011 and was reauthorized by Congress in November 2011, which placed an $8 billion cap on the formulation and development costs for the project. On the basis of the replan, NASA announced that the project would be rebaselined with a life-cycle cost of $8.835 billion—a 78 percent increase—and would launch in October 2018—a delay of 52 months. The revised life-cycle cost estimate included 13 months of funded schedule reserve. In the President's Fiscal Year 2013 budget request, NASA reported a 66 percent joint cost and schedule confidence level associated with these estimates. A joint cost and schedule confidence level, or JCL, is the process NASA uses to assign a percentage to the probable success of meeting cost and schedule targets and is part of the project's estimating process. Figure 3 shows the original baseline schedule and the revised 2011 baseline for JWST.
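The rebaseline figures above can be recomputed with simple arithmetic: the growth from the $4.964 billion baseline to $8.835 billion, and the slip from June 2014 to October 2018. The helper functions below are illustrative, not taken from the report.

```python
def percent_increase(old: float, new: float) -> float:
    """Percentage growth from an old value to a new value."""
    return (new - old) / old * 100.0

def months_between(year1: int, month1: int, year2: int, month2: int) -> int:
    """Whole months from (year1, month1) to (year2, month2)."""
    return (year2 - year1) * 12 + (month2 - month1)

print(round(percent_increase(4.964, 8.835)))  # 78 percent cost growth
print(months_between(2014, 6, 2018, 10))      # 52-month launch delay
```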
As part of the replan in 2011, JWST was restructured and is now a single-project program reporting directly to the NASA Associate Administrator for programmatic oversight and to the Associate Administrator for the Science Mission Directorate for technical and analysis support. Goddard Space Flight Center is the NASA center responsible for the management of JWST. See figure 4 for the current JWST organizational chart. In 2012, we reported on numerous technical challenges and risks the project was facing. For example, a combination of numerous instrument delays and leaks in the cryocooler's bypass valves resulted in the use of 18 of ISIM's 26 months of schedule reserve and the potential for more schedule reserve to be consumed. Additionally, we identified that the current JWST schedule reserve lacked flexibility for the last three integration and testing events (OTIS, the spacecraft, and observatory), planned for April 2016 through May 2018. While there was a total of 14 months of schedule reserve for all five integration and test events—when problems are more likely to be found—only 7 months were likely to be available for these last three efforts. We also reported that the spacecraft exceeded the mass limit for its launch vehicle and that project officials had been concerned about the mass of JWST since the inception of the project because of the telescope's size and limits of the launch vehicle. In addition to these technical challenges, we reported that the lack of detail in the summary schedule used for JWST's JCL analysis during the 2011 replan prevented us from sufficiently understanding how risks were incorporated, calling into question the results of that analysis and, therefore, the reliability of the replanned cost estimate.
In our December 2012 report, we made numerous recommendations focused on providing high-fidelity cost information for monitoring project progress, ensuring that technical risks and challenges were being effectively managed, and sustaining oversight. One recommendation was that the project should perform an updated integrated cost/schedule risk, or JCL, analysis. In addition, we recommended that the JWST project conduct a separate review to determine the readiness to conduct integration and test activities prior to the beginning of the OTIS and spacecraft integration and test efforts. NASA concurred with these two recommendations. The JWST project is generally executing to its September 2011 revised cost and schedule baseline. Through the administration's annual budget submissions, NASA has requested funding for JWST that is in line with the rebaseline plan and the project is maintaining 14 months of schedule reserve to its October 2018 launch date. Cumulative performance data from the prime contractor, which is responsible for more than 40 percent of JWST's remaining $2.76 billion in development costs, indicate that work is being accomplished on schedule and at the cost expected. Monthly cost and schedule metrics, however, indicate that this performance has been declining since early 2013. The JWST project is maintaining oversight established as part of the replan, for example, by continuing quarterly NASA and contractor management meetings and instituting a cost and schedule tracking tool for internal efforts. The project, however, is not planning to perform an updated integrated cost and schedule risk analysis, which would provide management and stakeholders with information to continually gauge progress against the baseline estimates. The JWST project is executing to the cost commitment agreed to during the September 2011 rebaseline. Since that time, NASA's funding requests for JWST have been consistent with the budget profile of the new cost rebaseline.
For fiscal year 2013, the funding the project received—almost $628 million—matched the agency's budget request. In addition, the project has been able to absorb cost increases on various subsystems through the use of its cost reserves. Project officials remain confident that they can meet their commitments, and stay within an $8 billion development cost cap recommended by congressional conferees, if funding is provided as agreed during the replan. Performance data from contractors show that planned work was generally being performed within expected costs, but performance has declined over the past year. The project collects earned value management (EVM) cost data from several of its major contractors and subcontractors. EVM data for Northrop Grumman—the project's prime contractor, which is responsible for more than 40 percent of the remaining development costs—indicates that, cumulatively since May 2011, planned work is being performed at the expected cost. This measure, known as the cumulative cost performance index (CPI), provides an indication of how a contractor has performed over an extended period of time. The CPI indicates that until June 2013 the contractor performed slightly more work for the cost incurred than what was expected. Recent monthly performance, however, has begun to lower the cumulative index. From December 2012 until June 2013, monthly CPI data, which give an indication of current performance, show that the contractor has been accomplishing less work than planned for the cost incurred. See figure 5. Although several subsystems are experiencing positive performance, cost overruns on spacecraft-related development activities are contributing to this recent trend. For example, Northrop Grumman has reported negative performance within the spacecraft systems engineering and the electrical power subsystems activities for a 6-month period as of the end of June 2013.
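The cost performance index discussed above is the standard earned value ratio of work accomplished (earned value) to actual cost; the companion schedule performance index divides earned value by the value of work planned. A minimal sketch, using hypothetical dollar figures rather than actual JWST contract data:

```python
def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost performance index: above 1.0 is under cost, below 1.0 is an overrun."""
    return earned_value / actual_cost

def spi(earned_value: float, planned_value: float) -> float:
    """Schedule performance index: above 1.0 is ahead of plan, below 1.0 is behind."""
    return earned_value / planned_value

# Hypothetical cumulative figures (in millions): slightly favorable cost
# performance, as the report describes for the prime contract through mid-2013.
print(round(cpi(1000.0, 990.0), 2))   # 1.01
print(round(spi(1000.0, 1005.0), 2))  # 1.0, essentially on schedule
```

A cumulative index smooths performance over the whole period to date, which is why the report pairs it with monthly indices that surface recent slippage sooner.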
We calculate that this contract, which is approximately two-thirds complete, could experience a slight cost overrun based on current data. Northrop Grumman is using cost management reserves to offset the decline in performance, but the JWST project reports that Northrop Grumman is consuming cost reserves at a rate faster than planned. Contractor EVM cost data for ITT/Exelis—which is providing services related to the OTE and OTIS integration and test efforts—also indicate that in recent months the contractor has been accomplishing less work than planned for the cost incurred. ITT/Exelis has experienced cost overruns in each month from March through June 2013, which has lowered the cumulative CPI to 0.98. Project officials told us that ITT/Exelis has sufficient cost reserves to offset the recent cost overruns and that a cumulative CPI of 0.98 is within the range of acceptable performance. Best practices indicate that a CPI of 1.0 or above is favorable. We found small cost overruns across many elements of the work being performed by ITT/Exelis, similar to the analysis performed by the project. Based on our analysis of EVM data through the end of July 2013, we estimate that this contract could experience a small cost overrun. As of July 2013, ITT/Exelis had completed a little more than one-third of the planned work for this contract and used more than 44 percent of available management reserves from October 2012 to July 2013. In addition to the work being performed by contractors, the JWST project also performs development work internally at NASA's Goddard Space Flight Center. For example, the project internally manages the ISIM development effort that is expected to cost over $1 billion, which includes the first of five major integration and test efforts. The current estimated cost at completion for ISIM as calculated by the project has risen more than $109 million—a 9.8 percent increase—since the 2011 rebaseline of the project.
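One way to read the reserve figures above is to compare the fraction of management reserve consumed with the fraction of work complete; a ratio above 1 suggests reserves are being used faster than progress is being made. This is a rough heuristic of ours, not a GAO or NASA metric, and reserve consumption is rarely linear in practice.

```python
def reserve_burn_ratio(reserve_used_fraction: float,
                       work_complete_fraction: float) -> float:
    """Above 1.0: reserves are being consumed faster than work is completed."""
    return reserve_used_fraction / work_complete_fraction

# Approximate figures from the report for ITT/Exelis: more than 44 percent of
# management reserve used with a little more than one-third of the work complete.
print(round(reserve_burn_ratio(0.44, 1.0 / 3.0), 2))  # about 1.32
```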
The cost overrun is primarily because of late instrument deliveries and is being accommodated through the use of project reserves. The JWST project is executing to the baseline schedule commitment agreed to during the September 2011 rebaseline. The JWST project continues to report 14 months of schedule reserve to its October 2018 launch date, pending a review of the need to use schedule reserve based on the impacts of the government shutdown in October 2013. See figure 6. We found in 2012 that the 7 months of schedule reserve held by the OTE subsystem would likely be used during its integration and test, prior to delivery to OTIS. If the OTE integration and test effort uses schedule reserve beyond those 7 months, it will reduce the amount of schedule reserve available for the last three integration and test efforts. Northrop Grumman officials said that the OTE integration and test effort is very sequential and does not offer much flexibility to allow for changes to the process flow. The integration and test of OTE must be complete for the OTIS integration effort to begin on schedule. In December 2013, the project indicated that the 14 months of total schedule reserve held by the project was being assessed due to delivery problems with portions of the observatory's sunshield and the impact of the government shutdown. Because of instrument and hardware delays and non-availability of a test chamber, the project now reports 7 months of schedule reserve associated with the ISIM integration and test effort before it is needed for integration with the OTE subsystem to form OTIS. Previously, the project reported that ISIM had almost 8 months of schedule reserve, which did not account for the delayed start of the first scheduled cryo-vacuum test—in which a test chamber is used to simulate the near-absolute zero temperatures in space.
The current 7 months of schedule reserve for the ISIM integration and test effort does not include the impact of any potential delays due to the government shutdown in October 2013, which was still being determined in mid-December 2013. The first cryo-vacuum test was considered a risk reduction test by the project because it did not include two of the project's four instruments and was to test procedures and the ground support equipment to be used in later cryo-vacuum tests of ISIM. During the replan, this test was scheduled to begin in February 2013, but was delayed until August 2013 because of several issues, including availability of the test chamber and delays in development and delivery of a radiator for the harness that holds electrical wiring. Project officials said they will adjust the ISIM schedule to minimize the schedule impact by performing some activities concurrently, delaying some activities until after the first cryo-vacuum test, and removing some activities. They added that a recently approved September 2013 revision to the ISIM schedule only reduced schedule reserve by 1 week and no additional risk will be incurred based on these changes to the ISIM schedule. The two subsequent cryo-vacuum tests, however, have slipped up to 2 months in the latest revision to the ISIM schedule, although project officials state that the April 2016 completion date for ISIM testing and delivery to the OTIS integration and test effort remains unchanged. According to the JWST program manager, however, the first cryo-vacuum test was in process when the government shutdown happened and, although many of the testing goals were accomplished through prioritization of test activities, the test was terminated once the ISIM staff resumed work and some activities were not accomplished. As a result, he said that the project would incur more risk in the second cryo-vacuum test that is currently scheduled to start in April 2014.
In addition to maintaining up to 14 months of schedule reserve, the project is generally meeting the milestones it reports to Congress and other external entities. See table 1. These milestones include technical reviews prior to the spacecraft critical design review, hardware tests, and the delivery of key pieces of hardware. As shown in the table, the project has completed the majority of its milestones as planned and has deferred six milestones in the past 2 fiscal years. Among the deferred milestones are delays to completion of the first ISIM cryo-vacuum test and delivery of flight hardware for the MIRI instrument cryocooler. EVM schedule data for Northrop Grumman indicates that the cumulative planned work since the new schedule estimate was agreed upon is being performed as expected. This measure, known as the cumulative schedule performance index (SPI), shows consistent performance at the aggregate level for the past year. However, monthly SPI metrics indicate a slight decline in performance in 9 of the 12 months between August 2012 and July 2013. See figure 7. The data from Northrop Grumman in recent months indicates that work is slightly behind schedule for the spacecraft subsystem. The JWST project has maintained the oversight activities put in place following the replan and added additional oversight mechanisms. For example, some of the oversight activities implemented as part of the 2011 replan that are still ongoing include the following:

The JWST Program Director is holding monthly meetings;

The JWST Program Director is holding quarterly meetings with Northrop Grumman senior management and the Goddard Space Flight Center Director; and

The JWST Project Spacecraft Manager has relocated to provide an on-site presence at the Northrop Grumman facility.

The project also has implemented some new oversight mechanisms since the time of our last review in 2012, according to JWST officials.
For example, the project is implementing a tool to continually update the cost estimate for the internal work on the ISIM development activities. In addition, the project is working with the Space Telescope Science Institute to design a tool, similar to EVM, to monitor progress on ground systems development. The project also has added a financial analyst at the Northrop Grumman facility to provide the spacecraft manager and the project ongoing and increased financial insight into the work being performed by Northrop Grumman and to analyze monthly data prior to the monthly project business meetings with the contractor. In response to our prior recommendation, the project has modified its schedule to add an independent review prior to the beginning of the OTIS and spacecraft integration and test efforts. Despite these improvements in oversight, JWST project officials said that they are not planning to perform an updated integrated cost/schedule risk analysis—or joint cost and schedule confidence level (JCL) analysis—as we recommended in 2012. GAO's cost estimating best practices call for a risk analysis and risk simulation exercise—like the JCL analysis—to be conducted periodically through the life of a program, as risks can materialize or change throughout the life of a project. Unless properly updated on a regular basis, the cost estimate cannot provide decision makers and stakeholders with accurate information to assess the current status of the project. As we recommended in 2012, updating the project's JCL would provide high-fidelity cost information for monitoring project progress. While NASA concurred with our recommendation, project officials have subsequently stated that they do not plan to conduct an updated JCL. A program official stated that the project performs monthly integrated programmatic and cost/schedule risk analyses using various tools and that the information that these tools provide is adequate for their needs.
For example, the JWST project conducts on-going risk identification, assigning probability and dollar values to the risks, tracks actual costs against planned costs to assess the viability of current estimates, uses earned value management, and performs schedule analyses. Moreover, while the JWST program manager acknowledged that NASA concurred with our recommendation, he said that the agency interpreted that it would be sufficient to do these lower level analyses instead of performing an updated JCL. NASA, however, has not addressed the shortcomings of the schedule that supports the baseline itself. For example, we found that the lack of detail in the summary schedule used for JWST’s last JCL in May 2011 prevented us from sufficiently understanding how risks were incorporated, therefore calling into question the results of that analysis. Since the JCL was a key input to the decision process of approving the project’s new cost and schedule baseline estimates, we maintain that the JWST project should perform an updated JCL analysis using a schedule that should now be much more refined and accurate and has sufficient detail to map risks to activities and costs in addition to the other analyses they currently perform. Doing so could help increase the reliability of the cost estimate and the confidence level of the JCL. Furthermore, risk management is a continuous process that constantly monitors a project’s health. The JWST project is still executing to a plan that was based on the JCL performed in May 2011. The risks the project is currently facing are different than those identified during the JCL process more than 2 years ago, and will likely continue to evolve as JWST is still many years from launch. The JWST project has made progress in addressing some technical risks; however, other technical challenges exist that have caused development delays and cost increases at the subsystem level. 
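Conceptually, a joint cost and schedule confidence level is the probability that simulated project outcomes meet both the cost target and the schedule target at once. The Monte Carlo sketch below uses made-up distributions purely to illustrate the idea; NASA's actual JCL is built from a resource-loaded schedule with risks mapped to specific activities and costs, not from simple normal distributions.

```python
import random

def joint_confidence_level(trials, cost_cap, schedule_cap,
                           sample_cost, sample_schedule):
    """Fraction of simulated outcomes that meet BOTH the cost and schedule caps."""
    hits = 0
    for _ in range(trials):
        if sample_cost() <= cost_cap and sample_schedule() <= schedule_cap:
            hits += 1
    return hits / trials

# Hypothetical distributions (billions of dollars; months to launch).
random.seed(1)
level = joint_confidence_level(
    trials=10_000,
    cost_cap=8.835,
    schedule_cap=52,
    sample_cost=lambda: random.gauss(8.5, 0.4),
    sample_schedule=lambda: random.gauss(48, 3),
)
print(f"JCL is roughly {level:.0%}")
```

Because risks materialize and change over a project's life, rerunning such an analysis against a current, detailed schedule is what keeps the confidence level meaningful.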
The project and its contractors have nearly addressed a problematic valve issue in the MIRI cryocooler that has been a concern for several years, the OTE and ISIM development efforts have made progress over the past year, and both the project and contractors have remedied the spacecraft mass issue that we reported on last year. The project has other technical issues, however, that still need to be resolved. For example, there is a separate and significant performance issue with the cryocooler, and though project officials state that they understand the issue, the subcontractor is still working to validate the changes made to the cryocooler to address the issue. These issues with the cryocooler have led to an increase of about 120 percent in cryocooler contract costs, and the execution of the remaining cryocooler effort will be challenging. In addition, the OTE and ISIM efforts are still addressing risks that threaten their schedules. Despite progress in some areas, the cryocooler development effort has been and remains a technical challenge for the project. The cryocooler subcontractor has addressed much of the valve leak issue that we reported on in 2012, and all but the last of the replacement valves, which were produced with new seal materials, have successfully completed testing. While resolution of this issue will be a positive step for the project, other, still unresolved issues with the cryocooler have arisen that have required additional cost and schedule resources to address. Specifically, a key component of the cryocooler underperformed prior tests of this technology by about 30 percent. In addition, both the Jet Propulsion Laboratory (JPL)—which awarded the cryocooler subcontract—and the subcontractor were focused on addressing the valve issue, which limited their attention to the cooling underperformance issue. In late 2012, the cryocooler subcontractor reported that it would be unable to meet the cryocooler schedule.
The subcontractor is working toward a revised test schedule, agreed upon in April 2013, which delays acceptance testing and includes concurrent testing of hardware. In August 2013, the cryocooler subcontract was modified to reflect a 69 percent cost increase. Additionally, the number of subcontractor staff assigned to the cryocooler subcontract has increased from 40 to approximately 110, which accounts for a significant portion of the cost increase. This was the second time in less than 2 years that the cryocooler subcontract was modified. Cumulatively, the cryocooler subcontract value has increased by about 120 percent from March 2012. Various issues may have contributed to the current problems with the cryocooler. For example, project and JPL officials said they had not verified the cryocooler cost and schedule estimates provided by the subcontractor prior to the project establishing new baseline cost and schedule estimates in 2011. Doing so may have allowed them to ensure adequate resources were accounted for in the new baseline estimates. JPL officials stated that the subcontractor proposal was verified prior to the completion of the March 2012 cryocooler replan. The subcontractor, however, reported that the 2012 replan did not include cost or schedule allowance for rework should additional problems arise, which did happen. In addition, despite erratic and negative EVM data from the subcontractor immediately following the March 2012 cryocooler replan, an in-depth review was not initiated by the cryocooler subcontractor until 9 months later. JPL officials stated that, during this time, they were performing analysis of the EVM data and the technical progress of the subcontractor and provided the results of their analysis to the project. Finally, the project had not followed key best practices since early in development, which left it at an increased risk of cost and schedule delays.
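Successive contract modifications compound rather than add, which is consistent with the figures above: a roughly 30 percent increase at the March 2012 replan (a back-of-the-envelope figure we infer for illustration, not one stated in the report) followed by the 69 percent August 2013 modification yields about 120 percent cumulative growth.

```python
def compounded_increase(*percent_increases: float) -> float:
    """Total percentage growth from a sequence of percentage increases."""
    factor = 1.0
    for pct in percent_increases:
        factor *= 1.0 + pct / 100.0
    return (factor - 1.0) * 100.0

# Illustrative: increases compound, so 30% followed by 69% is about 120%, not 99%.
print(round(compounded_increase(30, 69)))  # 120
```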
For example, best practices call for testing of a model or prototype of a critical technology in its flight-like form, fit, and function and in a simulated realistic or high-fidelity lab environment by its preliminary design review. While the subcontractor tested a demonstration model of the cryocooler in such an environment and the project assessed the technology as mature in 2008, a project official acknowledged that the demonstration model's mechanical design was different from what would be used in space and, according to that official, those differences led to the loss of performance between the demonstration model and the current cryocooler. In addition, only 60 percent of the cryocooler's expected design drawings were released as of the mission critical design review—well below the best practice standard of 90 percent of drawings released by critical design review—indicating that the project moved forward without a stable cryocooler design and with an immature cryocooler technology, which increases risk. The execution of the remaining cryocooler schedule will continue to be challenging as the performance issue is not resolved, the revised schedule is optimistic, the subcontractor has identified significant risks not incorporated in the rebaseline, and there are risks associated with the revised testing approach. The cryocooler subcontractor has developed a separate verification model, which is now being used to validate that the cryocooler redesign will address the underperformance. This step is important because, according to the cryocooler subcontractor program manager, the internal structures of the cryocooler component are intricate and once a unit is completed the internal structure cannot be modified. Thus, when issues arise, such as use of incorrect parts or unexpected underperformance, a new unit must be built rather than simply changing parts on the underperforming cryocooler component.
Testing of the verification model, which will give an indication of whether the performance issue has been rectified and a new flight model can be built, was scheduled to be complete in October 2013, but has been delayed. The subcontractor project manager reports that issues were found with processes used to assemble the verification model that must be resolved before testing resumes, which is not expected until at least late December 2013. This delay may reduce the amount of schedule margin available to the overall cryocooler effort. The cryocooler schedule—agreed upon in April 2013—was optimistic, according to the cryocooler subcontractor program manager. Shortly after the new schedule was put in place, he told us that he had low confidence that the subcontractor would be able to meet this schedule based on the development issues mentioned above. In addition, the JPL scheduler for the cryocooler said that he had only moderate confidence of the subcontractor’s ability to meet this schedule. In line with their concerns, the cryocooler subcontractor recently depleted all of its schedule reserve for deliveries to JPL prior to the start of acceptance testing. The cryocooler subcontractor also identified other risks that could impact its execution of the subcontract, but that were not included as part of the rebaseline plan in the modified subcontract. The project retained financial responsibility for addressing those risks, should they arise, at the project level by identifying over $8 million in cost reserves in fiscal years 2014 and 2015. However, some of these risks could require significantly more than $8 million to address. For example, the cryocooler subcontractor program manager stated that some of these risks, if realized, could take a year to mitigate. 
As of September 2013, delivery dates agreed to in April 2013 for all of the major flight and spare cryocooler components had been delayed, all six weeks of schedule reserve held by the cryocooler subcontractor had been exhausted, and the start of acceptance testing at JPL had been delayed. Any further delays will have to be accommodated through the use of 12 weeks of schedule reserve held by JPL. The cryocooler subcontractor also recently began reporting EVM data based on the latest cost and schedule estimates and, in line with the delays mentioned above, these data already show that work is costing more and taking longer than planned. JPL's schedule reserve also has to support any issues that arise during acceptance and end-to-end testing of the cryocooler hardware prior to delivery to the spacecraft integration and test effort. In an effort to reduce this risk, the project reordered the integration and test schedule. This removed some, but not all, of the cryocooler component testing schedule risk; the remaining risk may limit the project's ability to address issues that arise during component testing. Specifically, two major spare components of the cryocooler will still be in acceptance testing when spacecraft integration and test begins in April 2016, which is also a risk to the spacecraft integration and test schedule. For example, if a particular cryocooler component fails during one test and a spare component is still undergoing acceptance testing, then the test schedule may be delayed while repairs are made to the component or until the spare component is available. Northrop Grumman has made progress on the OTE, but the project expects the contractor to use its current schedule reserve, and the OTE is facing risks that, if realized, may impact the schedule. Progress has been made over the past year in fabricating the OTE support structure, which holds the mirrors and ISIM and connects all the pieces of the observatory. 
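The EVM indicators referenced above are derived from three standard quantities: planned value (PV), earned value (EV), and actual cost (AC). The following is a minimal sketch of the standard formulas; the dollar figures are hypothetical and not drawn from the subcontractor's actual reporting.

```python
def evm_metrics(pv, ev, ac):
    """Standard earned value management (EVM) indicators.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost (actual cost of work performed)
    """
    return {
        "cost_variance": ev - ac,      # negative -> work costing more than planned
        "schedule_variance": ev - pv,  # negative -> work taking longer than planned
        "cpi": ev / ac,                # cost performance index, < 1.0 is unfavorable
        "spi": ev / pv,                # schedule performance index, < 1.0 is unfavorable
    }

# Hypothetical monthly status, in $M: $10M of work planned, $8M earned, $11M spent.
m = evm_metrics(pv=10.0, ev=8.0, ac=11.0)
print(m["cost_variance"], m["schedule_variance"])  # prints -3.0 -2.0
```

A subcontractor reporting both indices below 1.0, as described above, is simultaneously over cost and behind schedule on the work performed to date.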
Specifically, all of the support structure sections have been completed and fully integrated and the structure has entered cryovacuum testing. The project is tracking an issue with release mechanisms holding the spacecraft and the OTE together while stowed within the launch vehicle and used during the deployment of the telescope after launch. Currently the mechanisms are causing excessive shock vibration when released. According to a NASA official, the project and the contractor are evaluating potential solutions which include changes to the design of the release mechanism, using damping materials to lessen the impact to the spacecraft, and testing to see if the shock requirement can be relaxed. The project has delayed the release mechanism design review until January 2014—after the spacecraft critical design review—while it works to mitigate the issue with contractors. Project officials stated the results of this component level design review will be evaluated prior to a larger mission review to be held later in 2014. In December 2013, the project was also assessing the possibility that portions of the observatory’s sunshield may be delivered up to 3 months late, which could impact the amount of schedule reserve being held by the project. The project indicates that it is considering options by the contractor to recover some of that potential schedule delay. The project has made progress on various portions of the ISIM as well. For example, two of the four instruments have been integrated into the ISIM for testing and fabrication of replacement near infrared detectors used in three of the four instruments—which we reported in 2012 may need to be replaced—is ahead of schedule. Prior schedule conflicts with another NASA project, however, delayed the start of the ISIM integration and test effort and instrument and component delays are further threatening the ISIM integration and test schedule which may lead to additional cost increases. 
The project has already replanned the ISIM schedule flow due, in part, to delays with the Near-Infrared Camera (NIRCam) and Near-Infrared Spectrograph (NIRSpec) instruments. Specifically, the NIRSpec instrument and NIRCam's optics were delivered more than a year behind schedule. NIRSpec completed environmental testing and was delivered to Goddard in late September 2013. An electronics component of the NIRCam instrument, however, failed functional testing following a vibration test, possibly due to manufacturing defects. The contractor has developed an approach to screen similar components to verify whether those components have similar anomalies. If the components pass the screening process, then environmental testing will continue with a spare in place of the component that malfunctioned. If all of the components show similar anomalies, they will be restricted from vibration tests and used in other testing until replacement components are ready. This issue may impact the already delayed start of the second and third ISIM cryo-vacuum tests, which would further compress the ISIM integration and test schedule or require the project to use some of ISIM's schedule reserve. Because the ISIM schedule has already been compressed, the project will have less flexibility should any issues or delays arise during this effort. The project is covering the current ISIM-related cost increase—9.8 percent—primarily with funding reserves. Extending the length of time needed to conduct the ISIM integration and test effort, should there be further delays, would require maintaining test personnel and facilities longer than planned, which may lead to further cost increases. Northrop Grumman has successfully addressed the spacecraft mass issue that we reported on in 2012, and project officials state that they are comfortable with the observatory mass margin as the project heads into multiple major integration and test efforts, despite the mass margin being lower than Goddard standards. 
In December 2012, we reported that the spacecraft was more than 200 kilograms over its mass allocation. In November 2013, Northrop Grumman officials stated that the spacecraft was under its mass allocation at that time. Since December 2011, both the contractor and the project made mass reduction a priority and the contractor currently has margin available to address future issues that may require additional mass to solve. The project’s current overall mass margin is approximately 7.7 percent, which does not include 90 kilograms of additional mass allocation the project received in 2013 from the launch vehicle provider. This is lower than the Goddard standard of 15 percent mass margin at this phase of development. According to project officials, they applied the Goddard standard at the subsystem level rather than at the observatory level due to JWST’s complexity, which allowed them to maintain a lower overall observatory mass margin. They added that the observatory and its component elements have an acceptable amount of mass margin as the project enters its major integration and test efforts and, while they will maintain standard mass controls to avoid unnecessary growth, they do not expect mass margins to be a significant concern going forward. We plan to continue to monitor mass margin in future reviews as the project proceeds through integration and test efforts. Several current near-term funding constraints such as low cost reserves, a higher-than-expected rate of spending, and potential sequestration impacts are putting at risk NASA’s ability to meet its cost and schedule commitments for JWST. In September 2013, project officials reported that while they are making good technical progress, the level of cost reserves held by the project in fiscal year 2014 had become the top issue facing the project and may require them to defer future work. 
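Mass margin percentages like those cited above are simple ratios, though conventions differ on whether the denominator is the current mass estimate or the allocation. The sketch below uses one common convention; the kilogram figures are hypothetical, chosen only to show how a margin near 7.7 percent arises, and are not JWST's actual mass data.

```python
def mass_margin_pct(allocation_kg, current_estimate_kg):
    # Margin expressed as a percentage of the current mass estimate;
    # some organizations divide by the allocation instead, which yields
    # a slightly smaller number for the same inputs.
    return 100.0 * (allocation_kg - current_estimate_kg) / current_estimate_kg

# Hypothetical figures for illustration only:
# a 6,500 kg allocation against a 6,035 kg current estimate.
print(round(mass_margin_pct(6500.0, 6035.0), 1))  # prints 7.7
```

Under either convention, adding allocation (such as the 90 kilograms granted by the launch vehicle provider) raises the margin without any hardware change, which is why the reported 7.7 percent excludes that allocation.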
Although not currently identified as an issue by the project, a significant portion of fiscal year 2015 project-held cost reserves has also already been allocated. This does not take into account reserves held by the JWST program at NASA headquarters in fiscal years 2014 and 2015 that can be used to supplement reserves held by the project. However, fiscal year 2014 program reserves are minimal compared to future years. As of September 2013, the project had allocated approximately 60 and 42 percent of its reserves in fiscal years 2014 and 2015, respectively. See figure 8. The need to allocate a significant portion of cost reserves in fiscal years 2014 and 2015 has been driven primarily by the technical issues with the MIRI cryocooler. Specifically, the subcontract modification resulting from the cryocooler replan required the allocation of over $25 million of cost reserves in fiscal years 2014 and 2015. After allocating these cost reserves, the project began tracking the risk of low fiscal year 2014 cost reserves. Project officials report that the project's low reserve posture in fiscal year 2014 may require them to defer work to future years. Specifically, because the project continues to maintain 14 months of funded schedule reserve, it may begin using some of that schedule reserve to conduct work later or allow work to take longer than planned. There are risks associated with this approach, however. For example, prior to the project's replan in 2011, low cost reserves and technical challenges forced project management to defer planned work into future years. This ultimately led to increased costs for the deferred work and a schedule that was unsustainable. Much of the remaining work on JWST involves the five major integration and test efforts—which began in fiscal year 2011—during which work is often sequential in nature and cost and schedule growth typically occurs. 
Depleting schedule reserve now could impact project officials' ability to address technical risks or challenges not currently identified or realized, but that will likely arise during this phase. Project officials said that they would like to strike a balance between using remaining cost reserves and having to utilize schedule margin to complete planned work and address currently unknown technical challenges, but their goal is to use as little schedule margin as possible in fiscal year 2014. Northrop Grumman has also identified issues with the adequacy of its cost management reserves in fiscal year 2014. The project shares this concern, given that Northrop Grumman's cost reserves are eroding faster than anticipated. As of October 2012, the contractor held more than $244 million in cost management reserves for the remainder of the contract, but has used almost 24 percent of those management reserves since then. The approximately $185 million in cost management reserves Northrop Grumman has available as of September 2013 represents the total amount of reserves available through the remainder of the contract—almost 6 years—not how much is available for use specifically in fiscal year 2014. The contract modification for the 2011 replan was signed in December 2013 and, according to the Northrop Grumman program manager, the amount of management reserve available will likely increase by more than $45 million once budget distributions are completed by the end of January 2014. In June 2013, Northrop Grumman had identified up to $80 million in potential risks for fiscal year 2014. Project officials said that Northrop Grumman will sometimes fund new contract requirements for future fiscal years with current-year cost reserves. These officials added that they are in the process of determining whether the rate at which Northrop Grumman is spending cost reserves is a result of additional requirements or of performance issues. 
According to JWST project analysts, Northrop Grumman cost management reserves also remain a challenge in fiscal year 2015 when compared to the potential threats. The JWST project manager said that the project could rephase some planned Northrop Grumman cost management reserves from future years to fiscal year 2014 instead, but that would require the project to use some of its fiscal year 2014 cost reserves, which as noted are already constrained. As noted earlier, the JWST Program at NASA headquarters maintains another set of cost reserves that could be used to help in situations such as this, but the bulk of these reserves will not be available until fiscal year 2015. The project’s rate of spending in fiscal year 2013 could also be a significant issue if it continues into fiscal year 2014 and officials have begun tracking the rate of spending as a risk. The project spent approximately $40 million more than planned in fiscal year 2013. According to program officials, the amount of this overage is becoming significant not because of a lack of funds in fiscal year 2013, but because the fiscal year 2014 budget and project cost reserves are constrained. Project officials said that they planned to carry over funding from fiscal year 2013 to support approximately 2½ months of work to help fund contracts and ensure continued operations during a potential continuing resolution or other periods of funding uncertainty. If the project were to receive its full funding allocation for fiscal year 2014 at the level planned, this 2013 money would supplement the money available to the project in 2014. But if the current rate of spending is sustained, the project would only carry over enough 2013 money to fund the project for about 7 to 8 weeks into fiscal year 2014. The lower amount of funding carried over will also cause the project to have less available to supplement shortfalls in future years. 
For example, the JWST program manager told us that Northrop Grumman has requested more funding in fiscal year 2014 than the amount planned. Program officials noted that if the project continues to spend in fiscal year 2014 at the rate experienced during the latter part of fiscal year 2013, it may not be able to carry any funds into fiscal year 2015 as planned. Project officials, however, indicate that they are confident that they will carry over funds into fiscal year 2015. Our review of the data found that the project's increased spend rate in fiscal year 2013 is due mainly to additional resources necessary for the ISIM due to late hardware deliveries, the cryocooler effort, and the Northrop Grumman effort to prepare for the spacecraft critical design review in January 2014. NASA's ability to remedy these issues will likely be significantly hindered by the potential impacts of sequestration and competing demands from other major projects. For example, while NASA officials report that the agency was able to absorb the sequestration-related reductions in fiscal year 2013 with relatively little impact on its major projects, including JWST, they indicate that the agency cannot sustain all of its long-term funding commitments at sequester levels in fiscal year 2014 and beyond. Importantly, the JWST project recently began tracking a risk for budget uncertainty due to sequestration. The risk outlines that there is a potential cut to the JWST budget starting in fiscal year 2014, which could adversely affect the execution of the project's current plan and potentially jeopardize the October 2018 launch date. The program office indicates that NASA headquarters directed JWST to plan for its fiscal year 2014 budget to be consistent with the replan. This direction by NASA could have an impact on other major NASA projects. 
In interviews for several other major NASA projects, officials informed us that they have less than adequate funding in fiscal year 2014, and some have requested that the agency rephase funds from later years to fiscal year 2014 to address the issue. If additional funds are required and prioritized for JWST, there could be a potentially significant impact on these and other projects within the agency that are already reporting funding issues in fiscal year 2014. The reliability of the JWST integrated master schedule is questionable because some of the 23 subordinate schedules synthesized to create it are lacking in one or more characteristics of a reliable schedule. Schedule quality weaknesses in the JWST subsystem schedules transfer to the integrated master schedule. This year's result is consistent with our 2012 analysis, in which weaknesses in the two subsystem schedules we examined undermined the reliability of the integrated master schedule. According to scheduling best practices, the success of a program depends in part on having an integrated and reliable master schedule that defines when work will occur, how long it will take, and how each activity is related to the others that come both before and after it. If the schedule is dynamic, planned activities within the schedule will be affected by changes that occur during a program's development. For example, if the date of one activity changes, the dates of its related activities will also change in response. The master schedule will be able to identify the consequences of changes and alert managers so they can determine the best response. The government project management office, in this case the JWST project office at Goddard Space Flight Center, is ultimately responsible for the integrated master schedule's development and maintenance. 
The quality and reliability of the three selected subsystem schedules we examined for this review—ISIM, OTE, and cryocooler—were inconsistent in following the characteristics of high-quality, reliable schedules. Using the 10 best practices for schedules, we individually scored and evaluated the schedules for these subsystems. We then grouped the best practices into one of four characteristics: comprehensive, well-constructed, credible, and controlled. The individual best practice scores within each characteristic were then combined to determine the final score for each characteristic. See appendix III for more detailed information on each characteristic and its corresponding best practices. The ISIM and OTE schedules had more strengths than weaknesses, substantially meeting three of the four characteristics of a reliable schedule. The cryocooler schedule demonstrated weaknesses in both of the characteristics we examined. We selected these three subordinate schedules because they represent a significant portion of ongoing work for the project and reflect work by the project, the prime contractor, and a subcontractor. Table 2 identifies the results for each of the selected JWST subordinate schedules and their corresponding best practice subscores. Of the four characteristics of a reliable schedule that we assessed for the ISIM schedule, we found that three substantially met the criteria—comprehensive, well-constructed, and controlled—while the credible characteristic was partially met. The strengths of the ISIM schedule were that it captured all activities in manageable durations with their proper sequence, identified the longest continuous sequence of activities in the schedule, known as its critical path, and estimated reasonable amounts of total float, defined as the time activities can slip before delaying key delivery dates. NASA also maintains a baseline schedule that is regularly analyzed and updated as progress is made. 
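The critical path and total float concepts used throughout this assessment can be computed with a standard forward and backward pass over an activity network. The sketch below uses a hypothetical four-activity network; the activities, durations, and links are illustrative, not drawn from the JWST schedules.

```python
# Toy activity network: durations in working days and predecessor links.
durations = {"A": 5, "B": 3, "C": 7, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: earliest start (es) and earliest finish (ef) for each activity.
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

project_finish = max(ef.values())

# Backward pass: latest finish (lf) and latest start (ls) for each activity.
succs = {a: [b for b in order if a in preds[b]] for a in order}
lf, ls = {}, {}
for a in reversed(order):
    lf[a] = min((ls[s] for s in succs[a]), default=project_finish)
    ls[a] = lf[a] - durations[a]

# Total float: how long an activity can slip without delaying project finish.
# Zero-float activities form the critical path.
total_float = {a: ls[a] - es[a] for a in order}
critical_path = [a for a in order if total_float[a] == 0]
print(project_finish, total_float, critical_path)
```

In this toy network the path A-C-D drives the 14-day finish, while B can slip 4 days without consequence. Missing logic links or hard date constraints distort these float values, which is how a schedule ends up showing hundreds of days of float or an invalid critical path.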
However, the schedule lacked a schedule risk assessment—a best practice that gives decision makers confidence that the estimates are credible based on known risks and allows management to account for the cost of a schedule slip when developing the life-cycle cost estimate. Without a schedule risk assessment decision makers may not obtain accurate cost impacts when schedule changes occur. Officials noted that while a schedule risk assessment was not performed on the ISIM schedule itself, the schedule was included as a part of the overall JWST JCL analysis, and subsequent cost and schedule estimate, conducted during the project replan in 2011. However, our analysis of the 2011 JCL indicated that the estimate’s accuracy, and therefore the confidence level assigned to the estimate, was reduced by the quality of the summary schedule used for the JCL because it did not provide enough detail to determine how risks were applied to critical project activities. Of the four characteristics of a reliable schedule that we assessed for the OTE schedule, we found that the comprehensive characteristic was fully met, credible and controlled characteristics were substantially met, and the well-constructed characteristic was partially met. The strengths of the OTE schedule were that it captured all activities in manageable durations with their proper sequence, identified the resources needed for each activity, linked activities to the final deliverables the work in the schedule is intended to produce, and accurately reflected dates presented to management in high-level presentations. Northrop Grumman, the creator and manager of the schedule, also maintains a baseline schedule that is regularly analyzed and updated as progress is recorded by schedule experts. 
However, while Northrop Grumman has identified a critical path, our analysis was not able to confirm that this path described the activities in the schedule that were truly driving the key delivery date for the OTE, which is the delivery of the OTE for OTIS integration and testing at Goddard Space Flight Center on April 28, 2016. Identifying a valid critical path is essential for management to identify and focus on activities that, if they slip, could have detrimental effects on key project milestones and deliverables. In addition, we found that one-third of the remaining activities and milestones had over 200 days of total float. This means that, according to the schedule, these activities could be delayed 9 working months without impacting the key delivery date. Realistic float values allow managers to see the impact of a delayed activity on future work. However, unrealistic estimates of float make it difficult to know the amount of time one event can slip without impacting the project finish date. In addition, incorrect float estimates will result in an invalid critical path. Northrop Grumman officials agreed with our assessment but noted that the high values of total float are due to their planning process, which details the schedule only in 6-month increments. Activities beyond the detailed planning window of the schedule have high float, and those estimates of float will become more reasonable as the schedule is planned in detail. However, best practices state that all activities in the schedule, even far-term planning packages, should be logically linked in such a way as to portray a complete picture of the program's available float and its critical path. Finally, a schedule risk assessment has not been conducted on the OTE schedule since 2011. Northrop Grumman officials stated that they are not contractually required to periodically conduct a schedule risk assessment. 
However, as with the ISIM, without a schedule risk assessment, decision makers may not have accurate cost impacts when schedule changes occur. Of the two characteristics of a reliable schedule that were assessed for the cryocooler schedule, the well-constructed and credible characteristics were both partially met. The strengths of the cryocooler schedule were that it had a logical sequence of activities with few missing logic links and few issues with incorrect logic that might impair the ability of the schedule to forecast dates dynamically. Despite these strengths, two of the ultimate goals of a reliable schedule—determining a valid critical path and realistic total float—were only partially achieved. Officials stated that the schedule is used to manage critical paths to six major hardware deliveries, or key delivery dates. However, we could not determine how the schedule is used to identify and present those paths to management. In addition, the use of date constraints in 19 activities within the schedule helps determine the remaining total float to some deliveries, but causes an overabundance of activities to appear as critical, which interferes with the identification of the true project-level critical path. We also found that while the schedule accurately reflected some of the delays the project is currently experiencing, it appears to be overly flexible in some cases, such as having activities with over 500 days—or over 2 working years—of total float. Incorrect float estimates may result in an invalid critical path and an inaccurate assessment of project completion dates. The schedule also lacks a complete and credible schedule risk analysis, without which managers cannot determine the likelihood of the project's completion date, how much total schedule risk reserve funding is needed, which risks are most likely to delay the project, or how much reserve funding should be included for each individual risk. 
Northrop Grumman officials, who manage the schedule and the project, stated that a schedule risk analysis was performed in March 2013, but the results were not used by JPL management, which oversees the contract. The results of the schedule risk analysis may help JPL determine the probability of meeting key dates or how much schedule contingency is needed. Officials provided us examples of the schedule risk analysis output, but we were not able to confirm their validity because documentation was not available on the data, risks, or methodologies. In addition to the lack of documentation, because we found the schedule to be only partially well-constructed, we cannot be sure that the results of the schedule risk analysis are valid. Given the weaknesses noted above, if the schedule risk analysis is to be credible, the program must have a quality schedule that reflects reliable logic and clearly identifies the critical path before the analysis is conducted. If the schedule does not follow best practices, confidence in the schedule risk analysis results will be lacking. Without the schedule risk analysis, the project office cannot rely on the schedule to provide a high level of confidence in meeting the project's completion date or to identify reserve funding for unplanned problems that may occur. The JWST project has maintained its cost and schedule commitments since its 2011 replan, has continued to make good technical progress, and has implemented and enhanced efforts to improve oversight. Nevertheless, inherent risks continue to make execution of the JWST project challenging, and near-term indicators show that the project is currently facing challenges that need to be addressed, primarily through increased reserves and progress tracking with proper tools. 
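A schedule risk analysis of the kind discussed here is typically a Monte Carlo simulation: each activity duration is drawn from a distribution built on three-point (minimum, most likely, maximum) estimates, and the fraction of trials finishing by the committed date approximates the schedule confidence level. The sketch below uses a hypothetical serial chain of activities and a hypothetical 90-day commitment; none of the figures come from the JWST schedules.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Three-point (min, most likely, max) duration estimates, in working days,
# for a simplified serial chain of activities. All figures are hypothetical.
activities = [(20, 25, 40), (10, 12, 20), (30, 35, 60)]
target = 90  # hypothetical committed finish, in working days

trials = 20000
hits = 0
for _ in range(trials):
    # Draw each duration from a triangular distribution and sum the chain.
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
    if total <= target:
        hits += 1

confidence = hits / trials
print(f"P(finish <= {target} days) ~ {confidence:.2f}")
```

Real analyses run against the full networked schedule rather than a serial chain, and correlate risks across activities; this is why a risk analysis built on an unreliable schedule, with distorted logic or float, produces confidence levels that cannot be trusted.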
Our report, however, indicates that the project may not have the appropriate resources and high fidelity information to ensure execution as planned and provide realistic information to decision makers and other stakeholders. For example, near-term cost reserves are constrained and the project is spending at a higher rate than planned. Without adequate cost reserves in the near-term and if its increased rate of spending continues, the project may need to defer planned work and delay the resolution of future and yet unknown threats. These actions could put the project on a course to repeat past missteps that led to congressional intervention and the institution of a cap on development costs. In addition, the effect sequestration would have on available funding for the project in fiscal year 2014 and beyond is unknown at this point, but could potentially compound this issue. As a result, NASA may need to make difficult decisions about funding JWST adequately at the expense of other, already cash-strapped projects. Importantly, JWST project officials may not have the necessary information to determine the impacts of any resource issues because the project currently lacks a reliable integrated master schedule due to weaknesses we found in several subschedules. Without a reliable schedule, project officials cannot accurately manage and forecast the impacts of changes to the schedule that will likely come about during the integration and testing periods. Despite these concerns, the JWST project has declined to take adequate steps to address our recommendation to perform an updated cost and schedule risk analysis—or JCL—that is based on current risks and a reliable schedule. 
Unless properly updated to include a reliable schedule that incorporates known risks, particularly if NASA is faced with additional resource constraints through the continuation of sequestration, the cost estimate will not provide decision makers with accurate information to assess the current status of the project. To help ensure that NASA officials are making decisions using up-to-date and reliable information about the JWST project, Congress should consider requiring the NASA Administrator to direct the JWST project to conduct an updated joint cost and schedule confidence level analysis that is based on a reliable schedule and current risks. We recommend that the NASA Administrator take the following two actions:

- In order to ensure that the JWST project has sufficient available funding to complete its mission and meet its October 2018 launch date and reduce project risk, ensure the JWST project has adequate cost reserves to meet the development needs in each fiscal year, particularly in fiscal year 2014, and report to Congress on steps it is taking to do so.

- In order to help ensure that JWST program and project management has reliable and accurate information that can convey and forecast the impact of potential issues and manage the impacts of changes to the integrated master schedule, perform a schedule risk analysis on the OTE, ISIM, and cryocooler schedules, as well as any other subschedules for which a schedule risk analysis was not performed. In accordance with schedule best practices, the JWST project should ensure that the risk analyses are performed on reliable schedules.

NASA provided written comments on a draft of this report. These comments are reprinted in appendix IV. In responding to a draft of this report, NASA concurred with our two recommendations; however, in some cases it is not clear either what actions NASA plans to take or when it will complete them to satisfy the intent of the recommendations. 
NASA officials concurred with our recommendation to ensure the JWST project has adequate cost reserves to meet its development needs in each fiscal year, particularly in fiscal year 2014, and to report to Congress on the steps it is taking to do so. In its response, the Acting JWST Program Director cited NASA's and the administration's request that Congress appropriate the full JWST replan-level funding for fiscal year 2014, which includes the level of unallocated future expenses, or cost reserves, established in the replan. He also commented that NASA conducts monthly reviews to evaluate risks and associated impacts to funding in order to ensure that adequate cost reserves are available in each fiscal year. We acknowledge in our report that the JWST project has been fully funded at levels commensurate with the 2011 baseline through fiscal year 2013. However, the cost reserves approved for the project during the 2011 replan were based on the risks known at that time. The events of fiscal year 2013 have weakened the project's financial posture and reduced the flexibility the project has to address any potential technical challenges going into fiscal year 2014 and beyond. In addition, NASA's response does not indicate how the agency plans to report to Congress the steps it is taking to ensure that the JWST project has adequate cost reserves to meet its October 2018 launch date. We maintain that NASA should provide more detail to Congress on its plans, given the project's already constrained cost reserve posture early in fiscal year 2014 and past issues in which low levels of cost reserves forced the project to defer work, leading to significant cost increases and schedule delays.
NASA officials concurred with our recommendation to perform a schedule risk analysis on the OTE, ISIM, and cryocooler schedules, as well as any other subschedules for which a schedule risk analysis was not performed, and to ensure that, in accordance with schedule best practices, the risk analyses are performed on reliable schedules. The Acting Program Director stated that NASA will conduct probabilistic schedule risk analyses on the OTE, ISIM, and cryocooler schedules by the end of calendar year 2014 using NASA best practices. This is a positive step, given that our previous work has found that GAO and NASA best practices for scheduling are largely consistent. The Acting Program Director also stated that NASA will conduct the same analyses for other schedules lacking a risk analysis. However, he did not state when these analyses would be completed or how many schedules would be affected. Having reliable schedules sooner will provide management with more timely and accurate information on which to base decisions. If the schedule risk assessments are not completed until after 2014, the project will have less than 4 years until launch to use the information these analyses can provide. Given that we have found reliability issues with the project's schedules for the second year, improving the current schedules to meet best practices is important to provide management with better tools to understand schedule risks and manage the project. We are sending copies of this report to NASA's Administrator and interested congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V.

Our objectives were to assess (1) the extent to which the James Webb Space Telescope (JWST) project is meeting its cost and schedule commitments and maintaining the oversight established as part of the project's replan, (2) the current major technological challenges facing the JWST project, (3) the extent to which cost risks exist that may threaten the project's ability to execute as planned, and (4) the extent to which the JWST project schedule is reliable based on best practices. In assessing earned value management (EVM) data from several contractors and subcontractors and the project's schedule estimate, we performed various checks to determine that the data provided were sufficiently reliable for our purposes. To assess the extent to which the JWST project is meeting its cost and schedule commitments and maintaining oversight, we reviewed project and contractor documentation, analyzed the progress made and any variances against milestones established during the project's replan in 2011, and interviewed project, contractor, and Defense Contract Management Agency officials. We reviewed project monthly status reviews, documentation on project risks, and budget documentation. We examined and analyzed EVM data from several contractors and subcontractors. The EVM data reviewed included monthly contractor performance reports and the JWST project's analysis of this information. For our analysis, we entered only high-level monthly contractor EVM data into a GAO-developed spreadsheet, which includes checks to ensure the EVM data provided were sufficiently reliable for our purposes. We also reviewed the project's analysis of the estimate at completion for internal work being performed on the Integrated Science Instrument Module.
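Monthly contractor performance data of this kind are typically reduced to a few standard earned value indicators. The sketch below shows those common formulas; the figures are invented for illustration, and this is not GAO's actual spreadsheet or the JWST contractors' data.

```python
# Standard earned value management (EVM) indicators, computed from the three
# basic monthly inputs. Example values are invented, not actual JWST data.

def evm_metrics(bcws, bcwp, acwp):
    """Compute EVM indicators from budgeted cost of work scheduled (BCWS),
    budgeted cost of work performed (BCWP, the earned value), and actual
    cost of work performed (ACWP), all in the same currency units."""
    return {
        "cost_variance": bcwp - acwp,      # negative means over cost
        "schedule_variance": bcwp - bcws,  # negative means behind schedule
        "cpi": bcwp / acwp,                # cost performance index (<1 is unfavorable)
        "spi": bcwp / bcws,                # schedule performance index (<1 is unfavorable)
    }

# Example month: $95M of work earned against $100M scheduled, at $105M actual cost.
month = evm_metrics(bcws=100.0, bcwp=95.0, acwp=105.0)
print(round(month["cpi"], 3), round(month["spi"], 3))
```

Indices below 1.0 in both dimensions are the kind of declining monthly performance a reviewer would flag for further analysis.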
We interviewed program and project officials at NASA headquarters and Goddard Space Flight Center to obtain additional information on the status of the project with regard to progress toward baseline commitments. We periodically attended flight program reviews at NASA headquarters where the current status of the program was briefed to NASA headquarters officials and members of the Standing Review Board. We also interviewed JWST project and contractor officials from the Jet Propulsion Laboratory and Northrop Grumman Aerospace Systems to determine the extent to which oversight was being conducted. In addition, we interviewed officials from the Defense Contract Management Agency to obtain information on oversight activities delegated to it by the JWST project. To assess the technological challenges and risks facing the project, we reviewed project monthly status reviews, information from the project’s risk database, as well as briefings and schedule documentation provided by project and contractor officials. These documents included information on the project’s technological challenges and risks, mitigation plans, and timelines for addressing these risks and challenges. We also interviewed program and project officials for each major observatory system to clarify information and to obtain additional information on system and subsystem level risks and technological challenges for each subsystem. Further, we interviewed officials from the Jet Propulsion Laboratory and Northrop Grumman Aerospace Systems concerning risks and challenges on the subsystems, instruments, or components they were developing. We reviewed GAO’s prior work on NASA Large Scale Acquisitions; the Goddard Space Flight Center Rules for the Design, Development, Verification, and Operation of Flight Systems technical standards; and NASA’s Space Flight Program and Project Management Requirements and Systems Engineering Processes and Requirements policy documents. 
We compared Goddard standards with data reported by the project to assess the extent to which the JWST project followed NASA policies. To assess the extent to which cost risks exist that may threaten the project's ability to execute as planned, we reviewed project and contractor documentation and interviewed project and contractor officials. We reviewed project monthly status reviews and NASA headquarters flight program reviews, contractor information on the potential cost to address identified risks, and project analysis of budget-related risks, including the project's cost reserve posture and the impact of sequestration. We interviewed program and project officials at NASA headquarters and Goddard Space Flight Center, as well as officials from the Jet Propulsion Laboratory and Northrop Grumman Aerospace Systems, to obtain information on risks to maintaining cost targets and plans to mitigate those risks. To assess the extent to which the JWST project schedule is reliable, we used GAO's Schedule Assessment Guide to assess characteristics of three selected subordinate schedules, the Integrated Science Instrument Module (ISIM), Optical Telescope Element (OTE), and cryocooler schedules, which are used as inputs to the integrated master schedule. We selected these three schedules because they reflected a significant portion of the work being conducted within NASA (ISIM), at the contractor level (OTE), and at the subcontractor level (cryocooler) during the course of our work. We also analyzed schedule metrics as part of that analysis to highlight potential areas of strength and weakness against each of our 4 characteristics of a reliable schedule.
In order to assess each schedule against the 4 characteristics and their accompanying 10 best practices, we traced and verified the underlying support, determined whether the program office or contractor provided sufficient evidence to satisfy each criterion, and assigned a score indicating whether the practices were not met, minimally met, partially met, substantially met, or fully met. By examining the schedules against our guidance, we conducted a reliability assessment of each schedule and incorporated our findings on reliability limitations into the analysis of each subordinate schedule. We also interviewed project and contractor management and schedulers before our analysis was completed and analyzed project and contractor documentation concerning scheduling policies and practices. After conducting our initial analysis, we shared it with the relevant parties to provide an opportunity for them to comment and to identify reasons for observed shortfalls in schedule management best practices. We incorporated their comments and any additional information they provided into the assessments to finalize the scores for each characteristic and best practice. We were also able to use the results for the three subordinate schedules to provide insight into the health of the integrated master schedule, since the strengths and weaknesses of the subordinate schedules would transfer to the master schedule. We determined that the schedules were sufficiently reliable for our reporting purposes, and our report notes the instances where reliability concerns affect the quality of the schedules. Our work was performed primarily at NASA headquarters in Washington, D.C., and Goddard Space Flight Center in Greenbelt, Maryland. We also visited the Jet Propulsion Laboratory in Pasadena, California; Northrop Grumman Aerospace Systems in Redondo Beach, California; and the Defense Contract Management Agency in Redondo Beach, California.
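The five-point rating scale described above can be sketched as a simple roll-up from practice-level ratings to a characteristic-level score. The numeric mapping and nearest-label averaging rule below are illustrative assumptions, not GAO's documented scoring method.

```python
# Hypothetical roll-up of per-practice ratings to a characteristic score,
# mirroring the five-point scale (not met ... fully met). The numeric
# mapping and averaging rule are illustrative, not GAO's actual method.

RATING_SCALE = {
    "not met": 1,
    "minimally met": 2,
    "partially met": 3,
    "substantially met": 4,
    "fully met": 5,
}

def characteristic_score(practice_ratings):
    """Average the numeric values of the practice ratings for one
    characteristic and map the result back to the nearest scale label."""
    values = [RATING_SCALE[r] for r in practice_ratings]
    avg = sum(values) / len(values)
    # Pick the label whose numeric value is closest to the average.
    return min(RATING_SCALE, key=lambda label: abs(RATING_SCALE[label] - avg))

print(characteristic_score(["fully met", "substantially met", "partially met"]))
```

A roll-up like this makes the assessment reproducible: two reviewers who agree on the practice-level evidence will arrive at the same characteristic score.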
We conducted this performance audit from February 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Shelby S. Oakley, Assistant Director; Karen Richey, Assistant Director; Patrick Breiding; Richard A. Cederholm; Laura Greifner; Keith Hornbacher; David T. Hulett; Jason Lee; Sylvia Schatz; Ryan Stott; and Roxanna T. Sun made key contributions to this report.
JWST is one of NASA's most complex and costly science projects. Effective execution of the project is critical given the potential effect further cost increases could have on NASA's science portfolio. The project was rebaselined in 2011 with a 78 percent increase in its life-cycle cost estimate, to $8.8 billion, and a launch delay of 52 months, to October 2018. GAO has made a number of prior recommendations, including that the project perform an updated cost and schedule risk analysis to improve cost estimates. GAO was mandated to assess the program annually and report on its progress. This is the second such report. This report assesses the (1) extent to which the JWST project is meeting its cost and schedule commitments and maintaining oversight, (2) current major technological challenges facing the project, (3) extent to which cost risks exist that may threaten the project's ability to execute as planned, and (4) extent to which the JWST project schedule is reliable based on scheduling best practices. GAO reviewed relevant NASA and contractor documents, interviewed NASA and contractor officials, and compared the project schedule with best practices criteria. The James Webb Space Telescope (JWST) project is generally executing to its September 2011 revised cost and schedule baseline; however, several challenges remain that could affect continued progress. The National Aeronautics and Space Administration (NASA) has requested funding that is in line with the rebaseline, and the project is maintaining 14 months of schedule reserve prior to its launch date. Performance data from the prime contractor indicate that work is generally being accomplished on schedule and at the expected cost; however, monthly performance declined in fiscal year 2013. Project officials have maintained and enhanced project oversight by, for example, continuing quarterly NASA and contractor management meetings and instituting a tool to update cost estimates for internal efforts.
Program officials, however, are not planning to perform an updated integrated cost/schedule risk analysis, as GAO recommended in 2012, stating that the project performs monthly integrated risk analyses they believe are adequate. Updating the more comprehensive analysis with a more refined schedule and current risks, however, would provide management and stakeholders with better information to gauge progress. The JWST project has made progress addressing some technical challenges that GAO reported in 2012, such as inadequate spacecraft mass margin, but others have persisted, causing subsystem development delays and cost increases. For example, the development and delivery schedule for the cryocooler, which cools one of the instruments, was deemed unattainable by the subcontractor due to technical issues, and its contract was modified in August 2013 for the second time in less than 2 years, leading to a cumulative 120 percent increase in contract costs. While recent modifications have been made, execution of the cryocooler effort remains a concern given that technical performance and schedule issues persist. Overall, the project is maintaining a significant amount of cost reserves; however, low levels of near-term cost reserves could limit its ability to continue to meet future cost and schedule commitments. Development challenges have required the project to allocate a significant portion of its cost reserves in fiscal year 2014. Adequate cost reserves for the prime contractor are also a concern in fiscal years 2014 and 2015, given the rate at which these reserves are being used. Limited reserves could require work to be extended or work to address project risks to be deferred, a contributing factor to the project's prior performance issues. Potential sequestration and funding challenges on other major NASA projects could limit the project's ability to address near-term challenges.
GAO's analysis of three subsystem schedules determined that the reliability of the project's integrated master schedule, which depends on the reliability of JWST's subsystem schedules, is questionable. GAO's analysis found that the Optical Telescope Element (OTE) schedule was unreliable because it could not adequately identify a critical path, the sequence of activities that determines the earliest completion date, or minimum duration, for completing all project activities and that informs officials of the effects a slip in one activity may have on other activities. In addition, reliable schedule risk analyses of the OTE, cryocooler, and Integrated Science Instrument Module schedules were not performed. A schedule risk analysis is a best practice that gives confidence that estimates are credible based on known risks, so that the schedule can be relied upon to track progress. Congress should consider directing NASA to perform an updated integrated cost/schedule risk analysis. GAO recommends that NASA address issues related to low cost reserves and perform schedule risk analyses on the three subsystem schedules GAO reviewed. NASA concurred with GAO's recommendations.
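The critical-path concept this finding relies on can be sketched as a simple forward pass over an activity network: the longest chain of dependent activities sets the minimum project duration, and a slip on that chain slips the whole project. The activity names and durations below are invented for illustration and bear no relation to JWST's actual schedule.

```python
# Minimal critical-path sketch for an activity network. Activities and
# durations are invented; this is not a real scheduling tool.

def critical_path(durations, predecessors):
    """Forward pass over a dependency graph: the earliest finish of each
    activity is its duration plus the latest earliest finish among its
    predecessors. Assumes activities are listed in topological order."""
    finish, on_path = {}, {}
    for act in durations:
        preds = predecessors.get(act, [])
        start = max((finish[p] for p in preds), default=0)
        finish[act] = start + durations[act]
        # Remember which predecessor chain drove the start date.
        driver = max(preds, key=lambda p: finish[p], default=None)
        on_path[act] = (on_path[driver] if driver else []) + [act]
    last = max(finish, key=finish.get)
    return on_path[last], finish[last]

durations = {"mirrors": 6, "instruments": 9, "integration": 4, "test": 5}
predecessors = {"integration": ["mirrors", "instruments"], "test": ["integration"]}
path, total = critical_path(durations, predecessors)
print(path, total)  # ['instruments', 'integration', 'test'] 18
```

Here "mirrors" has 3 months of float, while any slip in "instruments", "integration", or "test" delays completion; a schedule that cannot identify this chain cannot tell managers which slips matter.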
In 2008, we reported that FPS does not use a comprehensive risk management approach that links threats and vulnerabilities to resource requirements. Without a risk management approach that identifies threats and vulnerabilities and the resources required to achieve FPS’s security goals, there is only limited assurance that programs will be prioritized and resources will be allocated to address existing and potential security threats in an efficient and effective manner. FPS uses a facility-by-facility approach to risk management. Under this approach, FPS assumes that all facilities with the same security level have the same risk regardless of their location. For example, a level IV facility in a metropolitan area is generally treated the same as one in a rural area. We also reported in 2008 that FPS’s approach does not include a process for examining comprehensive risk across the entire portfolio of GSA’s facilities. Both our and DHS’s risk management frameworks include processes for assessing comprehensive risk across assets in order to prioritize countermeasures based on the overall needs of the system. FPS’s building-by-building approach, however, prevents it from comprehensively identifying and prioritizing vulnerabilities and making countermeasure recommendations at a strategic level. Over the years we have advocated the use of a risk management approach that links threats and vulnerabilities to resource requirements and allocation. A risk management approach entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, allocating resources based on risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. 
Risk assessment, an important element of a risk management approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate their effects. In response to our recommendations in this area, FPS began developing a new system referred to as the Risk Assessment Management Program (RAMP). This system is designed to be a central database for capturing and managing facility security information, including the risks posed to federal facilities and the countermeasures in place to mitigate them. FPS also anticipates that RAMP will allow inspectors to obtain information from one electronic source, generate reports automatically, enable FPS to track selected countermeasures throughout their life cycle, address some concerns about the subjectivity inherent in facility security assessments (FSAs), and reduce the amount of time inspectors and managers spend on administrative work. FPS designed RAMP to produce risk assessments that are compliant with Interagency Security Committee (ISC) standards, which, among other things, require risk assessment methodologies to be credible, reproducible, and defensible and require FSAs to be conducted every 3 to 5 years. According to FPS, RAMP is also compatible with the risk management framework set forth in the National Infrastructure Protection Plan (NIPP) and consistent with the business processes outlined in the memorandum of agreement with GSA. According to FPS, RAMP will support all components of the FSA process, including gathering and reviewing building information; conducting and recording interviews; assessing threats, vulnerabilities, and consequences to develop a detailed risk profile; recommending appropriate countermeasures; and producing FSA reports.
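The threat, vulnerability, and consequence profile described above is often combined into a single comparable score in security risk assessment. The multiplicative model and 1-5 scales below are common practice but are assumptions for illustration, not RAMP's actual methodology.

```python
# Illustrative facility risk score combining threat, vulnerability, and
# consequence ratings. The multiplicative model and 1-5 scales are common
# in security risk assessment but are assumed here, not RAMP's formula.

def facility_risk(threat, vulnerability, consequence):
    """Multiply 1-5 ratings; higher scores mean higher-priority facilities."""
    for rating in (threat, vulnerability, consequence):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
    return threat * vulnerability * consequence

# Rank a small portfolio so countermeasures go to the riskiest facility first.
portfolio = {
    "level IV, metro": facility_risk(4, 3, 5),
    "level IV, rural": facility_risk(2, 3, 5),
}
print(sorted(portfolio, key=portfolio.get, reverse=True))
```

Under a model like this, two facilities of the same security level can carry very different scores once location-driven threat ratings differ, which is exactly the distinction a facility-by-facility approach that treats all same-level facilities alike fails to capture.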
FPS also plans to use RAMP to track and analyze certain workforce data, contract guard program data, and other performance data, such as the types and definitions of incidents and incident response times. Currently, FPS is implementing the first phase of RAMP and plans to have it fully implemented by the end of 2011. We are reviewing the design and implementation of RAMP and will provide Congress with a final report next year. We reported in July 2009 and April 2010 that FPS faces challenges in ensuring that many of the 15,000 contract security guards that FPS relies on to help protect federal facilities have the required training and certification to be deployed at federal facilities. We also identified substantial security vulnerabilities related to FPS's guard program. In April and May 2009, our investigators passed undetected through security checkpoints monitored by FPS's guards at 10 level IV facilities in four major metropolitan areas, each time carrying the components for an improvised explosive device concealed on their persons. FPS also took a number of immediate actions to address concerns raised about contract guard management in our July 2009 contract guard report. For example, since July 2009, FPS has increased its penetration tests in some regions and the number of guard inspections it conducts at federal facilities in some metropolitan areas. FPS currently requires its inspectors to complete two guard inspections a week at level IV facilities. Prior to this new requirement, FPS did not have a national requirement for guard inspections, and each region we visited had requirements ranging from no inspections to five inspections per month per FPS inspector. FPS is also in the process of providing additional X-ray and magnetometer training in response to our July 2009 testimony. FPS anticipates that guards will be fully trained by the end of 2010.
Under FPS’s revised training program, inspectors must receive 30 hours of X-ray and magnetometer training, and guards must receive 16 hours. Prior to this revision, guards needed 8 hours of training on X-ray and magnetometer machines. However, despite these changes, we remain concerned about FPS’s oversight of the contract guard program and made recommendations for additional improvements in our April 2010 report. For example, we reported that despite FPS’s recent actions, guards were continuing to neglect or inadequately perform their assigned responsibilities. We also remained concerned that FPS had not acted diligently to enforce the terms of its guard contracts and take enforcement action when noncompliance occurred. Thus, we recommended, among other things, that FPS identify other approaches that would be cost-beneficial for protecting federal facilities. FPS agreed with this recommendation but has not yet implemented it. We have reported on several issues related to locating FPS within DHS’s Immigration and Customs Enforcement (ICE). For example, we reported in 2008 that some of FPS’s operational and funding challenges stemmed from its being part of ICE. In October 2009, to enable FPS to better focus on its primary facility protection mission, the Secretary of Homeland Security transferred FPS from ICE to the National Protection and Programs Directorate (NPPD). According to DHS, transferring FPS to NPPD will enhance oversight and efficiency while maximizing the department’s overall effectiveness in protecting federal buildings across the country. We are reviewing the transition of FPS into NPPD and will provide Congress with a final report in 2011. FPS has yet to fully ensure that its recent move to an inspector-based workforce does not hinder its ability to protect federal facilities. In 2007, FPS essentially eliminated its police officer position and moved to an all-inspector-based workforce.
FPS also decided to place more emphasis on physical security activities, such as completing FSAs, and less emphasis on law enforcement activities, such as proactive patrol. We reported in 2008 that these changes may have contributed to diminished security and increased inspectors’ workload. Specifically, we found that when FPS does not provide proactive patrol at some federal facilities, there is an increased potential for illegal entry and other criminal activity. For example, in one city we visited, a deceased individual had been found in a vacant GSA facility that was not regularly patrolled by FPS. Under its inspector-based workforce approach, FPS will rely more on local police departments to handle crime and protection issues at federal facilities. However, at about 400 federal facilities across the United States, the federal government has exclusive jurisdiction, and it is unclear whether local police have the authority to respond to incidents inside those facilities. Additionally, FPS has not entered into any memorandums of agreement for increased law enforcement assistance at federal facilities. In most of the cities we visited, local law enforcement officials said they would not enter into any agreements with FPS that involve increased responsibility for protecting federal facilities because of liability concerns, existing staff shortages, and the need to respond to crime in their own cities, which would make it difficult to divert resources from their primary mission. For example, local law enforcement officials from one location we visited said they are significantly understaffed and overburdened with their current mission and would not be able to take responsibility for protecting federal facilities. We believe it is important for FPS to ensure that its decision to move to an inspector-based workforce does not hamper its ability to protect federal facilities.
We recommended in 2008 that FPS clarify the roles and responsibilities of local law enforcement agencies in responding to incidents at GSA facilities. While FPS agreed with this recommendation, it has decided not to pursue agreements with local law enforcement officials, in part because of reluctance on the part of those officials to sign such agreements. In addition, FPS believes that the agreements are not necessary because 96 percent of the properties in its inventory are listed as concurrent jurisdiction facilities, where both the federal and state governments have jurisdiction over the property. Nevertheless, we continue to believe that these agreements would, among other things, clarify the roles and responsibilities of local law enforcement agencies when responding to crime or other incidents. While FPS has recently increased the size of its workforce as mandated by Congress, we reported in 2009 that FPS has operated without a human capital plan. We recommended that FPS develop a human capital plan to guide its current and future workforce planning efforts. We have identified human capital management as a high-risk issue throughout the federal government, including within DHS. Without a long-term strategy for managing its current and future workforce needs, including effective processes for hiring, training, and staff development, FPS will be challenged to align its personnel with its programmatic goals. FPS concurred with this recommendation and has drafted a workforce analysis plan but has not yet fully developed or implemented a human capital plan. FPS’s primary means of funding its operations, the basic security fee charged to some federal agencies, does not account for a building’s level of risk, the level of service provided, or the cost of providing those services. We reported in 2008 that this issue raises questions about whether some federal agencies are being overcharged by FPS.
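The mismatch between flat-fee collections and risk-driven costs can be seen in a toy calculation: collections track floor area, while protection costs track risk level and services delivered. The fee rate and facility figures below are invented, not FPS's actual numbers.

```python
# Toy illustration of why a flat per-square-foot security fee can over- or
# undercharge tenants. The rate and facility figures are invented.

FEE_PER_SQFT = 0.66  # assumed flat basic security fee, dollars per square foot

def collections(sqft):
    """Revenue collected from a tenant under a flat area-based fee."""
    return sqft * FEE_PER_SQFT

# Two same-size facilities with very different actual protection costs.
facilities = [
    {"name": "low-risk office", "sqft": 100_000, "actual_cost": 40_000},
    {"name": "high-risk courthouse", "sqft": 100_000, "actual_cost": 120_000},
]
for f in facilities:
    fee = collections(f["sqft"])
    print(f["name"], "fee:", round(fee), "cost:", f["actual_cost"],
          "surplus:", round(fee - f["actual_cost"]))
```

Identical fees for identically sized buildings mean the low-risk tenant cross-subsidizes the high-risk one, which is the overcharging concern noted above; it also explains why total collections can fall short when the portfolio skews toward higher-cost facilities.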
FPS also does not have a detailed understanding of its operational costs, including accurate information about the cost of providing its security services at federal facilities with different risk levels. Without this type of information, FPS has difficulty justifying the rate of the basic security fee to its customers. We have found that by having accurate cost information, an organization can demonstrate its cost-effectiveness and productivity to stakeholders, link levels of performance with budget expenditures, provide baseline and trend data for stakeholders to compare performance, and provide a basis for focusing the organization’s efforts and resources on improving its performance. In addition, FPS’s fee-based funding system has not always generated sufficient revenue to cover its operational costs. In 2007, we reported that FPS’s collections fell short of covering its projected operational costs and that the steps it took to address the projected shortfalls reduced staff morale, increased attrition rates, and diminished security at some GSA facilities. FPS has yet to evaluate whether its fee-based system or an alternative funding mechanism is most appropriate for funding the agency, as we recommended in our 2008 report. FPS agreed with our recommendation and has taken some action, including developing and implementing an activity-based cost framework. We are assessing FPS’s efforts in this area as part of our ongoing review of FPS’s fee-based structure and will provide Congress with a final report in 2011. We have reported that FPS is limited in its ability to assess the effectiveness of its efforts to protect federal facilities. To determine how well it is accomplishing its mission to protect federal facilities, FPS has identified some output measures.
These measures include determining whether security countermeasures have been deployed and are fully operational, the amount of time it takes to respond to an incident, and the percentage of FSAs completed on time. While output measures are helpful, outcome measures are also important because they can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at federal facilities, or help identify the security gaps that remain at federal facilities and determine what action may be needed to address them. In addition, FPS does not have a reliable data management system that would allow it to accurately track these measures or other important measures, such as the number of crimes and other incidents occurring at GSA facilities. Without such a system, it is difficult for FPS to evaluate and improve the effectiveness of its efforts to protect federal employees and facilities, allocate its limited resources, or make informed risk management decisions. For example, weaknesses in one of FPS’s countermeasure tracking systems make it difficult to accurately track the implementation status of recommended countermeasures such as security cameras and X-ray machines. Without this ability, FPS has difficulty determining whether it has mitigated the risk of federal facilities to crime or a terrorist attack. FPS concurred with our recommendations and states that its efforts to address them will be completed in 2012, when its automated information systems are fully implemented. FPS’s ability to protect federal facilities under the control or custody of GSA is further complicated by the facility security committee (FSC) structure. The Department of Justice’s 1995 Vulnerability Assessment of Federal Facilities guidelines directed GSA to establish an FSC in each federal facility under its control. FSCs have experienced several issues that may have increased the risk at some federal facilities.
For example, FSCs have operated since 1995 without guidelines, policies, or procedures that outline how they should operate, make decisions, or establish accountability. This results in ad hoc security that undermines effective protection of individual facilities as well as the entire facility portfolio. Each FSC consists of a representative from each of the tenant agencies in the facility and is responsible for addressing security issues at its respective facility and approving the implementation of security countermeasures recommended by FPS. After completing its FSAs, FPS makes recommendations to GSA and tenant agencies for building security countermeasures. For example, tenant agencies decide whether to fund countermeasures for security equipment, and FPS is responsible for acquiring, installing, and maintaining approved security equipment. However, we reported in November 2009 that the tenant agency representatives generally do not have any security knowledge or experience but are expected to make security decisions for their respective agencies. We also reported that some FSC tenant agency representatives do not have the authority to commit their respective organizations to fund security countermeasures. Thus, when funding for security countermeasures is needed, each federal tenant agency representative who does not have funding authority must obtain approval from his or her headquarters office. According to some FSC members, in some instances funding for security countermeasures is not available because the request for funding is generally made after the budget is formulated. In addition, while FPS, GSA, and tenant agencies are each responsible for some aspects of protecting federal facilities, it is unclear who is the final arbiter or accountable for final decisions. We reported in November 2009 that the FSC structure may not contribute to effective protection of federal facilities for several reasons.
Some FSC members may not have the security expertise needed to make risk-based decisions or may find the associated costs prohibitive. Tenant agencies may also lack a complete understanding of why recommended countermeasures are necessary because they do not receive an adequate amount of information from FPS. Moreover, we found some instances in 2008 and 2009 where the FSC structure contributed to increased risk at some federal facilities. For example, an FPS official in a major metropolitan area stated that over the last 4 years inspectors have recommended 24-hour coverage at one high-risk facility located in a high-crime area multiple times; however, the FSC was not able to obtain approval from all its members. In addition, several FPS inspectors stated that their regional managers have instructed them not to recommend security countermeasures in FSAs if FPS would be responsible for funding the measures because there is not sufficient funding in regional budgets to purchase and maintain the security equipment. Moreover, at a different location, members of an FSC told us that they met as needed, although even when they hold meetings, one of the main tenant agencies typically does not participate. GSA officials commented that this tenant adheres to its agency’s building security protocols and does not necessarily follow GSA’s tenant policies and procedures, which GSA thinks creates security risks for the entire building. ISC recently began to develop guidance for FSC operations, which may address some of these issues. The committee, however, has yet to announce an anticipated date for issuance of this guidance. In response to our many recommendations, FPS has a number of ongoing improvements that, once fully implemented, should enhance its ability to protect the over 1 million federal government employees and members of the public who visit federal facilities each year. 
In addition, FSCs have a significant role in ensuring the effective protection of federal facilities; however, they face a number of issues in carrying out their security responsibilities. For example, they have operated without any procedures since their creation in 1995, and efforts to develop guidance are incomplete. Without specific guidance or procedures, FSCs have operated in an ad hoc manner, and there is a lack of assurance that federal facilities under the control and custody of GSA are effectively protected by FPS. Moreover, no actions have been taken on these issues since we identified them in our November 2009 report. As such, these weaknesses continue to result in ad hoc security and increased risk at some federal facilities. Therefore, we are making a recommendation for the Secretary of DHS to address this matter. GAO recommends that the Secretary of DHS direct the Under Secretary of NPPD and the Director of FPS to work in consultation with GSA and ISC to develop and implement procedures that, among other things, outline the FSCs’ organizational structure, operations, decision-making authority, and accountability. We provided a draft of this report to DHS for review and comment. DHS concurred with the recommendation in this report. Regarding the status of our recommendations listed in appendix I, FPS commented that it is actively pursuing initiatives and implementing measures to address the nine recommendations that we reported as not implemented. We believe our characterization of FPS’s efforts to address our recommendations reflects the data provided by FPS. We are also concerned that the steps FPS described in its documents are not comprehensive enough to address the recommendations that we reported as not implemented. 
For example, regarding our recommendation to identify other approaches and options that would be most beneficial and financially feasible for protecting federal facilities, FPS states that it most recently coordinated with DHS’s Science and Technology Directorate to better define requirements for the next generation of security technology. However, we continue to believe that, given the challenges FPS faces with managing its contract guard program, among other things, FPS needs to undertake a comprehensive review of how it protects federal facilities. FPS has not provided us with this type of analysis or information. We are also concerned about the reliability of the preliminary data FPS used to evaluate whether its fee-based system or an alternative funding mechanism is appropriate to fund the agency. We are currently reviewing the reliability of FPS’s Activity Based Costing framework and will reassess FPS’s efforts to address this recommendation at the end of our review. DHS’s comments are presented in appendix II. DHS also provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contact named above, Tammy Conquest, Assistant Director; Jennifer Clayborne; Delwen Jones; and Susan Michal-Smith made key contributions to this report.
|
To accomplish its mission of protecting about 9,000 federal facilities, the Federal Protective Service (FPS) currently has a budget of about $1 billion, about 1,225 full-time employees, and about 15,000 contract security guards. However, protecting federal facilities and their occupants from a potential terrorist attack or other acts of violence remains a daunting challenge for the Department of Homeland Security's (DHS) Federal Protective Service. GAO has issued numerous reports on FPS's efforts to protect the General Services Administration's (GSA) facilities. This report (1) recaps the major challenges we reported that FPS faces in protecting federal facilities and discusses FPS's efforts to address them and (2) identifies an additional challenge that FPS faces related to the facility security committees (FSC), which are responsible for addressing security issues at federal facilities. This report is based primarily on our previous work and recent FPS interviews. Since 2007, we have reported that FPS faces significant challenges with protecting federal facilities, and in response FPS has recently started to take steps to address some of them. In 2008, we reported that FPS does not use a risk management approach that links threats and vulnerabilities to resource requirements. Without a risk management approach that identifies threats and vulnerabilities and the resources required to achieve FPS's security goals, there is limited assurance that programs will be prioritized and resources will be allocated to address existing and potential security threats in an efficient and effective manner. FPS recently began implementing a new system referred to as the Risk Assessment Management Program (RAMP). This system is designed to be a central database for capturing and managing facility security information, including the risks posed to federal facilities and the countermeasures that are in place to mitigate risk. 
FPS expects that RAMP will enhance its approach to assessing risk, managing human capital, and measuring performance. Our July 2009 report also identified a number of challenges FPS faces in managing its contract guard program, including ensuring that the 15,000 guards who are responsible for helping to protect federal facilities have the required training and certification to be deployed at a federal facility. In response to our report, FPS took a number of immediate actions with respect to contract guard management. For example, FPS has increased the number of guard inspections it conducts at federal facilities in some metropolitan areas and revised its guard training. We have not reviewed whether these actions are sufficient to fulfill our recommendations. Another area of continuing concern is that FPS continues to operate without a human capital plan and does not have an accurate estimate of its current and future workforce needs. In our July 2009 report, we recommended that FPS develop a human capital plan to guide its current and future workforce planning efforts. While FPS agreed with this recommendation, it has not yet fully developed or implemented a human capital plan. As we reported in 2009, FPS's ability to protect GSA facilities is further complicated by the FSC structure. Each FSC includes FPS, GSA, and a tenant agency representative and is responsible for addressing security issues at its respective facility and approving the funding and implementation of security countermeasures recommended by FPS. However, there are several weaknesses with the FSCs. First, FSCs have operated since 1995 without procedures that outline how they should operate or make decisions, or that establish accountability. Second, the tenant agency representatives to the FSC generally do not have any security knowledge or experience but are expected to make security decisions for their respective agencies. 
Third, many of the FSC tenant agency representatives also do not have the authority to commit their respective organizations to fund security countermeasures. No actions have been taken on these issues since our 2009 report, and thus these weaknesses continue to result in ad hoc security and increased risk at some federal facilities. GAO recommends that the Secretary of DHS direct the Director of FPS to work in consultation with other representatives of the FSC to develop and implement procedures that, among other things, outline the committees' organizational structure, operations, and accountability. DHS concurred with GAO's recommendation.
|
Conducting research is one of VA’s core missions. VA researchers have been involved in a variety of important advances in medical research, including development of the cardiac pacemaker, kidney transplant technology, prosthetic devices, and drug treatments for high blood pressure and schizophrenia. For fiscal year 2000, Congress appropriated $321 million for VA’s research programs, which support a wide range of human, animal, and basic science studies. VA uses a competitive funding process in which its Office of Research and Development (ORD) allocates about $296 million of these funds to VA researchers, with awards based on scientific merit and potential contribution to knowledge of issues of particular concern to VA. VA allocates most of the remainder to indirect costs of research, which includes support for the human subjects protection system. Besides the appropriation for research, VA allocates funds from its medical care appropriation to support the research infrastructure at medical centers, such as laboratory facilities and investigator salaries. In fiscal year 2000, this allocation amounted to $343 million. VA researchers receive additional grants and contracts from other federal agencies such as the National Institutes of Health (NIH), research foundations, and private industry sponsors, including pharmaceutical companies. In fiscal year 1999, these additional funds amounted to approximately $481 million. Nonprofit research foundations linked to VA medical centers control some of these non-VA research funds. In fiscal year 2000, biomedical or behavioral research involving human subjects is being conducted at about 70 percent of VA medical centers. VA is responsible for ensuring that all human research it conducts or supports meets the requirements of VA regulations, regardless of whether that research is funded by VA, the subjects are veterans, or the studies are conducted on VA grounds. 
Responsibility for administration and oversight of the research program has rested primarily with ORD. Recently, VA created the Office of Research Compliance and Assurance (ORCA), which has been charged with advising the Under Secretary for Health on all matters affecting the integrity of research protections for humans and animals, promoting the ethical conduct of research, and investigating allegations of research improprieties. Some VA research is also subject to oversight by two HHS components. The Food and Drug Administration (FDA) is responsible for protecting the rights of human subjects enrolled in research with products it regulates: drugs, medical devices, biologics, foods, and cosmetics. Research that involves human subjects and is funded by HHS is subject to oversight by its Office for Human Research Protections (OHRP). HHS requires institutions conducting human research with HHS funds to file a document with OHRP that indicates a commitment to comply with federal regulations. This document, called an assurance, may cover a single study (a single project assurance), or it may allow the institution to conduct multiple studies (a multiple project assurance). When an institution files a multiple project assurance with OHRP, all federally funded research involving human subjects at that institution must comply with HHS regulations. Both FDA and OHRP have the authority to monitor those studies conducted under their jurisdiction, and each can take action against investigators, IRBs, or institutions that fail to comply with applicable regulations. Research with human subjects conducted at VA facilities is governed by regulations designed to protect subjects’ rights and welfare. 
These regulations establish minimum standards for the conduct and review of research to ensure that research involving human subjects is conducted in accordance with the three ethical principles outlined by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. First, the principle of respect for persons requires acknowledgement of individual autonomy and, conversely, the need to protect those with diminished autonomy. In practice, this principle requires that subjects give informed consent to participate in research; that is, they must be given sufficient information about a study, including its purpose, procedures, and risks, to decide whether to participate. They must also understand this information, and their consent must be voluntary. Second, the principle of beneficence requires that the expected benefits of research to the individual or to society outweigh its anticipated risks. Third, the principle of justice requires fair subject selection procedures, so that both the benefits and the burdens of research are distributed across a number of individuals in a just manner. In 1981, in response to the National Commission, both HHS and FDA promulgated revised regulations for the protection of human subjects. Seventeen federal departments and agencies, including HHS and VA, have adopted the core of the HHS regulations. FDA’s regulations are slightly different from those adopted by HHS and VA. To safeguard the rights of subjects and promote ethical research, these federal regulations create a system in which the responsibility for the protection of human subjects is assigned to three groups. Investigators are responsible for conducting their research in accordance with applicable federal regulations and for ensuring that legally effective consent is obtained from each subject or his or her legally authorized representative. 
Institutions are responsible for establishing oversight mechanisms for research, including establishing local committees known as institutional review boards (IRB), which are responsible for reviewing research proposals before studies are initiated and after they are under way to help ensure that research is conducted in accordance with the three principles described above. Agencies, including VA, are responsible for ensuring that their IRBs comply with applicable federal regulations and that they have sufficient space and staff to accomplish their obligations. VA requires each of its medical centers that engages in research with human subjects to establish its own IRB or secure the services of an IRB at an affiliated university. As of August 2000, approximately 40 percent of the medical centers conducting research with human subjects relied on an IRB at an affiliated university. The IRB sends its recommendations to the VA medical center’s research and development committee, which is responsible for maintaining standards of scientific quality, laboratory safety, and the safety of human and animal subjects. The research and development committee is charged with reviewing each study’s budget; assessing the availability of needed space, personnel, equipment, and supplies; and determining the effect of the planned research on the investigator’s other responsibilities, including the provision of clinical services. The committee can disapprove a study; however, VA regulations prevent the research and development committee (or any other institutional official or body) from overturning an IRB decision to disapprove a study. A VA investigator who wants to conduct research with human subjects must develop a research plan (called a protocol), supporting documents, and a consent form. 
The consent form is designed to provide potential subjects with sufficient information about the study, including its procedures, risks, and benefits, to allow the subject to make an informed decision about whether to participate in the study (see fig. 1). The investigator then submits these materials for review. The study is not to be initiated until both the IRB and the research and development committee have approved it, and these committees may insist on changes to the protocol or consent form. Once approval has been given, VA regulations prohibit any unapproved changes to the study’s procedures, unless doing so is absolutely necessary to ensure the safety of a subject. If an investigator wants to alter some aspect of the study, then the IRB must review and approve an amendment or modification to the protocol. In a process known as continuing review, each study is to be re-reviewed at least once per year, and more frequently if the degree of risk warrants it. We found variation across medical centers and their affiliated universities in the implementation of VA regulations and policies involving protections for human subjects. At the eight sites we visited, we found noncompliance with VA regulations in four areas: (1) informed consent; (2) IRB review; (3) IRB membership, staff, and space; and (4) IRB documentation. The problems we identified are similar to problems that OHRP noted in letters to universities and hospitals it has found to be out of compliance with federal regulations. As shown in fig. 2, some sites we visited had more problems than did others. Of the sites we visited, those with the most extensive violations of VA regulations relied on VA-run IRBs. We identified fewer problems at the IRBs in our sample that were run by universities. In particular, we observed fewer problems with IRB membership, staff, space, and documentation at university-run IRBs than at VA-run IRBs. 
University-run IRBs were also more likely to conduct thorough and timely continuing reviews than VA-run IRBs. University-run IRBs we visited were not without problems, however. We found that some IRB-approved consent forms at each site omitted required information and some investigators used nonapproved consent forms. We found problems with the content or use of informed consent forms at all of the medical centers we visited. We found that some informed consent documents that had been approved for use by IRBs provided incomplete or unclear information. In addition, we found some studies in which the investigators used nonapproved consent forms when enrolling subjects. We also found one instance in which research was conducted without consent. Informed consent is a primary ethical requirement of research with human subjects and reflects the principle of respect for persons. The ability of competent subjects to make informed decisions about whether to participate in research and the ability of legally authorized representatives to protect those who are unable to provide consent because they are incapacitated are undermined when IRBs fail to ensure that all required information is included in consent forms or when investigators fail to obtain consent using approved procedures. We found that 60 percent of the 138 IRB-approved consent forms that we randomly sampled from lists of active projects provided incomplete or unclear information about required elements of informed consent. (Fig. 3 lists the elements of informed consent required by VA regulations.) Each IRB we visited approved some consent forms that contained incomplete information. 
For example, IRB-approved consent forms did not indicate that blood would be drawn in a study on the effects of exposure; did not mention possible risks of a biopsy in a study designed to test a treatment; did not describe alternative treatment options in a study comparing two drug treatments for schizophrenia; and did not indicate who would have access to data obtained during a study on treatment for cirrhosis of the liver. Of the 84 IRB-approved consent forms we identified that omitted required elements or provided incomplete information, almost half did so for two or more required elements. For example, the consent form for a study of treatments to reduce the recurrence of melanoma did not provide clear information about the duration of the study, nor did it state whom to contact for information about research subjects’ rights. Participants were also told that data would continue to be obtained from their medical records even if they withdrew from the study. Thus, the consent document for this study provided incomplete information about two required elements and appeared to negate the subject’s right to withdraw from the study at any time. Moreover, this consent form might have created undue influence because it inappropriately suggested that the subjects’ own physician endorsed the potential benefits to the subject of participating in this study. Because the participants in this study are randomly assigned to receive either an unproven treatment or no treatment, the physician would have no way of knowing whether participation would benefit the subject. VA regulations allow an IRB to approve a consent procedure that alters or omits one or more of the required elements of consent if it finds and documents certain conditions. We were unable to find such documentation in the cases we reviewed. Moreover, 37 of the IRB-approved consent forms that omitted or provided incomplete information about a required element were for studies that involved investigational drugs or devices. 
Thus, both VA and FDA regulations had to be met, and when informed consent is required, FDA regulations do not permit IRBs to alter or omit any required elements of informed consent. The information that was omitted most frequently—in about 15 percent of forms—was the person to whom subjects should direct questions about their rights as research subjects. This information, which is required by regulations, is not included in the standard template for informed consent that VA policy requires investigators to use. Sites varied in the number of IRB-approved forms that provided incomplete information and the number of incomplete or absent elements in approved forms. The percent of approved consent forms with incomplete information ranged from 78 to 100 percent of our sample at the four sites with the greatest number of these problems. Moreover, forms from these four sites often provided incomplete descriptions of two or more required elements of informed consent. As many as four elements of informed consent were missing or incomplete in IRB-approved forms at these sites. At the two sites where we found the fewest problems, about three-fourths of our sample of approved consent forms were problem-free, and multiple problems in the same form were rare. In addition to information required by VA regulations, VA policy also requires that informed consent forms indicate that VA will provide free medical treatment for research-related injuries. We found that about 30 percent of the IRB-approved consent documents we reviewed did not include this statement. The absence of this statement varied by site. (These data are not included in fig. 2, which presents noncompliance with VA regulations.) The majority of forms we sampled at two university-run IRBs did not include this information, and one VA-run IRB included it only about half the time. In contrast, the forms at the other university-run IRBs and at the four other VA-run IRBs almost always included it. 
The requirement for informed consent was waived for eight of the projects we reviewed, and in each case, our review indicated that the study qualified for the waiver. According to VA regulations, certain categories of research—for example, studies of existing data that cannot be linked, directly or indirectly, to specific individuals—do not require informed consent or IRB approval. VA regulations also allow for a waiver of informed consent in some research that is not eligible for an exemption from IRB review, provided that the IRB determines that certain conditions apply. Although all the consent forms we obtained from investigators indicated that consent to participate in research had been obtained, we found that investigators did not always obtain consent appropriately. In this review of consent forms, we found 18 studies in which the investigators used nonapproved consent forms when enrolling subjects. We also separately identified one instance in which research was conducted without consent. We asked investigators at each site to show us signed consent forms from a randomly selected sample of their subjects. We examined 540 such consent forms, all of which had the signature of a subject or a surrogate. In addition to determining that investigators were able to produce these signed consent forms, at four sites we also compared these signed forms with consent forms that IRBs had approved for use in these studies. We found that investigators had used nonapproved consent forms with one or more subjects in 18 of the 73 studies we examined. A total of 33 of 292 subjects had signed nonapproved consent forms. The extent of this problem varied by site. We found that one or more subjects had signed a nonapproved form in 12 to 33 percent of the studies we examined at these four sites. Some of the nonapproved forms that were signed by subjects omitted key information that had been included in the IRB-approved version of the consent form. 
For example, the nonapproved form that had been signed by all four subjects enrolled in a study on treatments for lymphoma did not mention that the study would involve multiple bone marrow biopsies, the possible risks of those biopsies, or possible side effects of two drugs used—information that was included in the IRB-approved consent form. We identified one instance in which research procedures were performed without consent in the projects in our sample. In this instance, a patient who had not given consent was subjected to an esophageal biopsy for research purposes. This biopsy, which was not reported to the IRB, occurred in conjunction with a biopsy performed for diagnostic purposes in November 1997. We also found that investigators or their staff had not fully complied with requirements for obtaining consent in three other studies in our sample. In each of these, subjects had consented, and steps were implemented to address the problem. In October 1998, an investigator learned that a subject with schizophrenia did not understand his right to withdraw from research at any time. Upon discovering this, the investigator fired the person who had obtained the subject’s consent, withdrew the subject from the study, and reported the incident to the IRB. In May 1997, FDA discovered that the consent form signed by subjects in a study of an investigational device to facilitate walking among paraplegics had not included all the necessary information about their participation. The problem was reported to the IRB, the consent form was rewritten, and three previously enrolled subjects were given a revised form and a chance to withdraw from participation. In July 1997, an investigator realized that he did not have IRB approval for the protocol and consent form that had been used for 73 subjects—including schizophrenics, their family members, and health care providers—who had completed a questionnaire to assess decision-making. 
The investigator reported the situation to the IRB, which required that subjects be given a revised approved consent form. We found one other problem in subject enrollment procedures used by an investigator, although in this case VA regulations were not involved. One subject who was incapacitated as a result of dementia was enrolled in a noninvasive study of abdominal aneurysms. Although the subject’s surrogate had provided consent, VA policy establishes protections for incapacitated subjects by prohibiting their enrollment in research that can be conducted with competent subjects. We encountered eight other cases in which surrogates enrolled incapacitated subjects in research, but we were unable to determine whether these cases were in accordance with VA’s policy. We found that five of the sites we visited did not implement certain required procedures for IRB review of research. For example, we found that studies at two sites were not reviewed by all necessary IRB members and that four IRBs did not ensure timely or thorough continuing review of ongoing research. We found that two IRBs did not comply with VA regulations that research must be approved during properly convened meetings, either because meetings were held without a quorum or because the IRB chair improperly approved a high-risk study outside an IRB meeting. With the exception of certain categories of research involving minimal risk to subjects, VA regulations require IRBs to review research at convened meetings attended by a quorum, defined as a majority of members that includes at least one member whose primary concerns are in nonscientific areas. These regulations establish criteria for IRB meeting quorums to ensure that decisions about the protection of human participants in research reflect the consideration of diverse perspectives on research, including the views of scientists and nonscientists with a range of experience and expertise. 
These protections are undermined when initial review is not conducted in accordance with these requirements. Four of seven meetings held by one VA-run IRB between January 1998 and August 1999 were held without a quorum. As a result, 17 studies were initiated without legitimate IRB approval, including studies on new drug treatments for unstable coronary symptoms and pneumonia. We examined four to six sets of minutes from IRB meetings held at the other seven IRBs we visited and found that a quorum was present at each. We found one other instance in which requirements for approval of research at convened IRB meetings were violated. A university-run IRB considered a high-risk drug study for cardiac patients and determined that re-review would be necessary after the investigator addressed several concerns. IRB minutes stated that because the drug company sponsoring the research would have rejected their site if a time deadline were not met, the IRB chair approved the study before the IRB reconsidered it. Although there are circumstances under which an IRB chair can approve a study, in all such cases the research must have been found to pose only minimal risk to subjects. In this instance, the IRB had determined that the study posed a high degree of risk. On the other hand, our sample also included 16 other studies that met criteria for approval outside a convened IRB meeting. VA regulations allow such a procedure (called expedited review) for studies that pose only minimal risk to subjects and that fall into one of several categories of research. Under expedited review procedures, the IRB chair, or one or more experienced IRB members designated by the chair, are authorized to approve research. For example, IRB approval was expedited for a study on the effects of a weight loss program in which subjects would attend informational sessions about diet and weight loss and have their weight and health monitored using routine, minimal-risk procedures. 
We found that the IRBs we visited differed in the sufficiency of the written information they asked investigators to provide about human subject protections prior to review. VA regulations identify eight criteria that IRBs must assess before approving research (see fig. 4). Although VA regulations do not specify the information IRBs must review to assess these criteria upon initial review, much of the information can only be provided by investigators. Because offsite study sponsors often prepare the consent forms and protocols used in multisite studies, IRBs must have sufficient information to assess whether the local investigator can properly implement human subject protections. We found that information in IRB files did not always address all the criteria that must be satisfied for an IRB to approve a study. Of the sites we visited, only one university-run IRB routinely requested detailed information from local investigators about each criterion in its application forms. For example, two IRBs did not routinely ask local investigators any questions about risks or about plans for monitoring the safety of subjects. Similarly, IRBs differed in the information they had from investigators about special protections for subjects who are likely to be vulnerable to coercion or undue influence. VA regulations require that IRBs ensure that additional safeguards are in place to protect the rights and welfare of such subjects; however, the regulations do not specify the nature of such safeguards. We analyzed project files for 27 studies designed to address issues involving psychiatric conditions that can be associated with a diminished capacity for decision-making: psychoses, mood disorders, and organic mental disorders such as dementia. We found that the investigator had included information about additional safeguards in applications for IRB approval in only about half of these studies.
For example, we reviewed from two to six files for projects involving potentially vulnerable subjects at each site and found references to additional protections in most of the relevant project files we sampled at four sites. In contrast, at two other locations no such documentation was evident in any of the IRB files we reviewed for projects involving subjects with psychiatric disorders that could affect decision-making. Some sites have implemented procedures that afford special protections for some such subjects. Examples follow. Subjects at one medical center who are recruited for psychiatric research and whose mental illness can affect decision-making are typically tested for their comprehension of central consent issues before enrollment in a study. At another medical center, seriously mentally ill subjects who participate in studies involving a risk that their symptoms might worsen are monitored by a physician who is independent of the research and who is assigned responsibility for deciding whether the subject should remain in the study or be withdrawn. Alzheimer’s researchers at a third site have established research registries for potential subjects, who were still able to give consent, and their caregivers. By enrolling, subjects agree to allow medical information to be entered into a data bank and to be contacted about future studies. By agreeing to be contacted, however, potential subjects have not consented to participate in future studies. Because these potential subjects are recruited for future studies through registries, the risk of undue influence that occurs when physicians recruit their patients is minimized. 
Moreover, rules for these registries limit the number of researchers who may contact each person, ensure that potential subjects are recruited only for studies for which they are in fact eligible, and allow registry managers to conduct follow-up surveys to ensure that members of the registries are satisfied with the way researchers treat them. We found that three VA-run IRBs did not meet VA's regulatory requirement that each study must be re-reviewed at intervals not to exceed 1 year. Regular re-review of a project and associated reports of problems allows an IRB to assess the ratio of risks to benefits on the basis of data obtained since the study began and to ensure that subjects are appropriately informed of those risks and benefits. We examined the dates of continuing review for 73 projects at 6 sites that had received initial approval more than 1 year before our visit. Of these projects, 54 (74 percent) had been reviewed on time within the past year. The median delay for the 19 projects that were not re-reviewed on time was about 1 month. At one VA-run site, only one of the nine projects we reviewed that were more than 1 year old had been re-reviewed on time. At another VA-run site, about half of the necessary continuing reviews from our sample were conducted within 1 year, but delays of up to 14 months occurred in the other half. The three university-run IRBs we visited achieved high rates of timely continuing review. Four VA-run IRBs we visited reviewed insufficient information when conducting continuing review.
OHRP has stated that compliance with regulatory requirements for continuing review entails, at a minimum, IRB review of the study protocol and any amendments; the current consent form; the number of subjects who have been enrolled; and information relevant to risks, including adverse events, unanticipated problems involving risks to the subject or others, withdrawal of subjects from the study, complaints about the study, and a summary of any recent information relevant to risk assessment. Only half of the IRBs we visited required the investigator to submit the most recent version of the consent document or asked about subjects who have withdrawn (or been withdrawn) from the study. All eight IRBs required reports of the number of subjects who had participated and adverse events. IRB staff told us that reports of adverse events are difficult for IRBs to handle. Regulations require investigators to report to the IRB unanticipated problems involving risks to subjects, and IRBs must review adverse events reported by all sites where the study is being conducted. The concerns we heard on our site visits were similar to those described in several recent reports on difficulties that IRBs nationwide face when handling large numbers of adverse event reports in the absence of key information necessary for their interpretation. For example, reports of adverse events from drug studies do not indicate whether the subject who experienced the adverse event had received an experimental drug or a different treatment, such as a placebo. Regulatory bodies such as FDA and OHRP and research sponsors such as the National Cancer Institute have recently argued that adverse event reports from studies involving many subjects are often best handled by special committees called data and safety monitoring boards.
These boards are typically established by research sponsors and include statisticians and other scientists who analyze data collected during the course of a clinical trial to detect risks to subjects. A few of the IRBs we visited were attempting to develop systems to track adverse events. Even when a data and safety monitoring board has been established to analyze adverse event reports associated with a study, it is not required to report its findings to IRBs. In VA these boards, referred to as data monitoring boards, analyze only those adverse events reported in multicenter studies funded by VA through a program called Cooperative Studies. If results indicate that a study protocol or consent form must be modified, reports are released by the coordinating center for that cooperative study. It sends such reports to investigators and to the associate chiefs of staff for research and development at participating medical centers, with instructions to share the information with IRBs. Reports are not submitted to IRBs directly. Similarly, VA’s policy manual does not require that reports from data and safety monitoring boards associated with non-VA-funded research be submitted to its IRBs or medical centers. VA’s policy manual also does not require investigators or IRBs to ascertain whether a data and safety monitoring board has been established for studies in which its investigators participate. IRBs at the eight facilities we visited met certain membership requirements, but two did not ensure that their members had no potential conflicts of interest. We also found problems involving the number of IRB staff or IRB space at five facilities. VA regulations require that IRBs have sufficient administrative staff and space to review research and preserve the confidentiality of files. 
VA regulations for IRB membership include requirements that IRBs have at least five members and must include a scientist, a nonscientist, and at least one person who is not otherwise affiliated with the institution. (Individual members may fulfill more than one criterion.) We checked IRB membership rosters from the eight facilities we visited and found that all met these requirements. In addition, VA regulations state that if the IRB regularly reviews research involving a vulnerable category of subjects, then consideration should be given to including at least one member who has experience working with that group. Each of the eight IRBs we visited included someone from the institution’s psychiatry, psychology, or other mental health department, allowing access to specialized expertise with regard to the potential vulnerabilities of mentally ill subjects. We also found that each of the university-run IRBs we visited had members who were on staff at the affiliated VA medical center. Inclusion of VA staff helps fulfill VA’s regulatory requirement that IRBs have knowledge of the local research institution, including the scope of research activities, types of subjects likely to be involved, and the size and complexity of the institution. Officials at the medical centers we visited that relied on the IRBs of university affiliates reported that the larger academic community of the university offered advantages for IRB membership, including a broader range of expertise and reduced potential for conflicts of interest because IRB members would be less likely to be research colleagues of investigators. In addition, because all VA investigators at these three medical centers also held faculty appointments at the university, investigators did not need to apply for IRB approval from both the university and VA. 
Officials at some of the medical centers that operated their own IRBs reported that the advantages of doing so included maintaining greater control over the research review process and the increased likelihood that the IRB would know particular investigators and veteran subjects. We found that two VA-run IRBs did not ensure that their members had no potential conflicts of interest. VA regulations state that no IRB may have a member participate in an IRB initial or continuing review of any project in which that member has a conflict of interest. Although we found that investigators who were IRB members appropriately abstained or recused themselves from voting on their projects, two IRBs had, as a voting member, the associate chief of staff for research and development for their medical centers. The duties of a VA medical center’s associate chief of staff for research and development include helping local investigators obtain intramural or extramural research funds. As noted by OHRP, such institutional officials thus have a potential conflict of interest in conducting IRB reviews. These two officials told us, however, that they believed their objectivity as IRB members was not compromised by their other responsibilities. Officials at four of the VA-run IRBs told us that they did not have adequate staff to support IRB operations, as required by VA regulations. IRB administrative staff provide crucial services such as reviewing applications for completeness, corresponding with investigators, and maintaining IRB records. In addition, some administrative staff serve on IRBs as experts on regulatory issues. The VA-run IRBs we visited typically had one or two IRB staff members who often had other responsibilities. 
For example, at one of these sites, where a single staff person worked part-time for an IRB that reviews 200 to 300 projects annually, the IRB chair reported that IRB activities, such as suggesting revisions to consent forms, were curtailed due to insufficient staff support. In May 2000, VA headquarters distributed preliminary estimates for the number of administrative IRB staff that a medical center should have. This guidance noted that staffing levels would vary with the breadth and complexity of the research program. ORD officials acknowledged that these benchmarks are a first approximation in an effort to identify appropriate staffing levels. In addition to staff, IRBs must have secure, private areas for the review and discussion of confidential materials. IRBs also need office space for the IRB chair and administrative staff, secure file storage, and computer support. We found that IRB administrative staff at three sites lacked sufficient space to conduct their work or store all IRB documents. For example, we observed IRB file folders stacked loosely on top of file cabinets and on floors at one of these sites. Six of the eight IRBs we visited did not maintain all the records required by VA regulations. Inadequate documentation does not, in itself, place subjects at risk. However, records of actions, deliberations, and procedures can help identify problems and corrective actions. Documentation failures thus impede appropriate monitoring and oversight activities. We found inadequate documentation in IRB files for about 9 percent of the ongoing projects we reviewed. For example, some files failed to include copies of all correspondence regarding IRB actions between the IRB and investigators, or copies of all approved consent forms. VA regulations require IRBs to retain these documents for at least 3 years after a study is terminated. Required documents were missing from one or more IRB files at five of the eight sites we visited.
VA regulations require each facility to maintain written procedures that it will follow for conducting initial and continuing review, reporting IRB findings and actions to investigators and appropriate officials, and determining when special steps are necessary to monitor ongoing projects. Our review indicated wide differences between facilities in the adequacy of these documents. One VA-run facility had written procedures regarding criteria for exemption from IRB review and for use of expedited review procedures that were not in accordance with VA regulations. In addition, one medical center had been cited by FDA in June 1999 for failure to have adequate written procedures. The center agreed to have them in place by August 1999 but did not do so until December 1999. The written procedures available from three other VA-run IRBs did not include required descriptions of procedures for conducting project review, determining when additional monitoring of projects is necessary, or responding to investigator noncompliance. In contrast, the written procedures of the three university-run IRBs included all required procedures. We found one instance in which failure to have required written policies resulted in a further violation of VA regulations. Specifically, the previously discussed esophageal biopsy, which was conducted without consent, was not reported to the IRB or OHRP as required. VA regulations require institutions to ensure that "serious or continuing noncompliance" by investigators is reported to the IRB. A similar report must be filed with OHRP if the institution has an HHS-approved assurance, as did the medical center involved. The Associate Chief of Staff for Research and Development told us that he did not report the event to the IRB or OHRP because he followed the procedures for handling scientific misconduct outlined in VA's policy manual.
Nothing in the IRB’s project files for that investigator indicated a finding or report of noncompliance, imposition of any special restrictions or conditions for future research, or suspension or termination of research. We found that some IRB minutes did not comply with VA regulations, which require the minutes to include a record of actions, the basis for requiring changes in or disapproving research, and a written summary of discussions of controverted issues and their resolution. At each site, we reviewed from four to seven sets of minutes from IRB meetings held from December 1997 through October 1999. IRB actions were almost always clearly recorded in the minutes we examined at each site. Minutes from six facilities routinely included written summaries of discussions and reasons for actions. Two VA-run IRBs, however, rarely included substantive discussions of these matters in their minutes. Facilities also varied in their compliance with VA regulations about recording votes by IRB members during project review. The regulations state that minutes of IRB meetings must indicate the number of members voting for and against and the number of those abstaining. Two VA-run IRBs typically recorded votes as unanimous, and minutes from one other VA-run IRB recorded some votes as “approved,” without specifying vote totals. Without exact numbers, the presence of a majority of IRB members required during each vote cannot be confirmed. The voting records in minutes from the remaining IRBs we visited were generally in compliance with regulations. However, in one set of minutes from one site, we found that the total number of votes cast for each decision consistently exceeded the number of members listed in attendance. 
We identified three specific weaknesses in VA's system for protecting human subjects: not ensuring that research staff have appropriate guidance, insufficient monitoring and oversight activity, and not ensuring that the necessary funds for human subject protections are provided. These weaknesses indicate that human subject protection issues have not historically received adequate attention from VA headquarters. VA headquarters has not provided the guidance necessary to ensure that its medical center staff are adequately informed about requirements for the protection of human research subjects. We found that VA did not develop a systemwide educational program, ensure that each of its facilities had an appropriate training program in place, or provide guidance about training to its facilities. We also found problems with the guidance VA provides about procedures for handling informed consent records. Efforts to protect the rights and welfare of human subjects are undermined when research staff have not been given clear, comprehensive guidance about human subject protections. VA headquarters officials told us that VA did not have a systemwide educational program devoted to human subject protection issues and that more training is needed. We found that three of the medical centers we visited had no educational program for IRB members, IRB staff, or investigators. From its October 1999 survey of VA field management, VA headquarters research officials learned that 12 of 22 Veterans Integrated Service Networks did not have an adequate plan for the ongoing education of IRB members, IRB administrative staff, or investigators about the regulatory requirements for protecting human subjects. In particular, medical centers with small research programs identified difficulties in establishing educational programs. Those facilities that had programs often reported that their university affiliates ran the training programs.
A need for increased educational guidance from headquarters was one of the most commonly identified issues regarding human subject protections in the survey. OHRP and HHS’s Office of Inspector General have stressed that educational programs are critical to ensuring that IRBs comply with regulations and are able to assess the acceptability of research proposals in light of those regulations and to ensuring that investigators understand their responsibilities to protect human subjects. On the other hand, two VA-run IRBs and the three university-run IRBs we visited have implemented their own educational programs for both investigators and IRB members and staff, generally without guidance from headquarters. These programs included training new IRB members, devoting a portion of IRB meetings to discussion of issues involving the protection of human subjects, having some IRB members and staff attend national conferences about IRB operations, and instituting a certification program for investigators. Although we did not evaluate the adequacy of these programs, one of these sites, a university affiliate, developed an educational program that has been cited by HHS’s Office of Inspector General as a best practice for training in human subject protection issues. In addition to finding that VA did not have a systemwide educational program, we found problems with VA guidance for documenting consent to participate in research. VA’s policy manual includes two requirements that go beyond its regulations for the protection of human subjects: (1) the original signed consent form is to be placed in the subject’s medical record and (2) investigators are to use a standard template developed by VA to obtain consent. A VA official in ORD told us that the purpose of requiring the placement of signed research consent forms in medical records is to ensure that treating professionals are aware of relevant medical information. 
He acknowledged, however, that consent forms in medical records are not always readily accessible to treatment staff because they may be housed in old volumes of medical records maintained in storage areas. He also noted that medical records personnel at some VA medical centers have discarded consent forms rather than filing them. Our findings confirmed this. We were unable to locate consent forms in 20 percent of 187 medical records we reviewed at 7 of the 8 medical centers we visited. The remaining medical center we visited recently developed a system for scanning signed consent documents into its electronic medical records. However, these consent forms were not located in a part of the electronic record that would be routinely accessed by treating personnel. Some medical center research staff suggested that placing a synopsis of each study in a prominent place within subjects’ medical records would ensure that treating professionals know about relevant research participation, thus minimizing risks to subjects. We observed such a strategy at the Denver VA Medical Center, where a special flag in each subject’s electronic medical record links the reader to a brief summary of the study and to any investigational drugs involved. VA has not implemented a systemwide procedure for indicating research involvement in electronic medical records. Another area of concern is VA’s standard template for informed consent. This template includes space for investigators to enter study-specific information and exact language for requirements common to all consent forms. VA’s policy manual requires all VA investigators to use this form. We identified several problems with this template. The template does not reflect the regulatory requirement that a contact be provided for subjects to call with questions about their rights as research participants. For studies conducted at both VA and non-VA locations, use of the VA template created problems. 
In these cases, adherence to VA’s policy requires development and IRB approval of two consent forms—one based on VA’s template and one for the other location. Failure to use an appropriate IRB-approved consent form in these dual-form studies was the reason subjects signed nonapproved forms in 10 of the 33 cases previously discussed. VA has not provided clear guidance about the role of a witness to the consent process. Under VA regulations, a witness signature is needed only when the elements of informed consent have been presented orally. We found only 1 study in our sample of 146 in which consent was obtained orally. However, we found that 405 of the 540 signed consent forms we examined had been signed by a witness. OHRP guidance indicates that a witness to a subject’s consent to participate in research may be appropriate when aspects of the study create concerns about the enrollment process. In such cases, an independent witness can provide a valuable check on the consent process to certify, for example, that key information was properly conveyed and that subjects were not unduly coerced into participation. On the other hand, such a witness can represent an unnecessary intrusion into a potential subject’s privacy. VA’s consent template includes a line for the signature of a witness, without specifying who may serve as a witness, what the witness is attesting to, or the circumstances under which the witness is needed. Similarly, VA’s policy manual lacks guidance about who should serve as a witness or what that person’s role is. We found that VA did not have an effective system for monitoring protections of human subjects. Several instances follow. VA headquarters and affected medical centers were generally unaware of regulatory investigations and impending actions by OHRP or FDA against university-run IRBs until after the regulatory sanctions were applied. 
VA could not ensure that FDA would notify it of planned inspections or provide copies of post-inspection correspondence, because VA did not provide FDA with a list of its university-run IRBs until July 2000. Similarly, VA did not have a complete list of those medical centers that used their own IRBs, relied on a university-run IRB, or were covered by an OHRP assurance until July 2000. Until OHRP's regulatory action against the West Los Angeles VA Medical Center, VA was unaware that each of its facilities was required to provide a written assurance that it would comply with all federal regulations regarding the protection of human subjects. Written assurances facilitate proper oversight by ensuring documentation of core agreements between VA headquarters and IRBs. They also can provide evidence of knowledge of the regulations governing human subject protections and demonstrate an institution's commitment to those protections. When VA subsequently obtained these assurances, it did not require medical centers to submit local written procedures for implementing human subject protections, as the regulations required. Review of written procedures can indicate gaps or errors in required local policies and procedures. VA headquarters has not provided medical centers with guidance on ensuring access to minutes or other key information when they arrange for the services of a university-run IRB. As a result, one medical center we visited did not have access to the minutes of its university-run IRB, and two medical centers affected by regulatory sanctions against their affiliated universities had not monitored IRB minutes to assess compliance with regulations. Furthermore, we found that VA headquarters and the medical centers we visited did not effectively monitor investigators and their studies. Specifically, only one of the eight medical centers we visited checked whether investigators provided subjects with the correct IRB-approved consent form.
That medical center recently began checking one signed consent form from each study as part of its continuing review. In addition, the files of one university-run IRB we visited did not correctly identify which researchers at the VA medical center were responsible for the studies the IRB had approved because the medical center required that department chairs rather than researchers be listed as principal investigators. Responsibility for funding human subject protections at medical centers is diffused across several decisionmakers, each of whom may also have competing priorities for the same funds. As a result, no one official is responsible for ensuring that medical center research programs have the resources necessary to support IRB operations and provide training in human subject protections. Although VA has not determined the funding amounts needed for human subject protection activities at the medical centers, research officials at five of the eight medical centers we visited told us that they had insufficient funds to ensure adequate operation of their human subject protection systems. We found that medical centers typically relied on several sources of funds to support the indirect costs of research, which include human subject protection activities. These sources included VA’s research appropriation, VA’s medical care appropriation, and non-VA research sponsors such as NIH or pharmaceutical companies. Different decisionmakers control the funds potentially available to a medical center from these sources. The medical center’s associate chief of staff for research and development controls the portion of the research appropriation targeted for the indirect costs of research. The medical center’s director controls the portion of the medical care appropriation allocated for indirect costs of research. 
Funds from non-VA research sponsors are generally held by a medical center’s nonprofit research foundation and are controlled by its board of directors, which has discretion over their use. As a result, responsibility for ensuring that human subject protections are adequately funded at each medical center is diffused across several decisionmakers. In addition, the decisionmakers at some of the medical centers we visited told us that they did not allocate additional funds for human subject protection activities because they had to consider those needs against the competing priorities of research support and medical care delivery. Headquarters research officials confirmed that these organizational tensions have created a situation in which there is no clear focus of responsibility for funding human subject protection activities at medical centers. One of the indirect costs of operating an IRB is the time spent by IRB chairs and members meeting their IRB responsibilities. Headquarters research officials told us that providing release time for IRB chairs and members has been a long-standing problem. VA staff at the medical centers we visited conduct their IRB activity as a collateral duty. We were told that the time commitment for members, and particularly for IRB chairs, is significant. Chairs and members spend time reviewing protocols before meetings, corresponding with investigators, attending IRB meetings, and preparing and reviewing documentation. We were told that the lack of release time made it difficult to recruit and retain IRB chairs and members. We found one instance in which a university paid VA to subsidize the costs of covering the emergency room duties of a VA physician who chaired an IRB that VA used. In another instance, a research official at one medical center told us that IRB meetings are held in the evening and that the nonprofit foundation pays IRB members. 
This arrangement allows members to fulfill their primary VA obligations during the day without the collateral responsibility of serving on the IRB. Research officials at five of the eight medical centers we visited reported that they had insufficient funds to ensure adequate operation of their human subject protection systems. Of particular concern, officials told us, was that lack of funds prevented hiring and training staff. Officials from some medical centers also told us that their nonprofit research foundations recognized that the level of VA funding for IRB operations was inadequate, and therefore contributed varying amounts of funds for specific local needs, such as training investigators in human subject protections or hiring IRB staff. For example, one nonprofit contributed $25,000 in fiscal year 2000 to support investigator training in human subject protections. Some VA nonprofit foundations and universities are charging private industry sponsors a fee for IRB review of their projects to help support IRB operations. However, headquarters research officials told us that VA has not determined the funding amounts needed for human subject protection activities at the medical centers. They said that such a determination is necessary for planning funding levels and ensuring that human subject protection activities are appropriately funded. Substantial corrective actions have been implemented at three medical centers in response to sanctions by regulatory agencies against their human research programs. These steps represent progress in meeting the requirements imposed by regulators and VA management, and each of the facilities, despite some difficulties, has resumed human research activities. VA has, however, been slow to identify systemwide deficiencies and to obtain information needed to step up oversight of human subject protection systems at its medical centers. 
Nonetheless, VA’s recent responses, such as establishment of the Office of Research Compliance and Assurance (ORCA) to monitor human subject protections at individual medical centers and across the system, are promising. The three medical centers and their affiliated universities we visited that had actions taken against them by regulators—West Los Angeles, Chicago Westside, and Denver—have made progress in implementing substantial changes to their human subject protection systems. Their written procedures appear to be in compliance with regulations, and their staffing levels seem reasonable for the workload. These medical centers and their affiliated universities, along with two others, had been affected by serious regulatory sanctions. Regulators found numerous problems at these institutions, including failure to obtain informed consent, failure to conduct adequate and timely continuing review of research, and failure to have adequate written IRB policies and procedures. OHRP deactivated West Los Angeles VA Medical Center’s multiple project assurance with HHS on March 22, 1999. On August 27, 1999, it restricted the assurance held by the University of Illinois at Chicago, which served as the IRB of record for the Chicago Westside VA Medical Center. On September 13, 1999, FDA suspended certain research projects at a consortium of six Colorado research institutions, including the Denver VA Medical Center. The University of Colorado, the location of the consortium’s IRB, suspended research with human subjects at all six sites in response to a letter from OHRP dated September 22, 1999, which raised concerns about IRB noncompliance with regulations. On December 17, 1999, OHRP restricted the multiple project assurance with Virginia Commonwealth University, which had been the IRB of record for the Richmond VA Medical Center. FDA had issued a warning letter to the university several months earlier about the IRB operations. 
On January 19, 2000, OHRP restricted the multiple project assurance with the University of Alabama at Birmingham, which was the IRB of record for the Birmingham VA Medical Center. There were three immediate responses in West Los Angeles, Chicago, and Denver to the sanctions imposed by regulatory agencies: a suspension of enrollment of new subjects in almost all research projects; an assessment of the appropriateness of the continued participation of previously enrolled subjects; and a determination by VA headquarters and affiliated universities of actions needed to improve human subject protection programs at each site. Each medical center or affiliated university that we visited then made extensive changes to its human subject protection system. These changes involved reconstituting IRBs; increasing the number of IRB administrative staff; training IRB members, staff, and investigators in the principles and procedures of human subject protection; creating or extensively revising IRB procedures; increasing working space for IRB operations; creating new databases for tracking protocols through the review process; re-reviewing projects; and resuming research activities. As of February 2000, all projects at the West Los Angeles VA Medical Center had been re-reviewed by an IRB. As of June 2000, all projects for the Chicago Westside VA Medical Center had been submitted to university-run IRBs for re-review, and as of July 2000, all projects had been re-reviewed for the Denver VA Medical Center. The Denver VA Medical Center’s IRB has been informed by OHRP and FDA that as of June 2000, its corrective actions are appropriate. On July 18, 2000, OHRP removed the restriction on the University of Illinois at Chicago stating that the university has developed and implemented an improved system for the protection of human subjects in research and has adequately completed all required actions. 
Responses varied across sites, however, because of differing responsibilities for IRB operations and site-specific problems that needed to be addressed. For example, at the West Los Angeles VA Medical Center, which operated its own IRB, VA headquarters and medical center officials made extensive changes in research personnel responsible for human subject protections. From April 1999 to the time of our visit in March 2000, about 50 employees had been rotated through the program with a few assigned full-time to support research and development and IRB operations. The university affiliated with the Chicago Westside VA Medical Center hired a nationally known expert in human subject protections to lead a comprehensive restructuring of its IRB operations. We identified two issues of concern at the West Los Angeles VA Medical Center. First, VA’s authorization of a resumption of IRB operations at West Los Angeles on April 19, 1999—less than 1 month after OHRP’s deactivation of its multiple project assurance—was premature. At that time, the medical center still lacked approved, written procedures for operation. Such procedures are required by regulations. It also was relying on untrained administrative staff to assist the newly formed IRBs. Furthermore, VA’s investigators had not been trained in human subject protection issues. Our second issue of concern is that officials at the West Los Angeles VA Medical Center were particularly slow to respond to OHRP’s requirements. In its 1999 letter deactivating the medical center’s multiple project assurance, OHRP noted the medical center’s continued lack of responsiveness to issues raised by OHRP over a 5-year period. For example, in 1994, OHRP required that the medical center establish a data and safety monitoring board to oversee studies involving subjects with severe psychiatric disorders. 
It took until February 2000 for medical center officials to approve standard operating procedures for the data and safety monitoring board and to hire its staff. In another instance, OHRP cited the medical center in 1995 for a lack of adequate written procedures for human subject protections. However, it took the medical center until February 2000 to develop and approve these procedures. Similarly, in 1995, OHRP strongly recommended that medical center officials develop an ongoing training program for investigators. Medical center officials told us they plan to begin such training in September 2000. At the Chicago Westside VA Medical Center, we found that, in permitting the continued participation of previously enrolled subjects in some projects, VA and the university-run IRBs did not ensure that continuing review requirements were met for these projects. When we raised this issue with officials during our February 2000 visit, they acknowledged this lack of oversight. They have since required investigators for these projects to submit materials for continuing review. We found that the Chicago Westside VA Medical Center did not play an active role in assisting its university-run IRBs in improving their human subject protection systems. The medical center organizational chart for research and development did not show any linkage with the three university IRBs. The medical center had only one representative among the 18 members of the biomedical IRB and one on the 17-member combined biomedical-behavioral IRB. There were no VA representatives on the third IRB, which reviewed behavioral studies, because, as officials told us, VA conducted few such studies. At the time of our visit to the medical center—over 5 months after the OHRP action—the medical center had done little to improve its communication with the IRBs despite the recommendation to do so made by the VA headquarters site visit team in September 1999. 
Although one local VA research official participated on a university committee charged with prioritizing studies for re-review and made suggestions to modify the IRB form used by investigators to submit protocols for review, the medical center had not established a mechanism for routine contact with and monitoring of the IRBs. In addition, the medical center was unaware of VA protocols being submitted for IRB review, IRB actions to approve or disapprove continuation of studies, and serious adverse events that could affect veterans who were subjects of research. At the time of our visit, the medical center was unable to provide us with reliable data on which investigators had been trained by the university in human subject protection regulations and issues. Furthermore, as of July 2000, the medical center had not responded to a May 2000 request from the university for comments on their new IRB procedures manual. In contrast, the Denver VA Medical Center established mechanisms to enhance communication between the research and development program and its three university-run IRBs by having regular meetings and increasing the number of VA personnel on the IRBs. As of June 2000, the chair of one of the university-run IRBs and the co-chair of another were VA employees. Five other VA employees served as members of the IRBs. Medical center personnel were working closely with their counterparts in the university to design a database that would allow VA research officials access to VA project information at the university-run IRBs. When the IRBs at their affiliated universities faced sanctions by regulatory agencies, officials at the Richmond and Birmingham medical centers chose to establish their own IRBs. They told us they did so to increase their control over the research review process. These officials told us they each created an IRB, developed written procedures, trained IRB members, and resumed their research programs after re-reviewing their projects. 
In addition, the Birmingham VA Medical Center has trained investigators and IRB staff, and the Richmond VA Medical Center has trained research staff. VA has been slow to recognize and address systemwide deficiencies in its human subject protection activities. Although OHRP identified problems with human subject protections at the West Los Angeles VA Medical Center in 1994, VA did not have a plan to address systemwide concerns involving research until July 1998. VA did not begin to implement systemwide changes until after OHRP took regulatory action against the medical center in March 1999. VA’s initial responses to regulators’ actions affecting the West Los Angeles VA Medical Center and other medical centers were crisis-driven and site-specific. Specifically, headquarters formed teams that conducted site visits to determine actions needed at the affected medical centers. Headquarters monitored corrective actions at the medical centers primarily through an exchange of reports and correspondence. In July 1998, VA developed a plan to reorganize its field research operations. This plan addressed a variety of research concerns including the involvement of human subjects and the ethical conduct of studies. Only recently, however, has VA headquarters begun to implement systemwide changes to improve its human subject protections. Its steps have included providing information to investigators and research staff, obtaining information about medical centers’ research programs, and making organizational changes to enhance monitoring and oversight of research involving human subjects. These steps have been slowly implemented, but they provide a promising foundation for improvements to protections for human subjects in VA research. VA headquarters officials have taken several steps to provide information to VA investigators and local research staff about human subject protections. The initial information provided by ORD described issues at affected medical centers. 
It was not until October 1999 that ORD provided medical centers with specific actions that could be helpful in strengthening their human subject research programs. Starting with its May 1999 bimonthly conference call with associate chiefs of staff for research and development, ORD began discussing human subject protection issues in light of the March 1999 OHRP action against the West Los Angeles VA Medical Center. Also in May 1999, ORD began to plan a series of educational programs for investigators, IRB members, research administrators, and medical center directors focused on human subject protection issues. In October 1999, ORD held a nationwide videoconference in which OHRP and VA research officials discussed human subject protection issues and answered questions from VA staff. Also in October 1999, ORD began to list on its Web site human subject protection information available through OHRP and other organizations and distributed a summary of lessons learned from institutions that had been affected by recent sanctions by regulatory agencies. ORD officials told us they expect to complete a draft of a revised policy manual for VA research by September 2000. ORCA officials have also implemented initiatives. For example, ORCA began bimonthly teleconference calls in February 2000 with IRB and research officials at medical centers to share information and obtain input on human subject protection issues. In March 2000, ORCA issued its first newsletter to local research officials. This educational newsletter, planned as a twice-monthly series, will address informed consent and human subject protection issues. In April 2000, ORCA convened a group of VA research staff and outside experts in human subject protections to identify training courses developed elsewhere that VA could use. The group also plans to develop guidance and strategies for VA to use to train IRB staff, members, and investigators. 
Beginning in May 2000, ORCA sent the first of three notices to local research programs alerting them to current human subject protection concerns. In June 2000, it began issuing a monthly set of news clippings on human subject protection issues. In 1999, VA’s National Center for Ethics sponsored a conference on ethics in research and issued related reports, including a discussion of the principles guiding the ethical conduct of research involving participants with impaired capacity to consent. VA is participating in national efforts to develop policies and procedures for protecting these participants. VA headquarters officials have acknowledged that they lacked key information about research programs at medical centers. To obtain more accurate and complete information, they have taken several steps. Examples follow. In October 1998, VA research officials began to develop a new computerized data system to improve the comprehensiveness and accuracy of data about studies involving human subjects at VA medical centers. As of June 2000, development was still under way. In April 1999, VA asked its medical centers whether they operated their own IRB or relied on the IRB of an affiliated university. VA also asked whether assurances with OHRP were involved. ORCA finished verifying this information in July 2000. In October 1999, ORD sent a questionnaire to the director of each Veterans Integrated Service Network to assess the adequacy of staffing and support for human subject protections at the medical centers in each network. A lack of adequate resources was one of the three most common problems identified. Sixteen of the 22 networks reported inadequate IRB support, including staff, space, and equipment. Fourteen networks identified education as a priority issue and cited the need for educational opportunities and guidance documents. 
In May 2000, headquarters sent information to the networks on educational opportunities and made suggestions for the level of administrative staffing of IRBs. By February 2000, VA had accepted an assurance from each medical center conducting human research that it would comply with regulations for the protection of human subjects. In April 2000, VA’s Chief Financial Officer reported that VA would implement a system to allow for the explicit accounting of funds from the medical care appropriation that are used by medical centers to support the indirect costs of research. These steps are necessary to obtain key information about human subject research programs at medical centers. This information will allow headquarters officials to determine the additional steps that may be needed locally or systemwide to ensure compliance with regulations and the protection of human subjects. VA is implementing two organizational changes to enhance its monitoring and oversight of human research programs. The Under Secretary for Health announced these changes in April 1999, but as of August 2000, they had not been fully implemented. They are designed to allow routine onsite monitoring of research programs, thereby helping medical centers identify weaknesses and develop strategies to improve compliance with regulations and the protection of human subjects. Although promising in concept, it is too soon to determine whether the initiatives described below will fulfill their objectives. In April 1999, VA announced the creation of ORCA. VA did not begin staffing this office until it appointed the chief officer in December 1999. VA plans that ORCA will have eight headquarters staff by September 30, 2000, and four regional offices with four staff each by December 31, 2000. As of July 2000, VA had not completed its staffing of the headquarters component and had not filled any regional office positions. 
Although ORCA’s specific plans for monitoring medical center research activities were still under development in summer 2000, officials told us that they planned to conduct a site visit on a rotating basis to each medical center conducting human research. As of July 2000, ORCA officials told us they had not developed a specific schedule for conducting these visits, but they expect to do so when the regional offices are staffed. ORCA’s headquarters has a budget of $600,000 for fiscal year 2000 and $1.5 million for fiscal year 2001. The regional offices have a budget of $1.9 million for fiscal year 2000 and $2.3 million for fiscal year 2001. In August 2000, VA awarded a $5.8 million, 5-year contract for external accreditation of its IRBs. This contract requires the contractor to conduct a site visit every 3 years to each medical center conducting human research. The contractor is expected to review IRB performance and to assess its compliance with regulations. VA officials told us that VA expects that the university-run IRBs it uses will grant access to the accreditation team. VA is the first research organization to have an external accreditation of its human research programs. VA has not ensured that its medical centers have fully implemented required procedures for the protection of human subjects. Primary responsibility for implementation of these protections lies with local institutions—medical centers and their IRBs. Although we cannot generalize from our sample to the universe of VA research institutions, we found sufficient evidence of noncompliance with applicable federal regulations to be concerned. We also found that incomplete access to information about adverse events experienced by research participants made it difficult for IRBs to fulfill their mandate. We found widespread weaknesses in the management of human subject protections that VA had not identified because of its low level of monitoring. 
VA’s past failure to ensure that its research facilities had the resources, including staff, training, and guidance, needed to accomplish their obligations suggests that headquarters has not given sufficient attention or priority to the protection of human subjects. Despite a 5-year record of problems at the West Los Angeles VA Medical Center, VA did not begin to implement systemwide improvements until OHRP took regulatory action against the medical center. VA’s initial actions were primarily crisis-driven and site-specific. Generally, appropriate corrective actions have now been implemented at each of the three medical centers we visited that were affected by regulatory sanctions. However, VA’s progress on systemwide improvements to its human subject protection system has been slow. VA only recently began to obtain the information it needs—such as identifying which medical centers use their own IRBs and which rely on university-run IRBs—to plan and implement such improvements. Some facilities we visited and projects we reviewed appeared to have reasonably strong protections for the rights and welfare of participants. VA’s recent efforts to improve its human subject protections systemwide and its commitment to developing an effective oversight and monitoring system are important steps toward ensuring that all VA facilities meet requirements, but it is too soon to determine how well these initiatives will fulfill their objectives. VA has a long history of important contributions to medical research, and it could set important precedents in improving human research protections. For example, VA is the first federal agency to take action to externally accredit its IRBs. Whether VA medical centers establish their own IRBs or work with university-run IRBs, VA needs to ensure that the IRBs have adequate resources, and VA must exercise its oversight authority if it is to know what guidance, preventive efforts, or corrective actions are needed. 
To strengthen VA’s protections for human subjects, we recommend that the Acting Secretary of Veterans Affairs direct the Under Secretary for Health to take immediate steps to ensure that VA medical centers, their IRBs— whether operated by VA or not—and VA investigators comply with all applicable regulations for the protection of human subjects by providing research staff with current, comprehensive, and clear guidance regarding protections for the rights and welfare of human research subjects; providing periodic training to investigators, IRB members, and IRB staff about research ethics and standards for protecting human subjects; developing a mechanism for handling adverse event reports to ensure that IRBs have the information they need to safeguard the rights and welfare of human research participants; expediting development of information needed to monitor local protection systems, investigators, and studies and to ensure that oversight activities are implemented; and determining the funding levels needed to support human subject protection activities at medical centers and ensuring an appropriate allocation of funds to support these activities. In written comments (see app. II) on a draft of this report, VA agreed with our findings and recommendations. VA said that initiatives it has already planned and implemented will provide a foundation for a national prototype in effective human subject protections. Although VA agreed that its implementation of a systematic approach to human subject protections has been slow to develop, it provided clarification regarding statements in the draft report that VA had not focused attention on systemwide weaknesses until after the March 1999 regulatory action at the West Los Angeles VA Medical Center. VA stated that planning for the establishment of regional offices for risk management and research compliance had begun almost 1 year earlier. We have modified the report accordingly. 
In concurring with our recommendations to provide research staff with current, comprehensive, and clear guidance and training about human subject protections, VA identified initiatives planned or under way to improve its guidance, disseminate the guidance, and train research staff in its use. These initiatives represent promising efforts. Whether VA’s plans for guidance and training are effective will depend upon implementation details. VA must ensure that its research staff have access to and receive current guidance and training to enable them to meet their obligations to protect the rights and welfare of human research subjects. VA agreed with our recommendation to improve adverse event reporting and said it has expanded the distribution of reports from its data monitoring boards to include all appropriate IRBs. VA has also indicated its intention to participate in governmentwide efforts to address this matter. These are important first steps in ensuring that IRBs have the information they need to safeguard the rights and welfare of human subjects. However, because the VA monitoring boards analyze only those adverse events reported in VA’s multicenter Cooperative Studies program, further efforts to address reports of adverse events from other studies are necessary. VA also concurred with our recommendation to improve monitoring and oversight of human subject protection activities and identified several activities it has planned or implemented, such as external accreditation of IRBs and establishment of performance measures related to human subject protections for medical center research officials. Oversight and monitoring are essential if VA is to know whether the procedures at its medical centers and affiliated universities comply with human subject protection regulations. Whether the actions VA plans to take in this area will be sufficient depends on how effectively they are implemented. 
Finally, VA concurred with our recommendation to determine the funding levels needed to support human subject protection activities at medical centers and then ensure an appropriate allocation of funds to support these activities. VA’s response notes that it has begun to account for medical center expenditures associated with research support—an important first step toward determining necessary funding levels. However, VA did not discuss how it would ensure that funds are appropriately allocated to human subject protection activities. As we noted, organizational tensions within VA have created a situation in which there is no clear focus of responsibility for funding such activities at medical centers. Until this is addressed, we are concerned that VA cannot ensure that human subject protections will be appropriately funded. VA officials also provided technical comments, which we incorporated where appropriate. We are sending this report to the Honorable Hershel W. Gober, Acting Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-7101 if you or your staff have any questions. An additional GAO contact and the names of other staff who made major contributions to this report are listed in app. III. Our objectives were to (1) assess the Department of Veterans Affairs’ (VA) implementation of human subject protections, (2) identify whether weaknesses exist in VA’s system for protecting human subjects, and (3) assess VA’s actions to improve human subject protections at those sites affected by sanctions imposed by regulatory agencies and throughout VA’s health care system. 
To achieve these objectives, we reviewed VA, Food and Drug Administration (FDA), and Department of Health and Human Services (HHS) regulations and VA policies for the protection of human subjects; interviewed VA research officials; visited selected VA medical centers to assess local implementation of these standards; and visited VA medical centers affected by research restrictions. We also interviewed officials from the Office for Human Research Protections (OHRP) and reviewed HHS guidance. We reviewed records of congressional hearings; reports on human subject protections, including those issued by the HHS Office of Inspector General, the Institute of Medicine, and the National Bioethics Advisory Commission; and the literature on the history of human subject protections. To assess VA’s implementation of human subject protections, we conducted site visits at eight VA medical centers: Atlanta, Ga.; Baltimore, Md.; Cleveland, Ohio; Dallas, Tex.; Louisville, Ky.; Providence, R.I.; Seattle, Wash.; and Washington, D.C. We selected sites to reflect major differences in VA research programs (see table 1). First, we selected medical centers that differed in the number of studies they conduct with human subjects. Second, we selected medical centers that differed in the institutions responsible for operating the committee tasked with reviewing each study to assess its protections for human subjects—the institutional review board (IRB). Third, we selected facilities that differed in the assurance arrangements they had with OHRP. Some institutions had filed with OHRP a legally binding commitment to comply with federal regulations, called a multiple project assurance; other institutions had not. Our results from these eight medical centers cannot be generalized to other sites. At each site, we interviewed local research personnel, including the associate chief of staff for research and development, the IRB chair, and staff responsible for providing administrative support to the IRB. 
We attended an IRB meeting at six sites (Atlanta, Cleveland, Dallas, Providence, Seattle, and Washington, D.C.). We also reviewed written procedures describing how the IRB and institution implement human subject protections and a sample of four to seven sets of IRB minutes from the last 2 years (December 1997 through October 1999) at each site. We randomly selected a sample of 15 to 22 projects at each site for detailed analysis. To ensure that our selection included research on potentially vulnerable participants, we oversampled studies designed to provide information about psychiatric conditions that can affect decision-making capacity, such as dementia, schizophrenia, and depression. Up to one-fourth of the studies we sampled at any one site were in this category. We examined IRB records for each project in our sample (146 in all, including 27 psychiatric studies). For the subset of 138 studies that required written consent, we reviewed the most recently approved consent form. To determine whether subjects had signed appropriate consent forms indicating willingness to participate in research and whether those forms were available as required, we examined about 5 signed consent forms maintained in investigators’ files from each of 125 studies. We also tried to obtain about two signed consent forms from each project in paper medical records. This sample included 98 projects. Some medical records could not be made readily available to us. For example, some medical records were at a different location during our visit. To assess corrective actions at VA medical centers in response to restrictions on their human research programs, we conducted 2-day visits to three facilities where human research was suspended—Chicago Westside, Ill.; Denver, Colo.; and West Los Angeles, Calif. Our site visit team included an expert in human subject protections under contract to us. 
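The sampling approach described above—random selection of projects per site, with psychiatric studies oversampled up to a one-fourth cap—can be sketched in a few lines. This is a hypothetical illustration only: the function name, the data layout, and the exact quota logic are our assumptions, not GAO's actual procedure.

```python
import random

def select_projects(projects, sample_size, psych_cap_fraction=0.25, seed=0):
    """Randomly sample projects, oversampling psychiatric studies up to a cap.

    `projects` is a list of dicts with a boolean "psychiatric" flag.
    Illustrative sketch only, not GAO's actual sampling procedure.
    """
    rng = random.Random(seed)
    psych = [p for p in projects if p["psychiatric"]]
    other = [p for p in projects if not p["psychiatric"]]

    # Take psychiatric studies first, up to the cap (one-fourth of the sample)
    psych_quota = min(len(psych), int(sample_size * psych_cap_fraction))
    sample = rng.sample(psych, psych_quota)

    # Fill the remainder of the sample from the non-psychiatric studies
    remainder = sample_size - len(sample)
    sample += rng.sample(other, min(remainder, len(other)))
    return sample
```

Drawing the capped psychiatric quota first and then filling the rest at random guarantees that vulnerable-participant studies are represented without letting them dominate the sample.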
For each of these sites, we examined the OHRP and FDA reports associated with the restriction of human research, action plans for resolving identified problems, documents regarding current human subject operations, and the status of the research program and human subject protections at the time of our visits (February 2000 and March 2000). We discussed these matters with medical center officials and officials from IRBs at affiliated universities when they were involved. In addition, we reviewed documents and interviewed officials from two other medical centers—Birmingham, Ala., and Richmond, Va. These facilities were also affected when the IRBs of their affiliated universities were cited for noncompliance with federal regulations. Both have now established their own IRBs. We conducted our work between June 1999 and August 2000 in accordance with generally accepted government auditing standards. Cheryl Brand, Kristen Joan Anderson, Jacquelyn Clinton, Patricia Jones, and Janice Raynor also made key contributions to this report. In addition, Barry Bedrick and Julian Klazkin provided advice on legal issues, and Deborah Edwards provided advice on methodological issues. Medical Records Privacy: Access Needed for Health Research, But Oversight of Privacy Protections Is Limited (GAO/HEHS-99-55). Medical Records Privacy: Uses and Oversight of Patient Information in Research (GAO/T-HEHS-99-70). Scientific Research: Continued Vigilance Critical to Protecting Human Subjects (GAO/T-HEHS-96-102). Scientific Research: Continued Vigilance Critical to Protecting Human Subjects (GAO/HEHS-96-72). 
Pursuant to a congressional request, GAO reviewed the rights and welfare of veterans who volunteer to participate in research at the Department of Veterans Affairs (VA) and the effectiveness of VA's human subject protection system, focusing on: (1) VA's implementation of human subject protections; (2) whether weaknesses exist in VA's system for protecting human subjects; and (3) VA's actions to improve human subject protections at those sites affected by sanctions applied by regulatory agencies and throughout VA's health care system. GAO noted that: (1) VA has adopted a system of protections for human research subjects, but GAO found substantial problems with its implementation of these protections; (2) medical centers GAO visited did not comply with all regulations to protect the rights and welfare of research participants; (3) among problems GAO observed were failures to provide adequate information to subjects before they participated in research, inadequate reviews of proposed and ongoing research, insufficient staff and space for review boards, and incomplete documentation of review board activities; (4) GAO found relatively few problems at some sites that had stronger systems to protect human subjects, but GAO observed multiple problems at other sites; (5) although the results of GAO's visits to medical centers cannot be projected to VA as a whole, the extent of the problems GAO found strongly indicates that human subject protections at VA need to be strengthened; (6) three specific weaknesses have compromised VA's ability to protect human subjects in research; (7) VA headquarters has not provided medical center research staff with adequate guidance about human subject protections and thus has not ensured that research staff have all the information they need to protect the rights and welfare of human subjects; (8) insufficient monitoring and oversight of local human subject protections have permitted noncompliance with regulations to go undetected and uncorrected;
(9) VA has not ensured that funds needed for human subject protections are allocated for that purpose at the medical centers, with officials at some medical centers reporting that they did not have sufficient resources to accomplish their mandated responsibilities; (10) to VA's credit, substantial corrective actions have been implemented at three medical centers in response to sanctions by regulatory agencies taken against their human research programs, but VA's systemwide efforts at improving protections have been slow to develop; (11) medical centers affected by sanctions have taken numerous steps to improve human subject protections; and (12) VA has, however, been slow to take action to identify any systemwide deficiencies and obtain necessary information about the human subject protection systems at its medical centers.
DOD’s mission is to provide the military forces needed to deter war and protect the security of the United States. As shown in figure 1, the discretionary budget authority requested by DOD for fiscal year 2014 comprises about 53 percent of the budget request for discretionary programs throughout the federal government. DOD’s approximately $606 billion in requested fiscal year 2014 funding includes $526.6 billion in spending authority for departmental operations and $79.4 billion to support overseas contingency operations, such as those in Afghanistan. The Budget and Accounting Procedures Act of 1950 and FMFIA placed primary responsibility for establishing and maintaining internal control on the heads of federal agencies. FMFIA requires executive agencies to establish internal controls that reasonably ensure that obligations and costs comply with applicable law; that all assets are safeguarded against waste, loss, unauthorized use, and misappropriation; and that revenues and expenditures applicable to agency operations are recorded and accounted for properly. Those internal controls that pertain to the obligation, disbursement, and receipt of agency funds, as well as the recording of those transactions, are referred to in this report as funds control. Internal control is an integral component of an organization’s management that when properly implemented and operating effectively, provides reasonable assurance that the following objectives are being achieved: (1) effectiveness and efficiency of operations, (2) reliability of financial reporting, and (3) compliance with laws and regulations. Within this broad framework, DOD must design and implement effective controls, including funds control, and internal control over financial reporting. Auditors of DOD’s financial statements are to assess the effectiveness of these controls as part of a financial statement audit. 
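The budget figures above can be cross-checked with simple arithmetic. The sketch below uses only the amounts cited in this report; the implied government-wide discretionary total is a back-of-the-envelope derivation from the stated 53-percent share, not a figure from the budget request itself.

```python
# Cross-check of the fiscal year 2014 DOD budget figures cited above
# (amounts in billions of dollars, as stated in the report).
base_budget = 526.6   # departmental operations
oco = 79.4            # overseas contingency operations (e.g., Afghanistan)

total_request = base_budget + oco
assert round(total_request) == 606  # "approximately $606 billion"

# DOD's request is about 53 percent of government-wide discretionary
# budget authority, implying a total of roughly $1.14 trillion.
implied_discretionary_total = total_request / 0.53
print(f"DOD request: ${total_request:.1f}B")
print(f"Implied discretionary total: ~${implied_discretionary_total:.0f}B")
```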
Because budgetary information is widely and regularly used for management decision making on programs and operations, the Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer, commonly referred to as the DOD Comptroller, designated the SBR as DOD’s first priority for improving its financial management and achieving audit readiness. The financial information in the SBR is predominantly derived from a federal entity’s budgetary accounts, which are used by the entity to account for and track the use of public funds in accordance with budgetary accounting rules. The SBR provides information about budgetary resources made available to an agency as well as the status of those resources at a specific point in time. According to the Office of Management and Budget (OMB), the SBR was added as a basic federal financial statement so that the underlying budgetary accounting information could be audited and would, therefore, be more reliable for routine management use and budgetary reporting, including reporting in the President’s Budget. As noted above, one of the objectives of internal control generally, and funds control in particular, is to ensure compliance with applicable law. Executive agencies’ use of federal funds is governed by fiscal statutes that establish specific funds control requirements. Many of these laws have been codified in Title 31 of the United States Code, particularly in chapters 13, 15, and 33. These chapters contain the laws known commonly as the ADA, the Recording statute, the Miscellaneous Receipts statute, the Purpose statute, and the Bona Fide Needs statute, as well as the provisions establishing the procedures and officials responsible for the disbursement of federal funds and the provisions that govern the closing of appropriations accounts known as the Account Closing Statute. 
Many other government-wide and agency- specific provisions of permanent law govern the use of federal funds, such as the Adequacy of Appropriations Act and limitations on DOD incurring obligations for business system modernization, as do the provisions of annual appropriations acts. Appropriations acts prescribe the purpose, amount, and time for which appropriations are available for obligation and expenditure, and they often include additional permanent and temporary fiscal guidance. The ADA, in particular, is central to Congress’s ability to uphold its constitutional power of the purse, to hold executive branch officials accountable for proper use of budgetary resources, and to ensure proper stewardship and transparency of the use of public funds. As noted above, the act requires heads of federal agencies to establish by regulation a system of administrative controls over obligations and expenditures, commonly referred to as funds control regulations. The act also includes certain prohibitions, such as prohibiting federal officers and employees from authorizing or making obligations or expenditures in excess of the amount available in an apportionment or reapportionment, an appropriation or fund, or an allotment unless authorized by law. Once it is determined that there has been a violation, the ADA requires the agency head to report immediately to the President and Congress all relevant facts and a statement of actions taken, and transmit a copy of the report to the Comptroller General. The ADA also provides for possible administrative and criminal penalties for employees responsible for violations. Our analysis of over 300 audit and financial reports on DOD financial management operations issued over the last 7 years identified over 1,000 funds control weaknesses. We grouped these weaknesses into three major categories: (1) inadequate training, supervision, and management oversight; (2) ineffective transaction controls; and (3) inadequate business systems. 
In fiscal year 2013, the DOD Comptroller and DOD auditors continued to report funds control weaknesses and ADA violations related to these same areas. For example, in its November 2013 Agency Financial Report, DOD self-reported 16 material weaknesses in financial reporting, noting that it had no assurance of the effectiveness of the related controls. These weaknesses place DOD at risk of making program and operational decisions based on unreliable data and impair DOD's ability to improve its financial management operations and achieve the department's audit readiness goals. The long-standing, pervasive nature of funds control weaknesses poses a significant impediment to reliable financial management operations, including proper use of resources and achieving financial audit readiness as well as accountability and stewardship of taxpayer funds entrusted to DOD management. As discussed later in this report, DOD has numerous corrective actions under way to address these weaknesses, including actions related to its audit readiness efforts as well as efforts to address findings in auditor and ADA reports. Our analysis identified long-standing weaknesses across DOD components related to the following three areas: (1) Inadequate training, supervision, and management oversight of budgetary processes and controls. Training ensures that personnel have the skills to carry out their assigned duties; supervision is day-to-day guidance by a supervisor; and management oversight involves ensuring adequate supervisory guidance and training as well as overall monitoring of the subject matter area. (2) Inadequate transaction controls. These controls cover proper authorization and recording of budgetary transactions, such as obligations and disbursements (outlays); maintaining adequate supporting documentation; and proper and timely reporting of transactions, related summaries, and financial reports. (3) Ineffective business systems.
This category refers to business systems that do not have effective controls for recording, supporting, and reporting financial transactions, including budgetary transactions, and therefore, do not provide adequate controls over financial reporting on the results of operations and do not assure compliance with laws, such as the Federal Financial Management Improvement Act of 1996 (FFMIA) and the ADA. Figure 2 summarizes 1,006 reported funds control weaknesses that span DOD budgetary accounting, funds control, and financial reporting by three categories. Examples of findings related to the three major categories of funds control weaknesses follow. Many of the funds control reports identified more than one weakness. See appendix II for additional details on these findings. Training, supervision, and management oversight. In its November 2013 FIAR Plan Status Report, DOD continued to identify unqualified or inexperienced personnel, not only at the working level but also in senior- level positions, as a risk to achieving sound financial management operations and auditability. Standards for Internal Control in the Federal Government states that effective management of an organization’s workforce is essential to achieving positive results and is an important part of internal control. Qualified and continuous supervision should be provided to ensure that internal control objectives are achieved. In addition, deficiencies found during ongoing monitoring should be communicated to the individual responsible and to at least one level of management above that individual. Serious matters should be reported to top management. To help address skill needs, the NDAA for Fiscal Year 2012 authorized the DOD Comptroller to develop a financial management training and certification program. FIAR Plan Status Reports continue to identify the need for knowledgeable and qualified personnel as critical to achieving DOD’s financial improvement and audit readiness goals. 
DOD’s November 2013 FIAR Plan Status Report also identified a risk associated with identified control weaknesses not being corrected. DOD reported that the DOD Comptroller was formalizing a process and establishing a tracking system to closely monitor actions on independent auditor findings and recommendations resulting from financial audit readiness testing. Federal internal control standards also state that internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. The following examples illustrate long-standing and persistent funds control weaknesses in the area of training, supervision, and management oversight that we and others have identified in the past. As discussed later in this report, DOD has corrective actions under way through audit readiness efforts under the FIAR Plan and through other efforts related to improving training. In 2012, we reported that the training on the Army’s General Fund Enterprise Business System (GFEBS) and the Air Force’s Defense Enterprise Accounting and Management System (DEAMS) focused on a systems overview and how the systems were supposed to operate. While this was beneficial in identifying how GFEBS and DEAMS were different from the existing legacy systems, the training focused too much on concepts rather than the skills needed for users to perform their day-to-day operations. Without personnel with adequate skill sets and training to properly use the new systems, DOD staff members will not be positioned to control the use of funds for which they are responsible. We made five recommendations to the Secretary of Defense to ensure the correction of system problems prior to further system deployment, including user training. DOD concurred with four and partially concurred with one of the recommendations and described its efforts to address them. 
With regard to the partial concurrence, DOD stated that based on the nature of an identified system deficiency, it will determine whether to defer system implementation until it is corrected. In March 2010, Navy auditors reported that the Navy provided insufficient supervision of personnel responsible for documenting receipt and acceptance of goods and services and for processing invoices. As a result, Navy commands had incomplete documentation with which to validate the accuracy of 53 invoices valued at $231,555 for 19 contracts at three Navy Fleet Readiness Centers. Without effective supervision to ensure that sufficient documentation exists prior to payment, the reliability of Navy invoice payments (disbursements) is at risk. In addition, not properly managing the invoice process increases the risk of improper or fraudulent payments. In June 2010, Air Force auditors reported on their audit of the Air Force’s tri-annual review process, stating that the service experienced significant recurring difficulties in identifying and deobligating unneeded obligation amounts. From fiscal years 2006 through 2010, Air Force audit reports identified a 42-percent error rate for periodic review to monitor and adjust obligations as required by DOD policy. The DOD Financial Management Regulation (FMR) requires component organizations to perform tri-annual reviews to monitor commitment and obligation transactions for timeliness, accuracy, and completeness. Specifically, reviewers are required to review and determine the validity of commitments and obligations that have had no activity (expenditures or adjustments) for 120 days and deobligate any unneeded amounts of unliquidated obligations. Financial managers are required to implement effective internal controls to ensure timely completion of tri-annual reviews and any identified corrective actions. 
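The tri-annual review dormancy rule described above, that obligations with no activity for 120 days must be reviewed for validity and any unneeded amounts deobligated, can be sketched as a simple screening step. The data and field names below are hypothetical, for illustration only; they are not drawn from any DOD system.

```python
# Sketch of the DOD FMR tri-annual review dormancy screen: flag unliquidated
# obligations with no activity (expenditures or adjustments) for 120+ days
# so reviewers can validate them and deobligate unneeded amounts.
# Obligation records below are hypothetical.
from datetime import date, timedelta

DORMANCY_THRESHOLD_DAYS = 120

def flag_dormant_obligations(obligations, as_of):
    """Return obligations whose last activity is 120 or more days before as_of."""
    cutoff = as_of - timedelta(days=DORMANCY_THRESHOLD_DAYS)
    return [o for o in obligations if o["last_activity"] <= cutoff]

obligations = [
    {"id": "OB-001", "amount": 250_000, "last_activity": date(2013, 1, 15)},
    {"id": "OB-002", "amount": 80_000,  "last_activity": date(2013, 6, 1)},
    {"id": "OB-003", "amount": 12_500,  "last_activity": date(2012, 11, 30)},
]

dormant = flag_dormant_obligations(obligations, as_of=date(2013, 6, 30))
for o in dormant:
    print(f"{o['id']}: ${o['amount']:,} dormant; review for possible deobligation")
```

Amounts flagged by such a screen are not automatically invalid; as the report notes, reviewers must still determine validity before deobligating.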
However, Air Force Audit Agency reviews have reported that the Tri-annual Review Program did not identify millions of dollars of unsupported obligations that could have been used for other mission requirements. In July 2010, the DOD IG reported that the Army mismanaged $110 million of its Defense Emergency Response Fund (DERF) funding. According to the DOD IG report, when the Army spent its DERF emergency supplemental appropriations for items such as unrelated building repairs, furniture, and spare parts, instead of DOD needs arising from the terrorist attacks of September 11, 2001, it potentially violated the Purpose Statute (appropriations are applied to the objects for which they were made) and the Bona Fide Needs Rule (appropriations may only be used for needs arising within the period of their availability). The DOD IG report stated that the potential ADA violations occurred because the Army commands involved did not follow DOD policies and procedures to ensure that appropriated funds were expended for their intended purposes. The DOD IG made recommendations directed at improving related training and monitoring activities to ensure compliance with legal requirements and DOD policies. Transaction controls. Auditors have reported on DOD’s inability to provide effective funds control and report reliable financial information, including budgetary information for many years. DOD’s challenges in properly recording and adequately supporting its obligations and disbursements have impaired its ability to track and control the use of public funds. Standards for Internal Control in the Federal Government states that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. All transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. 
Federal internal control standards further state that key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. According to DOD’s FMR, obligations and expenditures are required to be recorded accurately and promptly, even if the recording results in a negative amount at the appropriation, fund, or other accounting level. However, we, the DOD IG, and military department auditors have reported long-standing weaknesses in DOD’s controls for proper authorization and recording, adequate support, and accurate reporting of budgetary transactions. In addition, we and the DOD IG have reported that DOD has not been able to locate support for transactions in order to satisfy audit requirements. The following examples demonstrate the persistent nature of DOD’s funds control weaknesses related to ineffective transaction controls that we and others have identified in the past. As discussed later in this report, DOD has corrective actions under way to address its transaction control weaknesses, particularly in regard to its audit readiness efforts. In March 2012, we reported that the Army was unable to locate supporting documentation for a sample of 250 active duty military payroll accounts. In March 2011, we worked with Army Human Resources Command, Army Finance Command, and Defense Finance and Accounting Service (DFAS) Indianapolis officials to obtain source documents that supported basic pay, allowances (such as allowances for housing costs), and entitlements (special pay related to type of duty, such as hazardous duty pay). After various offices were unable to provide supporting documentation, we suggested that the Army focus on the first 20 pay account sample items. When the Army continued to have difficulty locating supporting documentation, we suggested that the Army focus on the first 5 sample items. 
As of the end of September 2011, 6 months after receiving our initial sample request, the Army and DFAS were able to provide complete documentation for 2 of our 250 sample items, partial support for 3 sample items, and no support for the remaining 245 sample items. We recommended that the Secretary of the Army require personnel and pay-related documents that support military payroll transactions to be centrally located, retained in each service member’s Official Military Personnel File, or otherwise readily accessible; and that the Army’s Human Resources Command periodically review and confirm that service member Official Military Personnel File records are consistent and complete to support annual financial audit requirements. The Army concurred with our recommendations and is taking actions to address them. In December 2011, Navy auditors reported that Marine Corps authorizing officials approved travel expense disbursements that were not supported by receipts or were not allowable under the related guidance because Defense Travel System travel administrator functions (1) allowed complete access to the system by preparers and reviewers of vouchers and (2) were not separated from travel voucher review and approval functions, as required by DOD guidance. As a result, employees received reimbursement for unsupported travel expenses of $15,208 and unallowable expenses of $3,385. In January 2010, Army auditors reported significant weaknesses in transaction controls over disbursing functions at Multi-National Division-South. Army auditors reported that 16 out of 62 voucher packages reviewed (26 percent) contained missing or inaccurate documentation, such as a certified voucher or invoice, contract or deliverables information for the first and the final payment, and the name of the government official documenting receipt and acceptance of goods and services. 
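The sample results cited above can be verified arithmetically. The sketch below recomputes the Army payroll documentation rates and the Multi-National Division-South voucher error rate from the figures stated in the report.

```python
# Arithmetic check of the Army military payroll documentation sample:
# 250 sampled pay accounts; 2 fully supported, 3 partially supported,
# and the remainder with no supporting documentation.
sample_size = 250
fully_supported = 2
partially_supported = 3
unsupported = sample_size - fully_supported - partially_supported
assert unsupported == 245

unsupported_rate = unsupported / sample_size
print(f"Unsupported pay accounts: {unsupported} of {sample_size} ({unsupported_rate:.0%})")

# Multi-National Division-South voucher review: 16 of 62 packages had
# missing or inaccurate documentation, the reported 26 percent.
deficient, reviewed = 16, 62
assert round(deficient / reviewed * 100) == 26
```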
Army auditors also found inaccurate reporting of dollar values, differences in payee names on the voucher and the related supporting documentation, and inaccurate paying office and mailing address information. Army auditors concluded that the Army did not have (1) adequate controls over its vendor payments to ensure that payment authorizations and disbursements were accurate and that funds were obligated and available before disbursement or (2) reasonable assurance that vendor payments valued at over $1.5 million were valid or fully supported. Further, Army auditors reported that insufficient controls in high-risk areas leave the Army vulnerable to fraud, waste, and misuse and could result in duplicate payments, disbursements that are not matched to obligations, disbursements that exceed recorded obligations, or possible ADA violations. Ineffective business systems. DOD spends billions of dollars each year to acquire modern systems that it considers fundamental to achieving its business transformation goals. In February 2013, we reported that while DOD’s capability and performance relative to business systems modernization has improved, significant challenges remain. The department has not fully defined and established business systems modernization management controls, which are vital to ensuring that it can effectively and efficiently manage an undertaking with the size, complexity, and significance of its business systems modernization and minimize the associated risks. We designated this area as high risk in 1995 and since then have made about 250 recommendations aimed at strengthening DOD’s institutional approach to modernization and reducing the risk associated with key investments. While DOD has made progress toward implementing key institutional modernization management controls in response to statutory provisions and our recommendations, progress has been slow and DOD has been limited in its ability to demonstrate results. 
Further, we, the DOD IG, and military department auditors identified business system design and development weaknesses affecting funds control, such as noncompliance with DOD’s Standard Financial Information Structure (SFIS), Business System Architecture, and the U.S. Standard General Ledger (USSGL). The following examples illustrate the scope of DOD’s long-standing problems in the area based on prior GAO, DOD, and DOD IG findings. DOD is working to modernize its financial management systems and related business processes, as discussed later. Army Logistics Modernization Program (LMP). In April 2010, we reported that the Army’s LMP, which is intended to replace aging Army systems used to manage inventory and depot repair operations and support financial management and reporting, would require at least two additional deployments because problems with data quality, training, and metrics to measure success of LMP implementation had not been resolved. We recommended that the Army (1) improve testing activities to obtain reasonable assurance that the data used by LMP can support the LMP processes, (2) improve training for LMP users, and (3) establish performance metrics to enable the Army to assess whether the deployment sites are able to use LMP as intended. The Army concurred with our recommendations and noted actions under way to address them. In November 2010, we reported that the Army had implemented data audits and new testing activities to improve data accuracy, but data issues that could impede LMP functionality persisted. For example, we reported that it was unclear whether (1) the system would provide all the software functionality needed to conduct operations, (2) data maintained in the system were sufficiently accurate, and (3) the Army would achieve all the expected benefits from its investment in the system. 
We recommended that within 90 days of the beginning of its third deployment, the Army periodically report to Congress on the progress of LMP, including its progress in ensuring that the data used in LMP can support the system, timelines for the delivery of software necessary to achieve full benefits, and the costs and time frames of its mitigation strategies. DOD concurred with our recommendation stating that the Army would comply with the reporting timetable and conditions in our recommendation. While DOD concurred with our recommendation, as of our last update in November 2013, it had not yet provided any such reports to Congress. In May 2012, the DOD IG reported that after spending about $1.8 billion, Army managers had not accomplished the reengineering needed to integrate the LMP procure-to-pay functions to comply with DOD Business Enterprise Architecture requirements and correct material weaknesses. According to DOD IG auditors, as of August 31, 2011, LMP activities reported more than $10.6 billion in abnormal obligated balances. In addition, the DOD IG reported that (1) LMP did not record the actual invoice numbers from the vendors, (2) LMP incorrectly recorded the interface date with the pay entitlement system instead of the dates for invoice receipt and receipt of goods and services, and (3) its invoice and receiving report transaction screens did not identify the corresponding disbursement voucher information. Because more than one disbursement generally liquidated an obligation, LMP needed to link the various invoices and receiving reports to the corresponding disbursement vouchers. The absence of actual invoice numbers, accurate dates, and disbursement voucher information prevented Army activities from using LMP to detect duplicate payments and validate that payments complied with the Prompt Payment Act. 
According to DOD’s November 2012 FIAR Plan Status Report, the Army’s abnormal obligated balances decreased during fiscal year 2012, but disbursements that could not be matched to a recorded obligation increased. Navy ERP. In February 2012, the DOD IG reported that the Navy approved deployment of its ERP general ledger system without ensuring that it complied with DOD’s SFIS and the USSGL. As a result, the Navy spent $870 million to develop and implement a system that may not produce accurate and reliable financial information. The DOD IG reported that this is a significant weakness because when fully deployed, the system is intended to manage 54 percent of the Navy’s total obligational authority, which was $155.9 billion for fiscal year 2013. Air Force DEAMS. In September 2012, the DOD IG reported that the Air Force’s DEAMS lacked critical functional capabilities needed to generate accurate and reliable financial management information. According to the DOD IG, this weakness occurred because DEAMS managers did not maintain an adequate general ledger chart of accounts, and because DOD and Air Force management initially decided not to report financial data directly to the Defense Departmental Reporting System (DDRS) for financial reporting purposes until the fourth quarter of fiscal year 2016. Further, we recently reported that the Air Force did not meet best practices in developing a schedule for the DEAMS program. This raises questions about the credibility of the deadline for acquiring and implementing DEAMS to provide needed functionality for financial improvement and audit readiness. We recommended that the Air Force update the cost estimate as necessary after implementing our prior recommendation to adopt scheduling best practices. DOD concurred with our recommendation. 
Fundamental weaknesses in DOD funds control, including related business systems weaknesses, significantly impair DOD’s ability to ensure the (1) proper use of resources, (2) reliability of reports on the results of operations, and (3) success of its financial audit readiness efforts. For example, billions of dollars in DOD-reported improper payments and continuing reports of millions of dollars in ADA violations underscore DOD’s inability to assure proper use and accountability of resources provided to carry out its mission and operations. Further, DOD’s transaction control weaknesses, including unsupported adjustments (plugs) to reconcile DOD fund balances with the Department of the Treasury’s (Treasury) records, and suspense transactions that cannot be identified to a fund account impair accurate accounting for programs and results of operations. As a result, quarterly and annual financial statements, reports on budget execution, and reports on the results of operations, which could have a material effect on budget, spending, and other management decisions as well as determinations of agency compliance with laws and regulations, are unreliable. Military auditors also reported that these weaknesses leave their departments at risk of fraud and improper transactions. Additionally, funds control weaknesses continue to hinder DOD’s ability to achieve its September 2014 SBR audit readiness goal and raise questions about DOD’s ability to achieve audit readiness on a full set of financial statements by the end of fiscal year 2017. In response to component difficulties in preparing for a full SBR audit, the November 2012 FIAR Plan Status Report and the March 2013 FIAR Guidance included a revision to narrow the scope of initial audits to only current- year budget activity and expenditures on a Schedule of Budgetary Activity. 
Under this approach, beginning in fiscal year 2015, reporting entities are to undergo an examination of their Schedules of Budgetary Activity reflecting the amount of SBR balances and associated activity related only to funding approved on or after October 1, 2014. As a result, the Schedules of Budgetary Activity will exclude unobligated and unexpended amounts carried over from prior years' funding as well as information on the status and use of such funding in subsequent years (e.g., obligations incurred, outlays). These amounts will remain unaudited. Over the ensuing years, as the unaudited portion of SBR balances and activity related to this funding declines, the audited portion is expected to increase. However, the NDAA for Fiscal Year 2010, as amended by the NDAA for Fiscal Year 2013, requires that the FIAR Plan describe specific actions to be taken and the costs associated with ensuring that DOD's SBR is validated as ready for audit by not later than September 30, 2014. Because the audit of the Schedule of Budgetary Activity is an incremental step building toward an audit-ready SBR, the FIAR Plan does not presently comply with this requirement. Furthermore, all material amounts reported on the SBR will need to be auditable in order to achieve the mandated goal of full financial statement audit readiness by September 30, 2017. It is not clear how this can be accomplished if activity related to funding provided prior to October 1, 2014, remains unaudited. DOD does not know the accuracy or validity of billions of dollars it spends annually to carry out its mission and operations. For example, our work and the work of the DOD IG have shown that DOD does not have a valid methodology for estimating its annual improper payments, and it does not have assurance that its obligations and expenditures comply with applicable law, including the ADA. As a result, management decisions are being made using incomplete and unreliable data.
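The narrowed Schedule of Budgetary Activity scope described above amounts to partitioning budgetary activity by the date its funding was approved: activity tied to funding approved on or after October 1, 2014, falls within the audit scope, while activity tied to earlier funding remains unaudited. The sketch below illustrates that partition with hypothetical transactions; none of the records or amounts are drawn from DOD data.

```python
# Sketch of the Schedule of Budgetary Activity scope: audit only activity
# tied to funding approved on or after October 1, 2014; earlier-funded
# activity (including carryover balances) remains unaudited.
# Transactions below are hypothetical.
from datetime import date

SCOPE_START = date(2014, 10, 1)

transactions = [
    {"id": "T1", "funding_approved": date(2013, 5, 1),  "amount": 400},
    {"id": "T2", "funding_approved": date(2014, 10, 1), "amount": 900},
    {"id": "T3", "funding_approved": date(2015, 2, 14), "amount": 300},
]

in_scope = [t for t in transactions if t["funding_approved"] >= SCOPE_START]
out_of_scope = [t for t in transactions if t["funding_approved"] < SCOPE_START]

print(f"Audited (Schedule of Budgetary Activity): {sum(t['amount'] for t in in_scope)}")
print(f"Unaudited prior-year funding activity: {sum(t['amount'] for t in out_of_scope)}")
```

As prior-year balances liquidate over time, the out-of-scope share shrinks, which is the incremental path toward a fully auditable SBR that the report describes.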
The following examples from our past work and the past work of others illustrate the effect of these weaknesses. We and the DOD IG have reported weaknesses in DOD’s payment controls, including weaknesses in its process for assessing the risk of improper payments and reporting estimates of the magnitude of improper payments. In September 2011, we testified that in its Agency Financial Report for fiscal year 2010, DOD reported that it made an estimated $1 billion in improper payments under five of its programs. However, this estimate was incomplete because DOD did not include estimates from its commercial payment programs, which account for approximately one-third of the value of DOD-reported payments. In May 2013, we reported that DOD’s fiscal year 2011 Agency Financial Report included commercial payment programs in its improper payment estimates, totaling over $1.1 billion. However, we found that DOD’s improper payment estimates were neither reliable nor statistically valid. We also found that DOD did not conduct a risk assessment for fiscal year 2011 in accordance with the requirements of the Improper Payments Elimination and Recovery Act of 2010 (IPERA). Further, although DOD had a corrective action plan for fiscal year 2011 to address problems with the reliability of its improper payment estimates, the plan did not include the required risk assessment. We concluded that DOD’s lack of a risk assessment made it difficult for the department to fully identify underlying reasons or root causes of improper payments in order to develop a comprehensive, effective corrective action plan. Additionally, DOD did not conduct recovery audits nor did it determine that such audits would not be cost effective, as required by IPERA. Finally, the department did not have procedures to ensure that improper payment and recovery audit reporting in its fiscal year 2011 Agency Financial Report was complete, accurate, and compliant. 
DOD has taken some actions to reduce improper payments, such as reporting a statistical estimate for DFAS commercial payments and issuing revised FMR guidance on improper payments and recovery audits. Further, in addendum A to DOD’s Fiscal Year 2013 Agency Financial Report, the DOD IG reported that the department had taken many corrective actions to improve identification of its improper payments; however, more work is needed to improve controls over payments processed throughout the department. For example, the DOD IG reported that improper payments are often the result of unreliable data, a lack of adequate internal controls, or both, which increases the likelihood of fraud. As a result, DOD continues to lack assurance that billions of dollars of annual payments are disbursed correctly. The DOD IG also reported that the department’s inadequate financial systems and controls hamper its ability to make proper payments, and that the pace of operations and volume of department spending create additional risk of improper payments. These challenges have hindered the department’s ability to detect and recover improper payments. As stated in our May 2013 report, until the department takes action to correct the deficiencies in underlying transaction controls and deficiencies we have found in the past related to identifying, estimating, reducing, recovering, and reporting improper payments and thereby fulfills legislative requirements and implements related guidance, DOD remains at risk of continuing to make improper payments and wasting taxpayer funds. We made 10 recommendations to improve DOD’s processes to identify, estimate, reduce, recover, and report on improper payments. DOD concurred with 9 and partially concurred with 1 of the recommendations and described its plans to address them. Continuing reports of ADA violations underscore DOD’s inability to assure that obligations and expenditures are properly recorded and do not exceed statutory levels of control. 
The ADA requires, among other things, that no officer or employee of DOD incur obligations or make expenditures in excess of the amounts made available by appropriation, by apportionment, or by further subdivision according to the agency’s funds control regulations. According to copies of ADA violation reports sent to GAO, DOD reported 75 ADA violations from fiscal year 2007 through fiscal year 2012, totaling nearly $1.1 billion. We received reports of two additional ADA violations in 2013 totaling $148.6 million. However, the number of violations and dollar amounts reported may not be complete because of weaknesses in DOD’s funds control and monitoring processes that may not have allowed all violations to be identified or reported. For example, DOD IG reports issued in fiscal years 2007 through 2012 (see fig. 3) identified $5.5 billion in potential ADA violations that required further investigation to determine whether an ADA violation had, in fact, occurred, or if adjustments could be made to avoid a violation. Further, while DOD’s FMR limits the time from identification to reporting of ADA violations to 15 months, our analysis identified several investigations of potential ADA violations that took several additional months to several years before actual violations were determined and reported. For example, as of September 30, 2013, 3 of the DOD IG-reported potential violations totaling $713.1 million could not be fully corrected and have resulted in $108.8 million in actual, reported ADA violations. To the extent that ADA violations are not identified, corrected, and reported, DOD management decisions are being made based on incomplete and unreliable data.
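To illustrate the layered control the ADA describes, the following sketch checks a proposed obligation against each level of budgetary control in turn. The account names, dollar amounts, and function names are hypothetical illustrations, not DOD's actual funds control system.

```python
# Hypothetical sketch of a layered funds-control check: an obligation
# may not exceed the amount remaining at any level of budgetary control
# (appropriation, apportionment, or further subdivision). All names and
# figures are illustrative only.

def remaining(limit, obligated):
    """Unobligated balance at one level of control."""
    return limit - obligated

def check_obligation(amount, levels):
    """levels: list of (name, limit, already_obligated), ordered from
    the appropriation down to the finest administrative subdivision.
    Returns the names of any levels the new obligation would breach."""
    return [name for name, limit, obligated in levels
            if amount > remaining(limit, obligated)]

levels = [
    ("appropriation", 1_000_000, 900_000),   # $100,000 remains
    ("apportionment",   250_000, 230_000),   # $20,000 remains
    ("allotment",        50_000,  45_000),   # $5,000 remains
]

breaches = check_obligation(10_000, levels)
# A $10,000 obligation fits within the appropriation and apportionment
# but would exceed the allotment, so it must be rejected before recording.
```

The point of the layered check is that an obligation can be well within the appropriation yet still breach a finer subdivision, which is the level at which many administrative violations arise.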
Our analysis of DOD’s reports of ADA violations determined that the increases in violations reported in fiscal years 2011 and 2012 relate primarily to the military services’ use of “bulk” (estimated) amounts used to record obligation transactions for permanent change-of-station moves and significant increases in bonuses paid from their respective Military Personnel appropriations. These violations are specific to the ADA prohibition on federal officers and employees authorizing or making obligations or expenditures in excess of available amounts. The use of estimated obligations requires periodic monitoring and reconciliation of estimated obligations to the related disbursement transactions and the recording of appropriate adjustments to the estimated obligations based on actual disbursement amounts; however, these ADA violations occurred largely because the military services did not have adequate procedures for monitoring and reconciling disbursements to bulk obligations. During fiscal year 2011, the Navy and the Air Force reported violations related to permanent change-of-station moves totaling $183 million and $87.5 million, respectively. In fiscal year 2012, after an extended investigation, the Army reported a related violation of $155 million. Additionally, the Army’s large spike in fiscal year 2011 violations related primarily to $100.2 million in transportation services that were also recorded using bulk obligations. Our recent coordination with the DOD IG and the military departments on the status of corrective actions indicated that the military departments continue to be at risk of ADA violations related to using estimated bulk obligations because they have not yet corrected process weaknesses that prevent them from recording transaction-level obligations for these activities, and because estimating methodologies are not automatically adjusted for changes in fuel costs and increases in other costs, such as insurance.
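The monitoring and reconciliation step described above can be illustrated with a simple sketch that compares a bulk (estimated) obligation with the disbursements actually recorded against it. All figures and names are hypothetical, not drawn from the violations discussed.

```python
# Illustrative sketch (not DOD's actual procedure) of reconciling a
# bulk (estimated) obligation to the disbursements recorded against it
# and computing the adjustment needed to true up the estimate.

def reconcile_bulk_obligation(estimate, disbursements):
    actual = sum(disbursements)
    adjustment = actual - estimate  # positive: the estimate was too low
    return actual, adjustment

# A bulk obligation recorded for a batch of permanent change-of-station
# moves, followed by the disbursements that actually posted (figures
# are hypothetical).
estimate = 500_000
disbursements = [120_000, 210_000, 190_000, 30_000]

actual, adjustment = reconcile_bulk_obligation(estimate, disbursements)
# actual disbursements total 550,000, so the recorded obligation must
# be adjusted upward by 50,000. Without this periodic step,
# disbursements can silently exceed the recorded obligation -- the
# pattern behind the violations described above.
```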
DOD has stated that its major financial decisions are based on budgetary data (e.g., the status of funds received, obligated, and expended). The department’s ability to improve its budgetary accounting has historically been hindered by its reliance on fundamentally flawed financial management systems and processes and transaction control weaknesses. In its November 2013 Agency Financial Report, DOD self-reported 16 material weaknesses in financial reporting, noting that it has no assurance of the effectiveness of the related controls. These weaknesses affect reporting on budgetary transactions and balances, including budget authority, fund balance, outlays, and categories of transactions, such as civilian pay, military pay, and contract payments, among other areas. As a result, reports on budget execution and reports on the results of operations that could have a material effect on budget, spending, and other management decisions are unreliable. The following examples illustrate the effect of transaction control and system weaknesses on DOD’s operational and budgetary reporting.

DOD continues to make billions of dollars of unsupported, forced adjustments, or “plugs,” to reconcile its fund balances with Treasury’s records. In the federal government, an agency’s Fund Balance with Treasury (FBWT) accounts are similar in concept to corporate bank accounts. The difference is that instead of a cash balance, FBWT represents unexpended spending authority in appropriation accounts. Similar to bank accounts, the funds in DOD’s appropriation accounts must be reduced or increased as the department spends money or receives collections that it is authorized to retain for its own use. For fiscal year 2012, DOD agencies reported making $9.2 billion in unsupported reconciling adjustments to agree their fund balances with Treasury’s records.
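The reconciliation that produces these unsupported adjustments can be sketched in simplified form: after documented items (such as in-transit or timing differences) are applied, whatever residual remains between the agency's balance and Treasury's balance is the unsupported "plug." The balances, documented items, and function shown are hypothetical, not DOD's actual FBWT process.

```python
# Minimal FBWT reconciliation sketch with hypothetical figures. When
# documented reconciling items do not fully explain the gap between the
# agency's fund balance and Treasury's balance, the residual is the
# unsupported "plug" recorded to force agreement.

def fbwt_reconciliation(agency_balance, treasury_balance, explained_items):
    difference = treasury_balance - agency_balance
    explained = sum(explained_items)
    unsupported_plug = difference - explained
    return difference, unsupported_plug

agency_balance   = 1_000_000
treasury_balance =   992_000
explained_items  = [-3_000, -1_500]   # documented timing differences

difference, plug = fbwt_reconciliation(agency_balance, treasury_balance,
                                       explained_items)
# The total difference is -8,000, of which -4,500 is documented; the
# remaining -3,500 is what would be recorded as an unsupported
# reconciling adjustment.
```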
As shown in table 1, DOD’s unsupported reconciling adjustments to agree its fund balances with Treasury’s records grew to $9.6 billion in fiscal year 2013.

Over the years, DOD has recorded billions of dollars of disbursement and collection transactions in suspense accounts because the proper appropriation accounts could not be identified and charged, generally because of a coding error. Accordingly, Treasury does not accept DOD reporting of suspense transactions, and suspense transactions are not included in DOD component FBWT reconciliations. It is important that DOD accurately and promptly charge transactions to appropriation accounts since these accounts provide the department with legal authority to incur and pay obligations for goods and services. Table 2 shows DOD-reported suspense balances for fiscal years 2010 through 2012.

Reported suspense account balances could be understated because of DOD’s process for complying with Treasury’s rule for clearing suspense amounts within 60 days. For example, as we previously reported, during our audit of the Navy and Marine Corps FBWT reconciliation processes, we observed the transfer of unresolved suspense disbursement transactions to canceled accounts and the transfer of unresolved collection transactions to Miscellaneous Receipts of the Treasury without supporting documentation to show the adjustments were proper. We also identified subsequent accounting entries that moved these transactions back to the suspense accounts. When we asked DFAS personnel about this pattern, they explained that they transfer transactions from suspense to these accounts to comply with Treasury’s 60-day rule for clearing (i.e., resolving) them. DFAS personnel told us they later transfer the transactions back into suspense to restart the clock on the 60-day period for resolving them.
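The effect of restarting the 60-day clock can be illustrated with a minimal aging sketch: an item's recorded entry date determines whether it appears overdue, so re-entering a transaction with a new date makes it appear compliant. The transactions, dates, and amounts are hypothetical.

```python
# Hedged sketch of aging suspense-account transactions against the
# 60-day clearing rule described above. Transferring an item out of
# suspense and back in resets its recorded entry date, which is why
# reported balances can understate how long items truly sit unresolved.
# All transactions and dates are hypothetical.

from datetime import date

def overdue_items(transactions, as_of, limit_days=60):
    """Return suspense transactions older than the clearing limit."""
    return [t for t in transactions
            if (as_of - t["entered"]).days > limit_days]

transactions = [
    {"id": "A1", "amount": 12_500, "entered": date(2012, 6, 1)},
    {"id": "B2", "amount":  4_200, "entered": date(2012, 8, 20)},
]

late = overdue_items(transactions, as_of=date(2012, 9, 1))
# A1 has sat in suspense 92 days and is overdue; B2 (12 days) is not.
# Re-entering A1 with a September date would make it appear compliant
# even though the underlying transaction remains unresolved.
```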
Consequently, it is not possible to determine the number and significance of these unsupported transactions, and DOD does not know the balance of budget authority that is truly available in its appropriation and fund accounts. Further, this practice runs contrary to Treasury’s requirement, which is to fully resolve these transactions within a reasonable time frame.

Funds control weaknesses continue to hinder DOD’s ability to achieve successful audits of its financial statements and raise questions about its ability to achieve the department’s goals of validating SBR audit readiness by the end of fiscal year 2014 and undergoing an audit on a full set of financial statements for fiscal year 2018. The DOD Comptroller represented to the DOD IG that DOD’s Fiscal Year 2012 and Fiscal Year 2013 Consolidated Financial Statements did not substantially conform to U.S. generally accepted accounting principles (GAAP) and that DOD financial management and business systems that report financial data were unable to adequately support material amounts on the financial statements as of September 30, 2012, and September 30, 2013. Accordingly, the DOD IG issued disclaimers of opinion in its efforts to audit DOD’s consolidated financial statements. DOD’s FIAR Plan Status Reports continue to identify unqualified or inexperienced personnel and information system control weaknesses as significant risks to audit readiness. In addition, military department and DOD service-provider efforts have not yet resolved continuing transaction control weaknesses related to proper recording, adequate supporting documentation, and accurate and timely reporting in order to correct material weaknesses that impede DOD’s audit readiness efforts. As a result, DOD has not yet asserted audit readiness for most military department SBR assessable units.
The following examples illustrate the effect of additional funds control and system weaknesses on DOD’s financial audit readiness efforts that we and others have identified in our previous work. In December 2012, we reported on the status of DOD efforts to address audit backlogs needed to close certain aging contracts and ensure that DOD deobligates and uses unspent funds before they are canceled. Contract closeout backlogs also can contribute to overstatements in reported contract obligations because of the lack of support for obligated amounts that should have been, but were not, deobligated. For fiscal years 2007 through 2011, DOD reported obligations of more than $1.8 trillion on contracts for acquiring goods and services needed to support its mission. As of the end of fiscal year 2011, DOD reported it had a large backlog of contracts—numbering in the hundreds of thousands—that had not been closed within the time frames required by federal regulations. The Defense Contract Audit Agency (DCAA) is addressing DOD’s contract closeout backlog through an initiative to reduce the backlog of incurred cost audits, which will ultimately allow the Defense Contract Management Agency (DCMA) and others to make final adjustments to obligated balances on completed contracts and close the contracts. In addition, while DCMA is attempting to accelerate efforts to close contracts that are physically complete, the success of its efforts depends on DCAA’s ability to complete annual incurred cost audits in a timely manner and the reliability of information on contract statuses.
We also reported that at the local level, seven of the nine contracting offices we spoke with collected some information about their overage contracts, such as the total number of contracts in the backlog and the type of contracts, but the offices generally were unable to provide us with detailed information as to where the contracts were in the closeout process, such as the number awaiting a DCAA incurred cost audit. We recommended that DCAA develop a plan to assess its incurred cost audit initiative and that DCMA improve data on overage contracts. We also recommended that the military departments develop contract closeout data and establish performance measures. DOD concurred with the recommendations and identified ongoing and planned actions to address them.

In a series of reports, the DOD IG reported that DOD managers did not take the steps needed to ensure that four component ERPs (GFEBS, LMP, Navy ERP, and DEAMS) had the capability to record and track transaction data. Instead of recording transactions in the ERPs, such as budget authority, obligations, collections, and disbursements (at the time of the related events), DOD managers relied on DFAS to record journal vouchers (adjusting entries) in DDRS and used other offline electronic processes, such as spreadsheets, to record accounting entries in the four ERPs. According to the DOD IG, because most funds control accounting is not being managed in the accounting and business information systems, DOD continues to build its budget execution reports and SBRs using budgetary status data that cannot be traced to actual transaction data within any official accounting system. This weakness impairs the reliability of DOD’s budgetary reports, including periodic reports to Congress. In addition, the DOD IG reported that the lack of effective oversight of the development and implementation of system access templates left LMP data at risk of unauthorized and fraudulent use.
The DOD IG also reported a lack of support for feeder system transactions imported into the Defense Departmental Reporting System-Budgetary that, if not corrected, will hinder DOD’s audit readiness efforts. The DOD IG is continuing to monitor DOD’s actions to resolve these system weaknesses.

DOD is addressing corrective actions on funds control weaknesses through audit readiness efforts under the FIAR Plan and related FIAR Guidance, through other efforts mainly related to improving training and business system controls, and through efforts to address findings in auditor and ADA reports. Many of these actions have not been fully implemented, so their effectiveness has yet to be determined. Because several critical DOD-wide corrective actions to improve financial management and address open audit recommendations are targeted for completion in 2017, funds control issues are likely to persist during this time. The following discussion highlights DOD actions under way to address funds control weaknesses related to (1) training, supervision, and management oversight; (2) transaction controls; and (3) business systems.

Training, supervision, and management oversight. A key principle for effective workforce planning is that an agency needs to define the critical skills and competencies that it will require in the future to meet its strategic program goals. Once an agency has identified critical skills and competencies, it can develop strategies to address gaps in the number of personnel, needed skills and competencies, and deployment of the workforce. FIAR Plan Status Reports continue to identify the need for knowledgeable and qualified personnel as critical to achieving DOD’s financial improvement and audit readiness goals. Currently, FIAR training, which focuses on audit readiness efforts, and military department financial management training are not tied to mission-critical financial management competencies, staff experience and proficiency levels, or identified skill gaps.
DOD is addressing financial management workforce competencies and training through complementary efforts by (1) the Office of the Under Secretary of Defense for Personnel and Readiness (DOD Personnel and Readiness) to develop a strategic civilian workforce plan that includes financial management, pursuant to requirements in the NDAA for Fiscal Year 2010, as amended, and (2) the DOD Comptroller to develop and implement a Financial Management Certification Program, pursuant to requirements in the NDAA for Fiscal Year 2012. Financial management personnel are expected to possess the competencies that are relevant to and needed for their assigned positions. These competencies include fundamentals of accounting, accounting analysis, budget execution, financial reporting, and audit planning and management, among others. DOD Personnel and Readiness is currently working on a department-wide competency assessment tool that will be used by the department, including the financial management functional community, to capture information related to competencies, such as proficiency level, importance, and criticality, and to identify any gaps in support of the Comptroller’s financial management certification program. For example, as of March 2012, the Office of Personnel and Readiness, Strategic Human Capital Planning Program Office, had identified 32 mission-critical occupations, including four financial management occupations: accounting, auditing, budget analysis, and financial administration. The Program Office currently assesses skills or staffing gaps, which relate to unfilled positions by occupation, by analyzing the differences between the number of positions by occupational series that DOD was authorized to fill and the number of occupational positions that are currently filled. DOD is working toward completing its gap assessments by 2015.
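The Program Office's gap analysis, as described above, amounts to computing the difference between authorized and filled positions for each occupation. The sketch below illustrates that calculation; the occupation counts are hypothetical, not DOD's actual staffing figures.

```python
# Illustrative sketch of the staffing-gap calculation: the gap for each
# mission-critical occupation is the difference between authorized and
# filled positions. All counts are hypothetical.

def staffing_gaps(authorized, filled):
    """Map each occupation to its count of unfilled positions."""
    return {occ: authorized[occ] - filled.get(occ, 0) for occ in authorized}

authorized = {"accounting": 400, "auditing": 150,
              "budget analysis": 300, "financial administration": 220}
filled     = {"accounting": 370, "auditing": 150,
              "budget analysis": 275, "financial administration": 210}

gaps = staffing_gaps(authorized, filled)
# Occupations with positive gaps (here: accounting 30, budget analysis
# 25, financial administration 10) are the unfilled-position gaps the
# Program Office analyzes; auditing shows no gap.
```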
In support of the new certification program, the DOD Comptroller has identified 23 mission-critical financial management competencies and five levels of proficiency for each competency. The new Financial Manager Certification Program will require training in three areas: (1) selected financial management competencies at the proficiency level commensurate with the required certification level for that position; (2) leadership competencies as defined by the DOD Leadership Continuum; and (3) other technical training in key areas, such as how to respond to an audit, fiscal law, and ethics. Training on fiscal law would cover funds control and ADA requirements. The certification program includes three levels of certification covering staff, supervisors, and management, with requirements for initial hours of training, continuing education, and experience. (See table 3.) While DOD’s Certification Program requirements do not specify that the recommendations for attaining bachelor’s and master’s degrees are specific to degrees in financial management-related areas, the Certification Program is designed to accept college courses in financial management as fulfilling certain certification program course requirements. If effectively implemented, these three levels of certification could help address training, supervision, and management oversight weaknesses. Employees will have 2 years to complete courses, training, and professional development requirements for the certification level required for their assigned positions. DOD’s Financial Manager Certification Program received National Association of State Boards of Accountancy (NASBA) approval in March 2013. The DOD Comptroller initiated the pilot for the certification program in July 2012 and completed it at the end of March 2013. Phased implementation began in June 2013, and the current target date for full implementation is the end of fiscal year 2014.
The Certification Program is to be mandatory for DOD’s approximately 54,000 civilian and military financial management personnel. Effective implementation of the certification program is critical to ensure that financial management personnel obtain the needed skills to make effective improvements in financial management, including improved funds control and audit readiness as well as appropriate supervision and management oversight. Transaction controls. DOD officials are addressing transaction control weaknesses under the department’s FIAR Plan efforts. In December 2011, the DOD Comptroller updated the FIAR Guidance to identify key SBR-related transaction control objectives within military department assessable units related to the funds control and budget execution processes. These key transaction controls include proper authorization and recording, adequate supporting documentation, and accurate reporting of obligation and disbursement transactions. DOD components are testing these key transaction controls for each assessable unit, including civilian pay, military pay, contract pay, and net outlays as reflected in FBWT, as a basis for determining whether they can achieve audit readiness for the SBR by September 2014. Once the military departments deem their transaction support and other controls to be effective for a particular assessable unit of the SBR, they will assert audit readiness for that assessable unit and request an audit by an independent public accountant to validate their audit readiness assertion. Figure 4 shows the SBR audit readiness milestone dates for the military departments and other defense agencies as of the November 2013 FIAR Plan Status Report. 
While DOD has made progress toward financial audit readiness, milestone dates for the Navy have slipped and SBR milestone dates for the Army and the defense agencies have been compressed, making it questionable that corrective actions for these DOD components will be completed by September 2014 for all assessable units. Further, the Air Force has revised its milestone dates for achieving SBR audit readiness to the third quarter of fiscal year 2015. With a reported $187.8 billion in fiscal year 2013 General Fund budgetary resources, the Air Force is material to DOD’s SBR, and if the Air Force cannot meet DOD’s September 2014 SBR audit readiness goal, DOD will not be able to meet this goal. This, in turn, raises concerns about DOD’s ability to undergo an audit on a full set of financial statements for fiscal year 2018.

DOD uses service providers to improve efficiency and standardize business operations in various functional areas, including accounting, personnel and payroll, logistics, contracting, and system operations and hosting support. DOD service providers and their business systems are fundamental to reliable accounting and reporting and financial audit readiness. The FIAR Guidance requires DOD service providers, such as DFAS, the Defense Logistics Agency, the Defense Information Systems Agency, and DCMA, to obtain an examination of their operating controls, including system controls, under Statements on Standards for Attestation Engagements (SSAE) No. 16, when their controls are likely to be relevant to reporting entities’ internal controls over financial reporting. DOD service providers plan to use the results of their SSAE No. 16 examinations as a basis for improving their operating processes and controls. Figure 5 shows the service providers and the operating systems supporting business functions that are targeted for SSAE No. 16 examinations under FIAR and the related milestone dates for audit readiness assertions and their validation.
In August 2013, we reported that DOD did not have an effective process for identifying audit-readiness risks, including risks associated with its reliance on service providers for much of its components’ financial data, and that it needed better department-wide documentation retention policies. Effective service-provider controls are critical to ensuring improvements in DOD funds control. With regard to corrective actions related to ADA violations associated with military personnel appropriations, significant process improvements related to proper recording of transactions are needed, and according to the military departments, the time frame for completing corrective actions depends on implementation of their respective integrated personnel and payroll systems for military payroll (referred to by DOD as IPPS), expected in late 2016 and 2017.

Business systems. In February 2012, we reported that DOD, in an attempt to modernize and develop an effective standardized financial management process throughout the department, had initiated various efforts to implement new ERP financial management systems and associated business processes. We further reported that based upon data provided by DOD, 6 of the 10 ERPs DOD had identified as critical to transforming its business operations had experienced schedule delays ranging from 2 to 12 years, and 5 had incurred cost increases totaling an estimated $6.9 billion. In its Summary of Challenges discussed in addendum A to DOD’s fiscal year 2013 Agency Financial Report, the DOD IG reported that timely and effective implementation of the ERPs is critical for DOD to achieve its financial improvement efforts and audit readiness goals. However, the DOD IG reported that not all of the ERP systems will be implemented by the department’s September 2014 goal for validating audit readiness for the SBR or its goal for undergoing an audit on a full set of financial statements for fiscal year 2018.
Moreover, without fully deployed ERPs, the department will be challenged to produce reliable financial data and auditable financial statements without resorting to extreme efforts, such as data calls or manual workarounds, to provide financial data on a recurring basis. The DOD IG also reported that the department has not reengineered its business processes to the extent necessary; instead, it has often customized commercial ERPs to accommodate existing processes, creating the need for system interfaces and weakening controls built into the ERP systems. In the March 2013 FIAR Guidance, DOD reported that material internal control weaknesses were classified in DOD’s Agency Financial Report by the financial statement line item or type of activity affected by the weakness. DOD’s fiscal year 2013 Agency Financial Report lists 16 material weaknesses over financial reporting and relates these weaknesses to ERP systems and end-to-end business processes. DOD reported one overall material weakness related to financial systems. Together, these 17 material weaknesses impact military pay, civilian pay, FBWT, contracts, and military supply requisitions. We and the DOD IG have reported that DOD component ERPs lack the functionality needed to support reliable financial reporting, including accurate and complete USSGL and DOD-wide SFIS information and data requirements and the ability to record budgetary data at the transaction level. DOD has stated that several of the department’s ERPs have been or will be implemented to support the 2018 financial statement audit goal. 
However, in its summary of management and performance challenges included in DOD’s fiscal year 2012 Agency Financial Report, the DOD IG stated that because of schedule delays ranging up to 13 years, DOD will continue using outdated legacy systems and poorly developed and implemented ERP systems, increasing the risks that (1) the SBR will not be audit ready by September 30, 2014, and (2) DOD may not be able to produce reliable financial data and auditable financial statements without resorting to “heroic efforts, such as data calls and manual workarounds.” In DOD’s fiscal year 2013 Agency Financial Report, the DOD IG reiterated this concern and noted that the department has not reengineered its business processes to the extent necessary, stating that instead it has often customized commercial ERPs to accommodate existing processes. In addition, the FIAR Plan requires the military departments and DOD service providers to use GAO’s Federal Information System Controls Audit Manual (FISCAM) reviews to test business system general and application controls for material systems, including general ledger accounting systems and selected feeder systems, as part of their audit readiness efforts. FISCAM provides a methodology for performing information system control audits of federal and other governmental entities in accordance with professional standards. FISCAM focuses on evaluating the effectiveness of general and application controls. The November 2013 FIAR Guidance provides a detailed description of DOD’s audit readiness requirements related to financial system controls. The Guidance states that the FIAR Directorate has identified the FISCAM control activities and techniques needed to address the key internal controls over financial management reporting risk and includes a link to them. 
The Guidance further states that DOD reporting entities have ultimate responsibility for information technology controls for those systems through which their transactions flow and will need to communicate and coordinate audit readiness efforts with service providers. The shared understanding between the reporting entity and the service provider is required to be documented in a service-level agreement or memorandum of understanding. According to DOD’s November 2013 FIAR Plan Status Report, the military departments plan to complete their FISCAM general and application control tests as follows:

The Army plans to achieve relevant FISCAM general and application-level control objectives for material systems supporting its SBR assessable units, including GFEBS, by June 2014.

The Navy planned to complete relevant FISCAM general and application control objectives for Navy ERP in February 2014 and plans to achieve a relevant review or perform a self-assessment of material legacy systems and selected feeder systems by September 2014.

The Air Force plans to achieve relevant FISCAM general and application control objectives for material systems supporting its SBR between November 2012 and June 2014. Air Force audit readiness efforts will rely on manual controls and legacy system improvements because its ERP general ledger system—DEAMS—will not be fully deployed until sometime after 2017.

DOD has committed significant resources to improving funds controls for achieving sound financial management operations and audit readiness.
While DOD expects that these improvements, once realized, will also strengthen the department’s controls in support of proper use of resources, reliable reporting on the results of operations and budget execution, and financial audit readiness, the department continues to face pervasive, long-standing internal control and business system challenges that not only impair its control over funds entrusted to it, but also pose continuing challenges to achieving reliable financial reporting. DOD leadership remains committed to achieving financial accountability and reliable information for day-to-day management decision making as well as financial audit readiness. However, corrective actions are not expected to be completed for several years on long-standing funds control weaknesses related to (1) training, supervision, and management oversight; (2) proper authorization, recording, documenting, and reporting of budgetary transactions; and (3) business systems controls. As a result, these weaknesses will continue to adversely affect DOD’s ability to achieve its goals for effective funds controls, including reductions in ADA violations, financial accountability, and reliable financial reporting. In addition, to the extent that DOD and its components continue to rely on data calls or manual work-arounds to achieve auditability of the SBR and other financial statements, it is unlikely that DOD will be able to produce consistent, reliable, and sustainable financial information for day-to-day decision making.

We received written comments on a draft of this report from DOD’s Deputy Chief Financial Officer (CFO) on April 14, 2014, stating that the department appreciates our review of past reports as the identified deficiencies have informed DOD’s current corrective actions.
The Deputy CFO expressed the department’s commitment to building a stronger business environment with regard to people, processes, and systems and noted progress in each of the three weakness areas discussed in this report, including (1) enrollment of 22,300 financial managers in the new DOD Financial Management Certification Program; (2) audit readiness assertions within several organizations, supported in part by transaction control testing; and (3) ongoing efforts to review financial and financial feeder systems for data reliability. Effective implementation of outstanding recommendations from past reports will better position the department to minimize the occurrence of ADA violations. DOD’s comments are reprinted in appendix III. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Armed Services, the Senate Committee on Appropriations, the House Committee on Oversight and Government Reform, the House Committee on Armed Services, and the House Committee on Appropriations. We also are sending copies to the Secretary of Defense; the Under Secretary of Defense (Acquisition, Technology and Logistics); the Under Secretary of Defense (Personnel and Readiness); the Under Secretary of Defense (Comptroller) and Chief Financial Officer; the Deputy Chief Financial Officer; the Director for Financial Improvement and Audit Readiness; the FIAR Governance Board; the Assistant Secretaries (Financial Management and Comptroller) of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director of the Defense Finance and Accounting Service; the Director of the Office of Management and Budget; and other interested parties. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV.

To determine the extent of long-standing funds control weaknesses, we analyzed 333 audit and financial reports on the Department of Defense’s (DOD) financial management operations issued over the last 7 years—including 190 DOD audit reports, 30 GAO reports, 36 DOD financial reports, and 77 DOD reports of Antideficiency Act (ADA) violations provided to GAO—and identified over 1,000 funds control weaknesses. Our reports and DOD Inspector General (IG) reports covered fiscal years 2007 through 2012, and the military department audit reports covered fiscal years 2010 through the first half of fiscal year 2012. We assessed the DOD IG’s audit quality assurance procedures for assuring the reliability of data and findings presented in auditor reports. We also reviewed the military departments’ January 2012 Annual Evaluations of Funds Control Processes and Processing of ADA Violations and their Annual Financial Reports and Statements of Assurance for fiscal years 2011 and 2012. In addition, we reviewed all DOD reports of ADA violations sent to GAO in fiscal years 2007 through 2013. (See table 4.)
To determine whether reported weaknesses continued, we reviewed fiscal year 2013 reports of ADA violations sent to GAO and DOD IG reports of potential ADA violations; recent auditor reports; DOD financial management reports, including DOD and military department Agency Financial Reports; 2013 Financial Improvement and Audit Readiness (FIAR) Plan Status Reports; and FIAR Guidance. While these documents are included in the 333 reports of funds control weaknesses, the related findings are not included in the 1,006 weaknesses identified through fiscal year 2012.

To determine the effect of reported weaknesses, we considered problems associated with (1) proper use of resources; (2) accurate accounting and support for transactions (primarily obligations and disbursements) with regard to reports on program and project statuses, results of operations, and budget execution; and (3) financial audit readiness. We also reviewed DOD reports of ADA violations reported in fiscal years 2007 through 2012 to identify DOD’s reported causes of the violations and the corrective actions noted in these reports.

To frame our discussion of identified funds control weaknesses and DOD-reported corrective actions, we grouped the identified weaknesses into three categories that are consistent with those identified in DOD and GAO reports. Many of the reports on funds control weaknesses identified more than one weakness. The three categories relate to the following areas:

(1) Inadequate training, supervision, and management oversight. Supervision is day-to-day guidance by a supervisor, and management oversight involves assuring adequate supervisory guidance and training as well as overall monitoring of the subject matter area.

(2) Ineffective transaction controls. These controls cover proper authorization and recording of budgetary transactions, such as obligations and disbursements (outlays); maintaining adequate supporting documentation for the transactions; and proper and timely reporting of transactions and related summaries and financial reports.

(3) Ineffective business systems. This category refers to business systems that do not have effective controls for recording, supporting, and reporting financial transactions, including budgetary transactions, and therefore do not provide adequate controls over financial reporting on the results of operations and do not assure compliance with laws and regulations.

To determine the status of DOD’s corrective actions to address identified funds control weaknesses, we reviewed corrective action statuses in response to mandates in National Defense Authorization Acts related to financial management competencies, skill gaps, and training; corrective actions on transaction-level accounting and financial reporting under DOD’s FIAR Plan; the status of actions to address business system weaknesses; and actions to address DOD ADA violations and DOD IG and military department audit recommendations. We met with DOD IG and military department audit officials to obtain information on open audit recommendations and discuss recurring findings of funds control weaknesses and their effect on reliable financial reporting. In addition, we met with DOD and military department financial managers and audit readiness officials to discuss their efforts to resolve findings of funds control weaknesses, including material weaknesses related to financial reporting disclosed in DOD annual Agency Financial Reports, as well as how funds control is being addressed in Statement of Budgetary Resources audit readiness initiatives and the status of those initiatives.
We analyzed DOD’s FIAR Plan Status Reports and FIAR Guidance and military department audit readiness plans from November 2010 through November 2013 to identify and evaluate audit readiness efforts related to funds control. Based on our review of DOD’s audit quality procedures and our comparison of auditor reports and DOD disclosures of financial management weaknesses, we determined the reported data and information to be reliable for the purposes of our work.

As shown in figure 6, we identified over 1,000 separately reported funds control weaknesses. We noted but did not count Department of Defense (DOD) financial management reports that discussed weaknesses identified in audit reports. We grouped funds control weaknesses identified in our reports analysis into three categories: (1) inadequate training, supervision, and management oversight; (2) ineffective transaction controls; and (3) ineffective business systems. These categories are consistent with those identified in DOD and GAO reports.

In addition to the contact named above, Gayle L. Fischer (Assistant Director), Lisa Brownson, Francine DelVecchio, Maxine Hattery, Donald D. Holzinger, Jason Kelly, Jason Kirwan, Gregory Marchand (Assistant General Counsel), Sheila D. M. Miller, Marc Molino, Heather Rasmussen, and Robert Sharpe made key contributions to this report.
|
GAO, the DOD Inspector General (IG), and others have reported on DOD's inability to provide effective control over the use of public funds (i.e., funds control). Funds control requires obligations and expenditures to comply with applicable law. Funds control weaknesses have prevented DOD from reporting reliable financial information, including information on the use of public funds, results of operations, and financial statements, and put DOD at risk of overobligating and overexpending its appropriations in violation of the Antideficiency Act (ADA).

GAO was asked to review the status of DOD's efforts to address its funds control weaknesses. GAO's objectives were to determine the (1) extent of reported weaknesses in DOD's funds control and their effect and (2) status of DOD's corrective actions to address known weaknesses. GAO analyzed 333 GAO, DOD IG, and military department audit reports; DOD reports of ADA violations; and selected DOD financial reports. GAO also examined DOD actions to address audit findings and ADA violations, including actions under DOD's FIAR Plan, and discussed corrective actions on funds control weaknesses with DOD and military department auditors and financial managers.

GAO's analysis of 333 reports related to Department of Defense (DOD) funds control, issued in fiscal years 2007 through 2013, identified over 1,000 funds control weaknesses related to (1) training, supervision, and management oversight; (2) proper authorization, recording, documentation, and reporting of transactions; and (3) business system compliance with federal laws and accounting standards. Many of the reports GAO reviewed included multiple findings. GAO found that these weaknesses led DOD to make program and operational decisions based on unreliable data and impaired DOD's ability to improve its financial management.
Fundamental weaknesses in funds control significantly impaired DOD's ability to (1) properly use resources, (2) produce reliable financial reports on the results of operations, and (3) meet its audit readiness goals. DOD has actions under way to address its department-wide funds control weaknesses. These actions, several of which are targeted for completion in 2017, include

- a DOD Financial Manager Certification Program intended to establish a framework to guide training and development of DOD's 54,000 financial management personnel at the staff, supervisory, and leadership levels;

- transaction control testing and corrective action plans under its Financial Improvement and Audit Readiness (FIAR) Plan for reporting on the use of budgetary resources with regard to categories of transactions, such as fund balances, outlays, military and civilian payroll, and contract pay; and

- testing under the FIAR Plan of material DOD component business system controls and service-provider systems and processes, as well as military department actions to address enterprise resource planning system design and implementation issues.

DOD leadership says it is committed to achieving effective funds control to support financial accountability and reliable information for day-to-day management decision making and auditable financial statements. However, because some of the corrective actions on long-standing funds control weaknesses are not expected to be completed until 2017, these weaknesses, until fully resolved, will continue to adversely affect DOD's ability to achieve its goals for financial accountability, including the ability to produce consistent, reliable, and sustainable financial information for day-to-day decision making. Sustained leadership commitment will be critical to achieving success. GAO is not making recommendations in this report because DOD already has numerous actions under way to address funds control weaknesses.
DOD stated that it appreciates GAO's review and that past deficiencies have informed actions it has under way to address its funds control weaknesses.
|
As shown in figure 1, residential and small business users often connect to an Internet service provider (ISP) to access the Internet. Well-known ISPs include America Online (AOL) and Comcast. Typically, ISPs market a package of services that provide homes and businesses with a pathway, or “on-ramp,” to the Internet along with services such as e-mail and instant messaging. The ISP sends the user’s Internet traffic forward to a backbone network where the traffic can be connected to other backbone networks and carried over long distances. By contrast, large businesses often maintain their own internal networks and may buy capacity from access providers that connect their networks directly to an Internet backbone network. We are using the term access providers to include ISPs as well as providers that sell access to large businesses and other users. Nonlocal traffic from both large businesses and ISPs connects to a backbone provider’s network at a “point of presence” (POP). Figure 1 depicts two hypothetical and simplified Internet backbone networks that link at interconnection points and take traffic to and from residential units through ISPs and directly from large business users.

As public use of the Internet grew from the mid-1990s onward, Internet access and electronic commerce became potential targets for state and local taxation. Ideas for taxation ranged from those that merely extended existing sales or gross receipts taxes to so-called “bit taxes,” which would measure Internet usage and tax in proportion to use. Some state and local governments raised additional tax revenues and applied existing taxes to Internet transactions. Owing to the Internet’s inherently interstate nature and to issues related to taxing Internet-related activities, concern arose in Congress as to what impact state and local taxation might have on the Internet’s growth, and thus, on electronic commerce.
Congress addressed this concern when, in 1998, it adopted the Internet Tax Freedom Act, which bars state and local taxes on Internet access, as well as multiple or discriminatory taxes on electronic commerce. Internet usage grew rapidly in the years following 1998, and the technology to access the Internet changed markedly. Today a significant portion of users, including home users, access the Internet over broadband communications services using cable modem, DSL, or wireless technologies. Fewer and fewer users rely on dial-up connections through which they connect to their ISP by dialing a telephone number. By 2004, some state tax authorities were taxing DSL service, which they considered to be a telecommunications service, creating a distinction between DSL and services offered through other technologies, such as cable modem, that were not taxed. Originally designed to postpone the addition of any new taxes while the Advisory Commission on Electronic Commerce studied the tax issue and reported to Congress, the moratorium was extended in 2001 for 2 years and again in 2004, retroactively, to remain in force until November 1, 2007. The 2001 extension made no other changes to the original act, but the 2004 act included clarifying amendments. The 2004 act amended language that had exempted telecommunications services from the moratorium. Recognizing state and local concerns about their ability to tax voice services provided over the Internet, it also contained language allowing taxation of telephone service using Voice over Internet Protocol (VoIP). Although the 2004 amendments extended grandfathered protection generally to November 2007, grandfathering extended only to November 2005 for taxes subject to the new moratorium but not to the original moratorium.

To determine the scope of the Internet tax moratorium, we reviewed the language of the moratorium, the legislative history of the 1998 act and the 2004 amendments, and associated legal issues.
To determine the impact of the moratorium on state and local revenues, we worked in stages. First, we reviewed studies of revenue impact done by CBO, FTA, and the staff of the Multistate Tax Commission and discussed relevant issues with federal representatives, state and local government and industry associations, and companies providing Internet access services. Then, we used structured interviews to do case studies in eight states that we chose as described earlier. We did not intend the eight states to represent any other states. For each selected state, we focused on specific aspects of its tax system by using our structured interview and collecting relevant documentation. For instance, we reviewed the types and structures of Internet access service taxes, the revenues collected from those taxes, officials’ views of the significance of the moratorium to their government’s financial situation, and their opinions of any implications to their states of the new definition of Internet access. We also learned whether localities within the states were taxing access services. When issues arose, we contacted other states and localities to increase our understanding of these issues. We discussed with state officials how they derived the estimates they gave us of tax dollars collected and how firm these numbers were. We could not verify the estimates, and CBO supplemented estimates that it received from states. Nevertheless, based on other information we obtained, the state estimates appeared to provide a sense of the order of magnitude of the numbers compared to state tax revenues. We did our work from February through December 2005 in accordance with generally accepted government auditing standards.

The moratorium bars taxes on the service of providing access, which includes whatever an access provider reasonably bundles in its access offering to consumers.
On the other hand, the moratorium does not prohibit taxes on acquired services, that is, goods and services that an access provider acquires to enable it to bundle and provide its access package to its customers. However, some providers and state officials have expressed a different view, believing the moratorium barred taxing acquired services in addition to bundled access services. The law, as amended in 2004, defines Internet access as:

“a service that enables users to access content, information, electronic mail, or other services offered over the Internet, and may also include access to proprietary content, information, and other services as part of a package of services offered to users. The term ‘Internet access’ does not include telecommunications services, except to the extent such services are purchased, used, or sold by a provider of Internet access to provide Internet access.” (italics provided)

As shown in the simplified illustration in figure 2, the items reasonably bundled in a tax-exempt Internet access package may include e-mail, instant messaging, and Internet access itself. Internet access, in turn, includes broadband services, such as cable modem and DSL services, which provide continuous, high-speed access without tying up wireline telephone service. As figure 2 also illustrates, a tax-exempt bundle does not include video, traditional wireline telephone service referred to as “plain old telephone service” (POTS), or VoIP. These services are subject to tax. For simplicity, the figure shows a number of services transmitted over one communications line. In reality, a line to a consumer may support just one service at a time, as is typically the case for POTS, or it may simultaneously support a variety of services, such as television, Internet access, and VoIP.

Our reading of the 1998 law and the relevant legislative history indicates that Congress had intended to bar taxes on services bundled with access.
However, there were different interpretations about whether DSL service could be taxed under existing law, and some states taxed DSL. The 2004 amendment was aimed at making sure that DSL service bundled with access could not be taxed. See the appendix for further explanation.

Figure 3 shows how the nature and tax status of the Internet access services just described differ from the nature and tax status of services that an ISP acquires and uses to deliver access to its customers. An ISP in the middle of figure 3 acquires communications and other services and incidental supplies (shown on the left side of the figure) in order to deliver access services to customers (shown on the right side of the figure). We refer to the acquisitions on the left side as purchases of “acquired services.” For example, acquired services include ISP leases of high-speed communications capacity over wire, cable, or fiber to carry traffic from customers to the Internet backbone. Purchases of acquired services are subject to taxation, depending on state law, because the moratorium does not apply to acquired services. As noted above, the moratorium applies only to taxes imposed on “Internet access,” which is defined in the law as “a service that enables users to access content, information, electronic mail, or other services offered over the Internet.…” In other words, it is the service of providing Internet access to the end user—not the acquisition of capacity to do so—that constitutes “Internet access” subject to the moratorium. Some providers and state officials have construed the moratorium as barring taxation of acquired services, reading the 2004 amendments as making acquired services tax exempt.
However, as indicated by the language of the statute, the 2004 amendments did not expand the definition of “Internet access,” but rather amended the exception from the definition to allow certain “telecommunication services” to qualify for the moratorium if they are part of the service of providing Internet access. A tax on acquired services is not a tax directly imposed on the service of providing Internet access. Our view that acquired services are not subject to the moratorium on taxing Internet access is based on the language and structure of the statute, as described further in the appendix. We acknowledge that others have different views about the scope of the moratorium. Congress could, of course, deal with this issue by amending the statute to explicitly address the tax status of acquired services. As noted above, some providers and state officials have construed the moratorium as barring taxation of acquired services. Some provider representatives said that acquired services were not taxable at the time we contacted them and had never been taxable. Others said that acquired services were taxable when we contacted them but would become tax exempt in November 2005 under the 2004 amendments, the date they assumed that taxes on acquired services would no longer be grandfathered. As shown in table 1, officials from four out of the eight states we studied— Kansas, Mississippi, Ohio, and Rhode Island—also said their states would stop collecting taxes on acquired services, as of November 1, 2005, in the case of Kansas and Ohio whose collections have actually stopped, and later for the others. These states roughly estimated the cost of this change to them to be a little more than $40 million in revenues that were collected in 2004. 
An Ohio official indicated that two components comprised most of the dollar amounts of taxes collected from these services in 2004: $20.5 million from taxes on telecommunications services and property provided to ISPs and Internet backbone providers, and $9.1 million from taxes for private line services (such as high-capacity T-1 and T-3 lines) and 800/wide-area telecommunications services that the official said would be exempt due to the moratorium. The rough estimates in table 1 are subject to the same limitations described in the next section for the state estimates of all taxes collected related to Internet access. According to CBO data, grandfathered taxes in the states CBO studied were a small percentage of those states’ tax revenues. However, because it is difficult to know which states, if any, might have chosen to tax Internet access services and what taxes they might have chosen to use if no moratorium had ever existed, the total revenue implications of the moratorium are unclear. In general, any future impact related to the moratorium will differ from state to state. In 2003, CBO reported how much state and local governments that had grandfathered taxes on dial-up and DSL services would lose in revenues if the grandfathering were eliminated. The fact that these estimates represented a small fraction of state tax revenues is consistent with other information we obtained. In addition, the enacted legislation was narrower than what CBO reviewed, meaning that CBO’s stated concerns about VoIP and taxing providers’ income and assets would have dissipated. CBO provided two estimates in 2003 that, when totaled, showed that no longer allowing grandfathered dial-up and DSL service taxes would cause state and local governments to lose from more than $160 million to more than $200 million annually by 2008. 
According to a CBO staff member, this estimate included some amounts for what we are calling acquired services that, as discussed in the previous section, would not have to be lost. CBO provided no estimates of revenues involved for governments not already assessing the taxes and said it could not estimate the size of any additional impacts on state and local revenues of the change in the definition of Internet access. Further, according to a CBO staff member, CBO’s estimates did not include any lost revenues from taxes on cable modem services. In October 2003, around the time of CBO’s estimates, the number of cable home Internet connections was 12.6 million, compared to 9.3 million home DSL connections and 38.6 million home dial-up connections. CBO first estimated that as many as 10 states and several local governments would lose $80 million to $120 million annually, beginning in 2007, if the 1998 grandfather clause were repealed. Its second estimate showed that, by 2008, state and local governments would likely lose more than $80 million per year from taxes on DSL service. The CBO numbers are a small fraction of total state tax revenue amounts. For example, the $80 million to $120 million estimate for the states with originally grandfathered taxes for 2007 was about 0.1 percent of tax revenues in those states for 2004—3 years earlier. The fact that CBO estimates are a small part of state tax revenues is consistent with information we obtained from our state case studies and interviews with providers. For instance, after telling us whether various access-related services, including cable modem service, were subject to taxation in their jurisdictions, the states collecting taxes gave us rough estimates of how much access service-related tax revenues they collected for 2004 for themselves and their localities, if applicable. (See table 2.) All except two collected $10 million or less.
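The CBO figures cited above lend themselves to a quick arithmetic cross-check. The sketch below is illustrative only: every dollar amount and the 0.1 percent share come from the estimates quoted in this report, and the function names are ours.

```python
# Sanity-check the CBO estimates quoted in the text. All inputs are
# figures from the report; nothing here is independent data.

def combined_range(first_low, first_high, second):
    """Combine CBO's two annual-loss estimates into one range."""
    return first_low + second, first_high + second

def implied_revenue_base(loss, share):
    """Total tax revenues implied when a loss equals a given share of them."""
    return loss / share

# First estimate: $80M-$120M per year (originally grandfathered taxes).
# Second estimate: more than $80M per year (DSL service taxes).
low, high = combined_range(80e6, 120e6, 80e6)
print(f"combined annual loss: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
# Matches the "more than $160 million to more than $200 million" total.

# The $80M-$120M figure was about 0.1 percent of the affected states'
# 2004 tax revenues, implying a revenue base of roughly:
print(f"implied 2004 revenue base: ${implied_revenue_base(80e6, 0.001) / 1e9:.0f}B "
      f"to ${implied_revenue_base(120e6, 0.001) / 1e9:.0f}B")
```

The second computation simply inverts the percentage: a loss that is 0.1 percent of revenues implies total tax revenues on the order of $80 billion to $120 billion in the affected states.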
The states made their estimates by assuming, for instance, that access service-related tax revenues were a certain percentage of state telecommunications sales tax revenues, by reviewing providers’ returns, or by making various calculations starting with census data. Most estimates provided to us were more ballpark approximations than precise computations, and CBO staff expressed a healthy skepticism toward some state estimates they received. They said that the supplemental state-by-state information they developed sometimes produced lower estimates than the states provided. According to others knowledgeable in the area, estimates provided to us were imprecise because when companies filed sales or gross receipts tax returns with states, they did not have to specifically identify the amount of taxes they received from providing Internet access-related services to retail consumers or to other providers. As discussed earlier, sales to other providers remain subject to taxation, depending on state law. Some providers told us they did not keep records in such a way as to be able to readily provide that kind of information. Also, although states reviewed tax compliance by auditing taxpayers, they could not audit all providers. The dollar amounts in table 2 include amounts, where provided, for local governments within the states. For instance, Kansas’s total includes about $2 million for localities. In this state as well as in others we studied, local jurisdictions were piggybacking on the state taxes, although the local tax rates could differ from each other. State tax officials from our case study states who commented to us on the impacts of the revenue amounts did not consider them significant. Similarly, state officials voiced concerns but did not cite nondollar specifics when describing any possible impact on their state finances arising from no longer taxing Internet access services.
However, one noted that taking away Internet access as a source of revenue was another step in the erosion of the state’s tax base. Other state and local officials observed that if taxation of Internet access were eliminated, the state or locality would have to act somehow to continue meeting its requirement for a balanced budget. At the local level, officials told us that a revenue decrease would reduce the amount of road maintenance that could be done or could adversely affect the number of employees available for providing government services. Because it is difficult to predict what states would have done to tax Internet access services had Congress not intervened when it did, it is hard to estimate the amount of revenue that was not raised because of the moratorium. For instance, at the time the first moratorium was being considered in 1998, the Department of Commerce reported Internet connections for less than a fifth of U.S. households, much less than the half of U.S. households reported 6 years later. Access was typically dial-up. As states and localities saw the level of Internet connections rising and other technologies becoming available, they might have taxed access services if no moratorium had been in place. Taxes could have taken different forms. For example, jurisdictions might have even adopted bit taxes based on the volume of digital information transmitted. The number of states collecting taxes on access services when the first moratorium was being considered in early 1998 was relatively small, with 13 states and the District of Columbia collecting these taxes, according to the Congressional Research Service. Five of those jurisdictions later eliminated or chose not to enforce their tax. In addition, not all 37 other states would have taxed access services related to the Internet even if they could have. For example, California had already passed its own Internet tax moratorium in August 1998. 
Given that some states never taxed access services while relatively few Internet connections existed, that some stopped taxing access services, and that others taxed DSL service, it is unclear what jurisdictions would have done if no moratorium had existed. However, the relatively early initiation of a moratorium reduced the opportunity for states inclined to tax access services to do so before Internet connections became more widespread. Although as previously noted the impact of eliminating grandfathering would be small in states studied by CBO or by us, any future impact related to the moratorium will vary on a state-by-state basis for many reasons. State tax laws differed significantly from each other, and states and providers disagreed on how state laws applied to the providers. As shown in table 3, states taxed Internet access using different tax vehicles imposed on diverse tax bases at various rates. The tax used might be generally applicable to a variety of goods and services, as in Kansas, which did not impose a separate tax on communications services. There, the state’s general sales tax applied to the purchase of communications services by access providers at an average rate of 6.6 percent, combining state and average local tax rates. As another example, North Dakota imposed a sales tax on retail consumers’ communications services, including Internet access services, at an average state and local combined rate of 6 percent. Our case study states showed little consistency in the bases they used to tax services related to Internet access. States imposed taxes on different transactions and populations. North Dakota and Texas taxed only services delivered to retail consumers. In a type of transaction which, as discussed earlier, we do not view as subject to the moratorium, Kansas and Mississippi taxed acquired communications services purchased by access providers. 
Ohio and Rhode Island taxed both the provision of access services and acquired services, and California and Virginia officials told us their states taxed neither. States also provided various exemptions from their taxes. Ohio exempted residential consumers, but not businesses, from its tax on access services, and Texas exempted the first $25 of monthly Internet access service charges from taxation. Some state and local officials and company representatives held different opinions about whether certain taxes were grandfathered and about whether the moratorium applied in various circumstances. For example, some providers’ officials questioned whether taxes in North Dakota, Wisconsin, and certain cities in Colorado were grandfathered, and whether those jurisdictions were permitted to continue taxing. Providers disagreed among themselves about how to comply with the tax law of states whose taxes may or may not have been grandfathered. Some providers told us they collected and remitted taxes to the states even when they were uncertain whether these actions were necessary; however, they told us of others that did not make payments to the taxing states in similarly uncertain situations. In its 2003 work, CBO had said that some companies challenged the applicability of Internet access taxes to the service they provided and thus might not have been collecting or remitting them even though the states believed they should. Because of all these state-by-state differences and uncertainties, the impact of future changes related to the moratorium would vary by state. Whether the moratorium were lifted or made permanent and whether grandfathering were continued or eliminated, states would be affected differently from each other. We showed staff members of CBO, officials of FTA, and representatives of telecommunications companies assembled by the United States Telecom Association a draft of our January 2006 report and asked for oral comments. 
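The rate and exemption rules described above reduce to simple sales-tax arithmetic. The sketch below uses figures cited in this testimony (Kansas's 6.6 percent average combined rate, North Dakota's 6 percent rate, and Texas's exemption of the first $25 of monthly access charges), but the monthly bill amount and the Texas rate shown are hypothetical, chosen only to illustrate how the rules differ.

```python
def access_tax(monthly_bill, rate, exempt_amount=0.0):
    """Tax due on a monthly Internet access bill under a simple
    sales-tax rule, optionally exempting the first dollars billed."""
    taxable = max(monthly_bill - exempt_amount, 0.0)
    return round(taxable * rate, 2)

bill = 40.00  # hypothetical monthly access charge

# Kansas: 6.6 percent average combined state and local rate (in the
# testimony this rate applied to providers' purchases of communications
# services, but the arithmetic is the same)
kansas = access_tax(bill, 0.066)          # $2.64

# North Dakota: 6 percent average combined rate on retail consumers
north_dakota = access_tax(bill, 0.06)     # $2.40

# Texas: first $25 of monthly access charges exempt; the 6.25 percent
# rate here is hypothetical, used only to show the exemption's effect
texas = access_tax(bill, 0.0625, exempt_amount=25.00)  # $0.94

print(kansas, north_dakota, texas)
```

As the Texas case shows, an exemption on the first dollars billed shrinks the taxable base rather than the rate, so two states with similar rates can collect very different amounts on the same bill.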
On January 5, 2006, CBO staff members, including the Chief of the State and Local Government Unit, Cost Estimates Unit, said we fairly characterized CBO information and suggested clarifications that we made as appropriate. In one case, we noted more clearly that CBO supplemented its dollar estimates of revenue impact with a statement that other potential revenue losses could grow by an unquantified amount. On January 6, 2006, FTA officials, including the Executive Director, said that our legal conclusion was clearly stated and, if adopted, would be helpful in clarifying which Internet access-related services are taxable and which are not. However, they expressed concern that the statute could be interpreted differently regarding what might be reasonably bundled in providing Internet access to consumers. A broader view of what could be included in Internet access bundles would result in potential revenue losses much greater than we indicated. However, as explained in the appendix, we believe that what is bundled must be reasonably related to accessing and using the Internet. FTA officials were also concerned that our reading of the 1998 law regarding the taxation of DSL services is debatable and suggests that states overreached by taxing them. We recognize that Congress acted in 2004 to address different interpretations of the statute, and we made some changes to clarify our presentation. We acknowledge there were different views on this matter, and we are not attributing any improper intent to the states’ actions. When meeting with us, representatives of telecommunications companies said they would like to submit comments in writing. Their comments argue that the 2004 amendments make acquired services subject to the moratorium and therefore not taxable, and that the language of the statute and the legislative history support this position. In response, we made some changes to simplify the appendix. 
That appendix, along with the section of the testimony on bundled access services and acquired services, contains an explanation of our view that the language and structure of the statute support our interpretation. Mr. Chairman, Mr. Vice Chairman, and Members of the Committee, this concludes my testimony. I would be happy to answer any questions you may have at this time. For further information, please contact James R. White at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony include Michael Springer, Assistant Director; Edda Emmanuelli-Perez; Lynn H. Gibson; Bert Japikse; Shirley A. Jones; Lawrence M. Korb; Donna L. Miller; Walter K. Vance; and Bethany C. Widick. The moratorium bars taxes on the service of providing access, which includes whatever an access provider reasonably bundles in its access offering to consumers. On the other hand, the moratorium does not bar taxes on acquired services. As noted earlier, the 2004 amendments followed a period of significant growth and technological development related to the Internet. By 2004, broadband communications technologies were becoming more widely available. They could provide greatly enhanced access compared to the dial-up access technologies widely used in 1998. These broadband technologies, which include cable modem service built upon digital cable television infrastructure as well as digital subscriber line (DSL) service, provide continuous, high-speed Internet access without tying up wire-line telephone service. Indeed, cable and DSL facilities could support multiple services—television, Internet access, and telephone services—over common coaxial cable, fiber, and copper wire media. 
The Internet Tax Freedom Act bars “taxes on Internet access” and defines “Internet access” as a service that enables “users to access content, information, electronic mail, or other services offered over the Internet.” The term Internet access as used in this context includes “access to proprietary content, information, and other services as part of a package of services offered to users.” The original act expressly excluded “telecommunications services” from the definition. As will be seen, the act barred jurisdictions from taxing services such as e-mail and instant messaging bundled by providers as part of their Internet access package; however, it permitted dial-up telephone service, which was usually provided separately, to be taxed. The original definition of Internet access, exempting “telecommunications services,” was changed by the 2004 amendment. Parties seeking to carve out exceptions that could be taxed had sought to break out and treat DSL services as telecommunications services, claiming the services were exempt from the moratorium even though they were bundled as part of an Internet access package. State and local tax authorities began taxing DSL service, creating a distinction between DSL and services offered using other technologies, such as cable modem service, a competing method of providing Internet access that was not to be taxed. The 2004 amendment was aimed at making sure that DSL service bundled with access could not be taxed. The amendment excluded from the telecommunications services exemption telecommunications services that were “purchased, used, or sold by a provider of Internet access to provide Internet access.” The fact that the original 1998 act exempted telecommunications services shows that other reasonably bundled services remained a part of Internet access service and, therefore, subject to the moratorium. 
Thus, communications services such as cable modem services that are not classified as telecommunications services are included under the moratorium. As emphasized by numerous judicial decisions, we begin the task of construing a statute with the language of the statute itself, applying the canon of statutory construction known as the plain meaning rule. E.g., Hartford Underwriters Insurance Co. v. Union Planters Bank, N.A., 530 U.S. 1 (2000); Robinson v. Shell Oil Co., 519 U.S. 337 (1997). Singer, 2A, Sutherland Statutory Construction, §§ 46:1, 48A:11, 15-16. Thus, under the plain meaning rule, the primary means for Congress to express its intent is the words it enacts into law, and interpretations of the statute should rely upon and flow from the language of the statute. “The term ‘Internet access’ means a service that enables users to access content, information, electronic mail, or other services offered over the Internet.…The term ‘Internet access’ does not include telecommunications services, except to the extent such services are purchased, used, or sold by a provider of Internet access to provide Internet access.” Section 1105(5). The language added in 2004—exempting from “telecommunications services” those services that are “purchased, used, or sold” by a provider in offering Internet access—has been read by some as expanding the “Internet access” to which the tax moratorium applies, by barring taxes on “acquired services.” Those who would read the moratorium expansively take the view that everything acquired by Internet service providers (ISP) (everything on the left side of figure 3) as well as everything furnished by them (everything in the middle of figure 3) is exempt from tax. In our view, the language and structure of the statute do not permit the expansive reading noted above. “Internet access” was originally defined and continues to be defined for purposes of the moratorium as the service of providing Internet access to a user. Section 1105(5). 
It is this transaction, between the Internet provider and the end user, which is nontaxable under the terms of the moratorium. The portion of the definition that was amended in 2004 was the exception: that is, telecommunications services are excluded from nontaxable “Internet access,” except to the extent such services are “purchased, used, or sold by a provider of Internet access to provide Internet access.” Thus, we conclude that the fact that services are “purchased, used, or sold” by an Internet provider has meaning only in determining whether these services can still qualify for the moratorium notwithstanding that they are “telecommunications services”; it does not mean that such services are independently nontaxable irrespective of whether they are part of the service an Internet provider offers to an end user. Rather, a service that is “purchased, used, or sold” to provide Internet access is not taxable only if it is part of providing the service of Internet access to the end user. Such services can be part of the provision of Internet access by a provider who, for example, “purchases” a service for the purpose of bundling it as part of an Internet access offering; “uses” a service it owns or has acquired for that purpose; or simply “sells” owned or acquired services as part of its Internet access bundle. In addition, we read the amended exception as applying only to services that are classified as telecommunications services under the 1998 act as amended. In fact, the moratorium defines the term “telecommunications services” with reference to its definition in the Communications Act of 1934, under which DSL and cable modem service are no longer classified as telecommunications services. 
Moreover, under the Communications Act, the term telecommunications services applies to the delivery of services to the end user who determines the content to be communicated; it does not apply to communications services delivered to access service providers by others in the chain of facilities through which Internet traffic may pass. Thus, since broadband services are not telecommunications services, the exception in the 1998 act does not apply to them, and they are not affected by the exception. The best evidence of statutory intent is the text of the statute itself. While legislative history can be useful in shedding light on the intent of the statute or in resolving ambiguities, it is not to be used to inject ambiguity into the statutory language or to rewrite the statute. E.g., Shannon v. United States, 512 U.S. 573, 583 (1994). In our view, the definition of Internet access is unambiguous, and, therefore, it is unnecessary to look beyond the statute to discern its meaning from legislative history. We note, however, that consistent with our interpretation of the statute, the overarching thrust of changes made by the 2004 amendments to the definition of Internet access was to make a remedial correction to assure that broadband services such as DSL were not taxable when bundled with an ISP’s offering. While there are some references in the legislative history to “wholesale” services, backbone, and broadband, many of these pertained to earlier versions of the bill containing language different from that which was ultimately enacted. The language that was enacted, using the phrase “purchased, used, or sold by a provider of Internet access,” was added through the adoption of a substitute offered by Senator McCain, 150 Cong. Rec. S4402, which was adopted following cloture and agreement to several amendments designed to narrow differences between proponents and opponents of the bill. 
Changes to legislative language during the consideration of a bill may support an inference that in enacting the final language, Congress intended to reject or work a compromise with respect to earlier versions of the bill. Statements made about earlier versions carry little weight. Landgraf v. USI Film Products, 511 U.S. 244, 255-56 (1994). Singer, 2A, Sutherland Statutory Construction, § 48:4. In any event, the plain language of the statute remains controlling where, as we have concluded, the language and the structure of the statute are clear on their face. “The Committee intends for the tax exemption for telecommunications services to apply whenever the ultimate use of those telecommunications services is to provide Internet access. Thus, if a telecommunications carrier sells wholesale telecommunications services to an Internet service provider that intends to use those telecommunications services to provide Internet access, then the exemption would apply.” At the time the 2003 report was drafted, the sentence of concern in the draft legislation read, “Such term does not include telecommunications services, except to the extent such services are used to provide Internet access.” As adopted, the wording became, “The term ‘Internet access’ does not include telecommunications services, except to the extent such services are purchased, used, or sold by a provider of Internet access to provide Internet access.” The amended language thus focuses on the package of services offered by the access provider, not on the act of providing access alone. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
According to one report, at the end of 2006, about 92 million U.S. adults used the Internet on a typical day. As public use of the Internet grew from the mid-1990s onward, Internet access became a potential target for state and local taxation. In 1998, Congress imposed a moratorium temporarily preventing state and local governments from imposing new taxes on Internet access. Existing state and local taxes were grandfathered. In amending the moratorium in 2004, Congress required GAO to study its impact on state and local government revenues. The objectives of the resulting 2006 report were to determine the scope of the moratorium and its impact, if any, on state and local revenues. This testimony is based on that report (GAO-06-273). The Internet tax moratorium bars taxes on Internet access services provided to end users. GAO's interpretation of the law is that the bar on taxes includes whatever an access provider reasonably bundles to consumers, including e-mail and digital subscriber line (DSL) services. The moratorium does not bar taxes on acquired services, such as high-speed communications capacity over fiber, acquired by Internet service providers (ISP) and used to deliver Internet access. However, some states and providers have construed the moratorium as barring taxation of acquired services. Some officials told GAO when it was preparing its report that their states would stop collecting such taxes as early as November 1, 2005, the date they assumed that taxes on acquired services would lose their grandfathered protection. According to GAO's reading of the law, these taxes are not barred since a tax on acquired services is not a tax on Internet access. In comments, telecommunications industry officials continued to view acquired services as subject to the moratorium and exempt from taxation. As noted above, GAO disagrees. 
In addition, Federation of Tax Administrators officials expressed concern that some might have a broader view of what could be included in Internet access bundles. However, GAO's view is that what is included must be reasonably related to providing Internet access. The revenue impact of eliminating grandfathering in states studied by the Congressional Budget Office (CBO) would be small, but the moratorium's total revenue impact has been unclear and any future impact would vary by state. In 2003, when CBO reported how much states and localities would lose annually by 2007 if certain grandfathered taxes were eliminated, its estimate for states with grandfathered taxes in 1998 was about 0.1 percent of those states' 2004 tax revenues. Because it is hard to know what states would have done to tax access services if no moratorium had existed, the total revenue implications of the moratorium are unclear. In general, any future moratorium-related impact will differ by state. Tax law details and tax rates varied among states. For instance, North Dakota taxed access service delivered to retail consumers, and Kansas taxed communications services acquired by ISPs to support their customers.
The Army’s conversion to a modular force encompasses the Army’s total force—active Army, Army National Guard, and Army Reserve—and directly affects not only the Army’s combat units, but related command and support organizations. A key to the Army’s new modular force design is embedding within combat brigades reconnaissance, logistics, and other support units that previously made up parts of division-level and higher- level command and support organizations, allowing the brigades to operate independently. Restructuring these units is a major undertaking because it requires more than just the movement of personnel or equipment from one unit to another. The Army’s new modular units are designed, equipped, and staffed differently than the units they replace; therefore, successful implementation of this initiative will require changes such as new equipment and a different mix of skills and occupational specialties among Army personnel. By 2011, the Army plans to have reconfigured its total force—to include active and reserve components and headquarters, combat, and support units—into the modular design. The foundation of the modular force is the creation of modular brigade combat teams—combat maneuver brigades that will have a common organizational design and are intended to increase the rotational pool of ready units. Modular combat brigades (depicted in fig. 1) will have one of three standard designs—heavy brigade, infantry brigade, or Stryker brigade. Until it revised its plans in early 2006, the Army had planned to have a total of 77 active component and National Guard modular combat brigades by expanding the Army’s existing 33 combat brigades in the active component into 43 modular combat brigades by 2007, and by creating 34 modular combat brigades in the National Guard by 2010 from existing brigades and divisions that have historically been equipped well below requirements. 
To rebalance joint ground force capabilities, the 2006 QDR determined the Army should have a total of 70 modular combat brigades—42 active brigades and 28 National Guard brigades. Table 1 shows the Army’s planned numbers of heavy, infantry, and Stryker combat brigades in the active component and National Guard. At the time of this report, the Army was in the process of revising its modular combat brigade schedule to convert its active component combat brigades by fiscal year 2010 instead of 2007 as previously planned, and convert National Guard combat brigades by fiscal year 2008 instead of 2010. Table 2 shows the Army’s schedule that reflects these changes as of March 2006. According to the Army, this larger pool of available combat units will enable it to generate both active and reserve component forces in a rotational manner. To do this, the Army is developing plans for a force rotation model in which units will rotate through a structured progression of increased unit readiness over time. Units will progress through three phases of operational readiness cycles, culminating in full mission readiness and availability to deploy. For example, the Army plans for active service members to be at home for 2 years following each deployment of up to 1 year. The Army’s objective is for the new modular combat brigades, which will include about 3,000 to 4,000 personnel, to have at least the same combat capability as brigades under the current division-based force, which range from 3,000 to 5,000 personnel. Since there will be more combat brigades in the force, the Army believes its overall combat capability will be increased as a result of the restructuring, providing added value to combatant commanders. 
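The rotational arithmetic implied by these cycles can be sketched in a few lines. The brigade counts below are the QDR totals cited above, and the cycle lengths follow the plans described in this report (up to 1 year deployed followed by 2 years at home for active units; availability of 1 year out of 6 for Guard units, as discussed later). The calculation itself is our back-of-the-envelope simplification, not Army planning data.

```python
def steady_state_deployable(total_units, deployed_years, cycle_years):
    """Units deployable at any one time under a fixed rotation cycle,
    where cycle_years = years deployed + years at home."""
    return total_units * deployed_years // cycle_years

# Active component: 42 brigades, roughly 1 year deployed in a 3-year cycle
active_available = steady_state_deployable(42, 1, 3)   # 14 brigades

# National Guard: 28 brigades, available 1 year out of 6
guard_available = steady_state_deployable(28, 1, 6)    # 4 brigades

print(active_available, guard_available)
```

Under these simplifying assumptions, roughly a third of active brigades and a sixth of Guard brigades would be deployed or deployable at any moment, which is why enlarging the total pool of standardized brigades matters for sustaining rotations.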
Although somewhat smaller in size, the new modular combat brigades are expected to be as capable as the Army’s existing brigades because they will have different equipment, such as advanced communications and surveillance equipment, and a different mix of personnel and support assets. The Army’s organizational designs for the modular brigades have been tested by its Training and Doctrine Command’s Analysis Center against a variety of scenarios, and the Army has found the new designs to be as capable as the existing division-based brigades in modeling and simulations. The Army’s cost estimate for modularity through fiscal year 2011 is $52.5 billion as of April 2006. Of this $52.5 billion estimate, $41 billion, or 78 percent, is planned to be spent on equipment for active and reserve units, with the remaining $11.5 billion allocated to military construction, facilities, sustainment, and training (see table 3). In addition, Army leaders have recently stated they may seek additional funds after 2011 to procure more equipment for modular restructuring. The Army has made progress in creating active component modular combat brigades, but it is not meeting its equipping goals for these brigades and has yet to complete the development of its rotational equipping strategy, which raises concerns about the extent to which brigades will be equipped in the near and longer term. Moreover, brigades will initially lack planned levels of key equipment, including items that provide enhanced intelligence, situational awareness, and network capabilities needed to help the Army achieve its objective of creating combat brigades that are able to operate on their own as part of a more mobile, rapidly deployable, joint, expeditionary force. In addition, because of existing equipment shortages, the Army National Guard will likely face even greater challenges providing the same types of equipment for its 28 planned modular combat brigades. 
To mitigate equipment shortages, the Army has developed a strategy to provide required levels of equipment to deploying active component and National Guard units, while allocating lesser levels of remaining equipment to other nondeploying units. However, the Army has not yet completed key details of this strategy, including determining the levels of equipment it needs to support this strategy, assessing the operational risk of not fully equipping all units, or providing to Congress information about these plans so it can assess the Army’s current and long-term equipment requirements and funding plans. The Army faces challenges meeting its equipping goals for its active modular combat brigades both in the near and longer term. As of February 2006, the Army had converted 19 modular combat brigades in the active force. According to the Army Campaign Plan, which established time frames and goals for the modular force conversions, each of these units is expected to have on hand at least 90 percent of its required major equipment items within 180 days after its new equipment requirements become effective. We reviewed data from several active brigades that had reached the effective date for their new equipment requirements by February 2006, and found that all of these brigades reported significant shortages of equipment 180 days after the effective date of their new equipment requirements, falling well below the equipment goals the Army established in its Campaign Plan. Additionally, the Army is having difficulty providing equipment to units undergoing their modular conversion in time for training prior to operational deployments, and deploying units often do not receive some of their equipment until after their arrival in theater. At the time of our visits, officials from three Army divisions undergoing modular conversion expressed concern over the lack of key equipment needed for training prior to deployment. 
The Army already faced equipment shortages before it began its modular force transformation and is wearing out significant quantities of equipment in Iraq, which could complicate plans for fully equipping new modular units. By creating modular combat brigades with standardized designs and equipment requirements, the Army believed that it could utilize more of its total force, thereby increasing the pool of available and ready forces to meet the demands of sustained rotations and better respond to an expected state of continuous operations. Also, by comparably equipping all of these units across the active component and National Guard, the Army further believes it will be able to discontinue its practice of allocating limited resources, including equipment, based on a system of tiered readiness, which resulted in lower priority units in both active and reserve components having significantly lower levels of equipment and readiness than the higher priority units. However, because of the need to establish a larger pool of available forces to meet the current high pace of operational commitments, the Army’s modular combat brigade conversion schedule is outpacing the planned acquisition or funding for some equipment requirements. The Army has acknowledged that funding does not match its modular conversion schedule and that some units will face equipment shortages in the early years of transformation. According to Army officials, the Army may continue to seek funding to better equip its modular forces beyond 2011. For example, according to Army officials, funds programmed for the Army’s tactical wheeled vehicle modernization strategy will not meet all of its requirements for light, medium, and heavy tactical vehicles and trucks through fiscal year 2011. 
In 2007, when 38 of 42 planned active component brigades are expected to complete their modular conversions, the Army expects to have only about 62 percent of the heavy trucks it needs to meet its requirements for these brigades. New, higher truck requirements for the modular brigades added to an existing shortage of trucks in the Army’s inventory. In addition, battle damage and losses along with higher-than-normal wear and tear on Army vehicles from current operations in Iraq and Afghanistan are contributing to this shortfall. While the Army plans to eventually fill these shortages through a combination of new procurement and modernization of its existing truck fleet, Army officials told us that the higher requirement for trucks is currently unaffordable within its near-term budget authority. Until the Army is able to meet its modular combat brigade design requirement for trucks, these brigades will not have their envisioned capability to conduct their own logistical support operations if necessary without requiring the augmentation of external combat and combat-service support forces. 
They include key battle command systems that are intended to provide modular combat brigades the latest command and control technology for improved situational awareness; advanced digital communications systems to provide secure high-speed communications links at the brigade level; and advanced sensors to provide modular combat brigades with their own intelligence-gathering, reconnaissance, and target-acquisition capabilities. We reviewed equipping plans for several command and control, communications, and reconnaissance systems to determine the Army’s timelines for providing active modular combat brigades some of the key equipment they need to achieve their planned capabilities and function as designed. According to Army officials responsible for managing the distribution and fielding of equipment, the Army will not have all of this equipment on hand to meet the new modular force design requirements by 2007, when 38 of 42 active component modular combat brigades are to complete their modular conversions. These shortfalls are due to a range of reasons, but primarily because the modular conversion schedule is outpacing the planned acquisition or funding. For example, the Army does not expect to meet its modular combat brigade requirements for Long-Range Advanced Scout Surveillance Systems, an advanced visual sensor that provides long-range surveillance capability to detect, recognize, and identify distant targets, until at least 2012. The Army decided that it cannot meet design requirements within its current budget for Force XXI Battle Command Brigade and Below (FBCB2), a battle command component that provides real-time situational awareness information through identification and tracking of friendly forces to control battlefield maneuvers and operations. Moreover, because it has been in full production for less than 2 years, FBCB2 production has not kept pace with the new higher modular force FBCB2 requirements. 
As a result, the Army plans to provide active heavy and infantry brigades with less than half of their design requirement for FBCB2 through at least 2007. The Army plans to meet only 85 percent of its requirements across the force for Single Channel Ground and Airborne Radio Systems, a command and control network radio system that provides voice and data communications capability in support of command and control operations, due to a funding decision. The Army’s design requirement for Shadow tactical unmanned aerial vehicle systems was to have one system composed of seven air vehicles per modular combat brigade, but because the Army lacks adequate numbers of air vehicle operators and maintainers, it decided to field the Shadow systems with four air vehicles instead. The Army’s schedule for the acquisition of Joint Network Node—a key communications system that provides secure high-speed computer network connection for data transmission down to the battalion level— could be delayed. According to Army officials, DOD recently decided to require the Army to have Joint Network Node undergo developmental and operational testing prior to further acquisition, which could delay equipping modular combat brigades. The systems discussed above are key to achieving the benefits Army officials expect to achieve with a modular force. For example, the Army decided to structure its new modular combat brigades with two maneuver battalions each instead of three battalions each, even though Army analysis showed that brigades with three maneuver battalions have several advantages and the Army’s former division-based brigades have three battalions. The Army’s decision to approve a brigade design with two maneuver battalions was made largely because of affordability concerns. 
However, the Army determined that brigades with two maneuver battalions could be as effective in combat as its division-based brigades provided they have the right mix of maneuver companies and enablers such as the systems discussed above. Until the Army is able to provide modular units with required quantities of these enablers, it is not clear whether the new brigades are as capable as the division-based brigades they are replacing. In addition to the challenges the Army faces in providing active component modular combat brigades the equipment necessary for meeting expected capabilities, the Army will face greater challenges meeting its equipping requirements for its 28 planned National Guard combat brigades. The Army’s modular force concept is intended to transform the National Guard from a strategic standby force to a force that is to be organized, staffed, and equipped comparable to active units for involvement in the full range of overseas operations. As such, National Guard combat units will enter into the Army’s new force rotational model in which, according to the Army’s plans, Guard units would be available for deployment 1 year out of 6 years. However, Guard units have previously been equipped at less than wartime readiness levels (often at 65 to 75 percent of requirements) under the assumption that there would be sufficient time for Guard forces to obtain additional equipment prior to deployment. Moreover, as of July 2005, the Army National Guard had transferred more than 101,000 pieces of equipment from nondeploying units to support Guard units’ deployments overseas. As we noted in our 2005 report on National Guard equipment readiness, National Guard Bureau officials estimated that the Guard’s nondeployed units had only about 34 percent of their essential warfighting equipment as of July 2005 and had exhausted inventories of 220 critical items. 
Although the Army says it will invest $21 billion in equipping and modernizing the Guard through 2011, Guard units will start their modular conversions with less, and much older, equipment than most active units. This will add to the challenge the Army faces in achieving its plans and timelines for equipping Guard units at levels comparable to active units and fully meeting equipping needs across both components. Moreover, the Army National Guard believes that even after the Army's planned investment, it will have to accept risk in certain equipment, such as tactical wheeled vehicles, aircraft, and force protection equipment. Because the Army realized that it would not have enough equipment in the near term to simultaneously equip all modular combat brigades at 100 percent of their requirements, it is developing a new equipping strategy as part of its force rotation model; however, this strategy is not yet complete because the Army has not finalized the strategy's equipping requirements or assessed the operational risk of not fully equipping all units. Under the force rotation model, the Army plans to provide increasing amounts of equipment to units as they move through training phases and near readiness for potential deployment, so that they would be ready to respond quickly, if needed, with fully equipped forces. The Army believes that, over time, equipping units in a rotational manner will enable it to better allocate available equipment and help manage risk associated with specific equipment shortages. Under this strategy, brigades will have three types of equipment sets: a baseline set, a training set, and a deployment set. The baseline set would vary by unit type and assigned mission, and the equipment it includes could be significantly reduced from the amounts the modular brigades are designed to have.
Training sets would include more of the equipment units will need to be ready for deployment, but units would share the equipment, which would be located at training sites throughout the country. The deployment set would include all equipment needed for deployment, including theater-specific equipment, high-priority items provided through operational needs statements, and equipment from Army prepositioned stock. With this rotational equipping approach, the Army believes it can have up to 14 active combat brigades and up to 5 Army National Guard combat brigades equipped and mission ready at any given time. While the Army has developed a general proposal to equip both active and Army National Guard units within the force rotation model, it has not yet fully developed the specific equipment requirements, including the types and quantities of items, required in each phase of the model. As of March 2006, the Army was still developing proposals for what would be included in the three equipment sets as well as the specific equipping requirements for units. Figure 2 shows the Army's three-phase force rotation model. The Reset/Train phase will include modular units that redeploy from long-term operations and are unable to sustain ready or available capability levels. The Ready phase will include those modular units that have been assessed as ready at designated capability levels, may be mobilized if required, and can be equipped if necessary to meet operational surge requirements. The Available phase will include those modular units that have been assessed as available at designated capability levels to conduct missions. In this last phase, active units are available for immediate deployment and reserve component units are available for mobilization, training, and validation for deployment. However, this strategy is not yet complete because the Army has not yet defined specific equipping requirements for units as they progress through the force rotation model.
Therefore, it is difficult to assess the risk associated with decreasing nondeploying units' readiness to perform other missions or the ability of units in the Reset/Train and Ready phases of the force rotation model to respond to an unforeseen conflict or crisis, if required. The Army has made some progress toward meeting modular personnel requirements in the active component, but faces significant challenges in achieving its modular restructuring without permanently increasing its active component end strength above 482,400, as specified by the QDR. The Army plans to increase the size of its modular combat force, but doing so without permanently increasing its overall end strength is an ambitious undertaking that will require the Army to eliminate or realign many positions in its noncombat force. While the Army is moving forward with its personnel reduction and realignment plans through a variety of initiatives, it is not clear to what extent the Army will be able to meet its overall end-strength goals and what risks it would face in meeting modular force personnel requirements if these goals are not met. We have found that strategic workforce planning is one of the tools that can help agencies develop strategies for effectively implementing challenging initiatives. Effective strategic workforce planning includes the development of strategies to monitor and evaluate progress toward achieving goals. Without information on the status and progress of its personnel initiatives, Congress and the Secretary of Defense lack the data necessary to identify challenges, monitor progress, and effectively address problems when they arise.

The Army accounts for its congressionally authorized active component personnel end strength in three broad categories—the operational combat force, the institutional noncombat force, and personnel who are temporarily unavailable for assignment.
The operational combat force consists of personnel who are assigned to deployable combat, combat support, and combat service support units; these include modular combat brigades and their supporting units such as logistics, medical, and administrative units. The Army's institutional noncombat force consists of personnel assigned to support and training command and headquarters units, which primarily provide management, administrative, training, and other support, and typically are not deployed for combat operations. This includes personnel assigned to the Department of the Army headquarters and major commands such as the Training and Doctrine Command. In addition, the Army separately accounts for personnel who are temporarily unavailable for their official duties, including personnel who are in transit between assignments, are temporarily unavailable because of sickness or injury, or are students undergoing training away from their units. The Army refers to these personnel as transients, transfers, holdees, and students. The Army plans to reduce its current temporary end-strength authorization of 512,400 to 482,400 by 2011 in order to help fund the Army's priority programs. Simultaneously, the Army plans to increase the number of soldiers in its operational combat force from its previous level of approximately 315,000 to 355,000 in order to meet the increased personnel requirements of its new, larger modular force structure. The Army plans to use several initiatives to reduce and realign its force with the aim of meeting these planned personnel levels. For example, the Army has converted some noncombat military positions into civilian positions, thereby freeing up soldiers to fill modular combat brigades' requirements. During fiscal year 2005, the Army converted approximately 8,000 military positions to civilian-staffed positions within the Army's noncombat force.
However, Army officials believe that additional conversions toward the 19,000 planned reductions in the noncombat force will be significantly more challenging to achieve. In addition to its success with the military-to-civilian conversions, the Army has been given statutory authority to reduce active personnel support to the National Guard and reserve by 1,500. However, the Army must still eliminate additional positions, including reducing transients, transfers, holdees, and student personnel, utilizing these and other initiatives, so it can reduce its overall end strength while filling requirements for modular units. As shown in table 4, the Army's goal is to reduce overall active component end strength from the current temporary authorization level while increasing the size of its operational combat force. While the Army is attempting to reduce end strength in its noncombat force and realign positions to the combat force via several initiatives, it may have difficulty meeting its expectations for some initiatives. For example, the Army expected that the Base Realignment and Closure (BRAC) decisions of 2005 could free up approximately 2,000 to 3,000 positions in its noncombat force, but the Army is revisiting this assumption based upon updated manpower levels at the commands and installations approved for closure and consolidation. Army officials believe they will be able to realign some positions from BRAC, but it is not clear whether the reductions will free up 2,000 to 3,000 military personnel who can be reassigned to modular combat units. In the same vein, Army officials expected to see reductions of several hundred base support staff resulting from restationing forces currently overseas back to garrisons within the United States. However, Army officials are still attempting to determine whether the actual savings will meet the original assumptions.
As a result, it is not clear to what extent the Army will be able to meet its overall end-strength goals and what risks exist if these goals are not met. Furthermore, the Army will face challenges in meeting its new modular force requirements for military intelligence specialists. The Army's new modular force structure significantly increases requirements for military intelligence specialists. In late 2005, Army intelligence officials told us that the modular force would require approximately 8,400 additional active component intelligence specialist positions, but the Army planned to fill only about 57 percent of these positions by 2013, in part because of efforts to reduce overall end strength. In May 2006, Army officials told us that the Army had completed its most recent Total Army Analysis (for fiscal years 2008–2013), which balances Army requirements within a projected end-strength authorization of 482,400. Accordingly, the Army revised its earlier estimate of intelligence specialist position requirements and determined that its increased active component requirement for intelligence specialists was only 5,600 and that it planned to fill all of these positions by 2013. However, Army officials acknowledge that meeting modular force requirements for intelligence specialists is a significant challenge because it will take a number of years to recruit and train intelligence soldiers. According to Army intelligence officials, intelligence capability has improved over that of the previous force; however, any shortfalls in filling intelligence requirements would further stress intelligence specialists with a high pace of deployments. Since intelligence is considered a key enabler of the modular design—a component of the new design's improved situational awareness—it is unclear to what extent any shortages in planned intelligence capacity will affect the overall capability of modular combat brigades.
Without continued, significant progress in meeting personnel requirements, the Army may need to accept increased risk in its ability to conduct operations and support its combat forces, or it may need to seek support for an end-strength increase from DOD and Congress.

While the Army has established overall objectives and time frames for modularity, it lacks a long-term comprehensive and transparent approach to effectively measure its progress against stated modularity objectives, assess the need for further changes to its modular unit designs, and monitor implementation plans. A comprehensive approach includes performance measures and a plan to test changes to the design of the modular combat brigades. The Army has not developed a comprehensive approach because senior leadership has focused attention on developing broad guidance and unit conversion plans for modularity while focusing less attention on developing ways to measure results. Without such an approach, neither the Secretary of Defense nor Congress will have full visibility into the capabilities of the modular force and the Army's implementation plans. While the Army has identified objectives for modularity, it has not developed modular-specific quantifiable goals or performance metrics to measure its progress. GAO and DOD, among others, have identified the importance of establishing objectives that can be translated into measurable, results-oriented metrics, which in turn provide accountability for results. In a 2003 report we found that the adoption of a results-oriented framework that clearly establishes performance goals and measures progress toward those goals was a key practice for implementing a successful transformation. DOD has also recognized the need to develop or refine metrics so it can measure efforts to implement the defense strategy and provide useful information to senior leadership. The Army considers the Army Campaign Plan to be a key document guiding the modular restructuring.
The plan provides broad guidelines for modularity and other program tasks across the entire Army. However, modularity-related metrics within the plan are limited to a schedule for creating modular units and an associated metric of achieving unit readiness goals for equipment, training, and personnel by certain dates after unit creation. Moreover, a 2005 assessment by the Office of Management and Budget identified the total number of brigades created as the only metric the Army had developed for measuring the success of its modularity initiative. Another key planning document, the 2005 Army Strategic Planning Guidance, identified several major expected advantages of modularity, including an increase in the combat power of the active component force by at least 30 percent, an increase in the rotational pool of ready units by at least 50 percent, the creation of a deployable joint-capable headquarters, the development of a force design upon which future network-centric developments can be readily applied, and reduced stress on the force through a more predictable deployment cycle. However, these goals have not translated into outcome-related metrics that are reported to provide decision makers a clear status of the modular restructuring as a whole. Army officials stated that unit-creation schedules and readiness levels are the best available metrics for assessing modularity progress because modularity is a reorganization encompassing hundreds of individual procurement programs that would be difficult to collectively assess in a modularity context. However, we believe that results-oriented performance measures with specific, objective indicators used to measure progress toward achieving goals are essential for restructuring organizations. A major Air Force transformation initiative may provide insights on how the Army could develop performance metrics for a widespread transformation of a military force.
In 1998, the Air Force adopted the Expeditionary Aerospace Force Concept as a way to help manage its deployments and commitments to theater commanders and reduce the deployment burden on its people. Like the Army's modular restructuring, the Air Force's restructuring was fundamental to the force, and according to the Air Force, represented the largest transformation of its processes since before the Cold War. In our 2000 report, we found that the Air Force expected to achieve important benefits from the Expeditionary Concept, but had yet to establish specific quantifiable goals for those benefits, which included increasing the level of deployment predictability for individual service members. We recommended that the Air Force develop specific quantifiable goals based on the Expeditionary Concept's broad objectives, and establish needed metrics to measure progress toward these goals. In a January 2001 report to Congress on the Expeditionary Aerospace Force Implementation, the Air Force identified 13 metrics to measure progress in six performance areas. For example, to better balance deployment taskings in order to provide relief to heavily tasked units, the Air Force developed four metrics, including one that measures active duty personnel available to meet Expeditionary Force requirements. The Air Force described each metric and assigned either a quantitative goal (such as a percentage) or a trend goal indicating the desired direction the metric should be moving over time. These results were briefed regularly to the Air Force Chief of Staff. The Army's transformation is more extensive than the Air Force's: the Air Force did not change traditional command and organizational structures under its Expeditionary Concept, whereas the Army's modular force has made extensive changes to these structures, and the Air Force did not plan for implementation costs anywhere near the Army's.
Nonetheless, we believe some of the goals and challenges faced by the Air Force that we reported in August 2000 may have relevance to the Army today. While we recognize the complexity of the Army's modular restructuring, without clear definitions of metrics and periodic communication of performance against those metrics, the Secretary of Defense and Congress will have difficulty assessing the impact of refinements and enhancements to the modular design (such as DOD's recent decision to reduce the number of modular combat and support brigades reported in the QDR), as well as any changes in resources available to meet modular design requirements. Since 2004, when the Army approved the original designs for its modular brigades, it has made some refinements to those designs but does not have a comprehensive plan for evaluating the effect of these design changes or the need for additional design changes as the Army gains more operational experience using modular brigades and integrating command and control headquarters, combat support units, and combat brigades. In fiscal year 2004, TRADOC's Analysis Center concluded that the modular combat brigade designs would be more capable than division-based units based on an integrated and iterative analysis employing computer-assisted exercises, subject matter experts, and senior observers. This analysis culminated in the approval of modular brigade-based designs for the Army. The assessment employed performance metrics such as mission accomplishment, units' organic lethality, and survivability, and compared the performance of variations on modular unit designs against the existing division-based designs. The report emphasized that the Chief of Staff of the Army had asked for "good enough" prototype designs that could be quickly implemented, and that the modular organizations assessed were not the end of the development effort.
Since these initial design assessments, the Army has been assessing implementation and making further adjustments in designs and implementation plans through a number of venues, including unit readiness reporting on personnel, equipment, and training; modular force coordination cells to assist units in the conversion process; modular force observation teams to collect lessons during training; and collection and analysis teams to assess units' effectiveness during deployment. Based on data collected and analyzed through these processes, TRADOC has approved some design change recommendations and has not approved others. For example, TRADOC analyzed a Department of the Army proposal to reduce the number of Long-Range Advanced Scout Surveillance Systems, but recommended retaining the higher number in the existing design, in part because of decreases in units' assessed lethality and survivability with the reduced number of surveillance systems. Army officials maintain that the ongoing assessments described above provide sufficient validation that the modularity concept works in practice. However, these assessments do not provide a comprehensive evaluation of the modular designs. Further, the Army does not plan to conduct a similar overarching analysis to assess the modular force's capabilities to perform operations across the full spectrum of potential conflict. In November 2005, we reported that methodically testing, exercising, and evaluating new doctrines and concepts is an important and established practice throughout the military, and that particularly large and complex issues may require long-term testing and evaluation guided by study plans. We believe the evolving nature of the design highlights the importance of planning for broad-based evaluations of the modular force to ensure the Army is achieving the capabilities it intended, and to provide an opportunity to make course corrections if needed.
For example, one controversial element of the design was the decision to include two maneuver battalions instead of three in the modular combat brigades. TRADOC's 2004 analysis noted that the modular combat brigade designs with the two-maneuver-battalion organization did not perform as well as the three-maneuver-battalion design, and cited this as one of the most significant areas of risk in the modular combat brigade design. Nonetheless, because of the significant additional cost of adding a third combat battalion, the Army decided on a two-battalion design for the modular combat brigades that included key enabling equipment such as communications, and surveillance and reconnaissance capabilities. Some defense experts, including a current division commander and several retired Army generals, have expressed concerns about this aspect of the modular design. In addition, some of these experts have expressed concerns about whether the current designs have been sufficiently tested and whether they provide the best mix of capabilities to conduct full-spectrum operations. The Army has also recently completed designs for support units and headquarters units. Once the Army gains more operational experience with the new modular units, it may find it needs to make further adjustments to its designs. Without a comprehensive testing plan, neither the Army nor congressional decision makers will be able to sufficiently assess the capabilities of the modular combat brigades as they are being organized, staffed, and equipped.

The fast pace, broad scope, and cost of the Army's effort to transform into a modular force present considerable challenges for the Army, and for Congress as well in effectively overseeing a force restructuring of this magnitude. The Army leadership has dedicated considerable attention, energy, and time to achieving its modularity goals under tight time frames.
However, the lack of clarity in equipment and personnel plans raises considerable uncertainty as to whether the Army can meet its goals within acceptable risk levels. For example, until the Army defines and communicates equipment requirements for all modular units and assesses the risk associated with its plan not to equip brigades with all of their intended capabilities, the extent to which its new modular combat brigades will be able to operate as stand-alone, self-sufficient units—a main goal of the Army's modular transformation—will remain unclear. With respect to personnel, the Army's goal of increasing its operational force without permanently increasing its current end strength will require it to make the most efficient use of its personnel. Until the Army communicates the status of its various ongoing personnel initiatives, its ability to meet the personnel requirements of its new modular force will also remain unclear. Finally, until the Army develops a long-term comprehensive approach for measuring progress and a plan for evaluating changes, it remains uncertain how the Army will determine whether it is achieving its goal of creating a more rapidly deployable, joint, expeditionary force. Without such an approach, and clearly defined and communicated plans, the Secretary of Defense and Congress will not have the information needed to weigh competing funding priorities and monitor the Army's progress in its over $52 billion effort to transform its force. We recommend that the Secretary of Defense direct the Secretary of the Army to take the following actions.
First, in order for decision makers to better assess the Army's strategy for equipping modular combat brigades, we recommend that the Army develop and provide the Secretary of Defense and Congress with

- details about the Army's equipping strategy, including the types and quantities of equipment active component and National Guard modular units would receive in each phase of the force rotation model, and how these amounts compare to design requirements for modular units; and

- an assessment of the operational risk associated with this equipping strategy.

Second, in order for decision makers to have the visibility needed to assess the Army's ability to meet the personnel requirements for its new modular operational forces while simultaneously managing the risk to its noncombat forces, we recommend that the Army develop and provide the Secretary of Defense and Congress with

- a report on the status of its personnel initiatives, including executable milestones for realigning and reducing its noncombat forces; and

- an assessment of how the Army will fully staff its modular operational combat force while managing the risk to its noncombat supporting force structure.

Third, to improve the information available to decision makers on the progress of the Army's modular force implementation plans, we recommend that the Army develop and provide the Secretary of Defense and Congress with a comprehensive plan for assessing the Army's progress toward achieving the benefits of modularity, to include

- specific, quantifiable performance metrics to measure progress toward meeting the goals and objectives established in the Army Campaign Plan; and

- plans and milestones for conducting further evaluation of modular unit designs that discuss the extent to which unit designs provide sufficient capabilities needed to execute National Defense Strategy and 2006 QDR objectives for addressing a wider range of both traditional and irregular security challenges.
Finally, the Secretary of the Army should provide a testing plan as part of its Army Campaign Plan that includes milestones for conducting comprehensive assessments of the modular force as it is being implemented so that decision makers—both inside and outside the Army—can assess the implications of changes to the Army force structure in terms of the goals of modular restructuring. The results of these assessments should be provided to Congress as part of the Army's justification for its annual budget through fiscal year 2011. Given the significant cost and far-reaching magnitude of the Army's plans for creating modular forces, Congress should consider requiring the Secretary of Defense to provide the information outlined in our recommendations, including details about the Army's equipping strategy and an assessment of the operational risk associated with this equipping strategy; the status of the Army's personnel initiatives and an assessment of how the Army will fully staff its modular operational combat force and manage the risk to its noncombat force structure; and the Army's plan for assessing its progress toward achieving the benefits of modularity, plans and milestones for conducting further evaluation of modular unit designs, and a testing plan for conducting comprehensive assessments of the modular force as it is being implemented. In written comments on a draft of this report provided by the Army on behalf of DOD, the department noted that the report adequately reflects the challenges associated with transforming the Army to modular force designs while at war, but stated that the report fails to recognize ongoing efforts and accomplishments to date. (DOD's comments are reprinted in app. II.) DOD also stated that citing the views of unnamed sources regarding the modular combat brigade design does not contribute to an accurate, balanced assessment of the Army's progress.
DOD agreed or partially agreed with our recommendations to develop and provide information on its equipping strategy and personnel initiatives and to develop expanded performance metrics for assessing progress. However, DOD disagreed with three recommendations regarding the need for risk assessments and a testing plan to further assess designs for modular units. As discussed below, because of the significance, cost, scope, and potential for risk associated with the Army’s modularity initiative, we continue to believe that more transparency of the Army’s plans and risk assessments is needed in light of the limited amount of information the Army has provided to Congress. Therefore, we have included a matter for congressional consideration to require the Secretary of Defense to provide more detailed plans and assessments of modularity risks. Our specific comments follow. First, we strongly disagree with DOD’s assertion that GAO used anonymous and unverifiable sources which detracted from an accurate and balanced assessment of the Army’s progress in implementing modularity. Our analysis of the Army’s progress and potential for risk in implementing modular units is primarily based on our independent and thorough analysis of Army plans, reports, briefings, and readiness assessments, which we used to compare the Army’s goals for modularity against its actual plans for equipping and staffing modular units. We sought views on modular unit designs to supplement our analysis from a diverse group of knowledgeable people both inside and outside the Army and DOD, including Army headquarters officials, division and brigade commanders, Army officials who played key roles in developing and assessing modular unit designs, and retired generals and defense experts who have studied and written about Army transformation. 
Our long-standing policy is not to include the names of individuals from whom we obtained information but to use information and evidence from appropriate and relevant sources and provide balance in our report. We integrated evidence and information from all sources to reach conclusions and formulate the recommendations included in this report. Our report recognizes the Army’s progress in implementing modular units while fully engaged in ongoing operations but also identifies and provides transparency regarding a number of risks inherent in the Army’s plans so that Congress will have better information with which to make decisions on funding and oversight. The discussion we present highlighting the concerns of some current and retired senior Army officers and defense experts regarding certain aspects of modular designs is used to illustrate the need for further evaluation of modular units as they move from concept to reality—an approach consistent with DOD policy and best practice in transforming defense capabilities. DOD also stated that the report inaccurately (1) asserts that Shadow tactical unmanned aerial vehicle systems will be fielded with fewer air vehicles due to a shortage of operators and maintainers, and (2) depicts the growth of Army Intelligence positions. We disagree with DOD’s assessment. As our report clearly points out, based on documentation obtained from the Army, the Army’s approved modular combat brigade design was for seven air vehicles per Shadow system, which would provide 24-hour per day aerial surveillance, but the Army opted to field Shadow systems with four air vehicles instead, primarily because it lacks adequate numbers of air vehicle operators and maintainers. 
Although the Army believes that Shadow systems with four air vehicles are adequate at this time, we believe it is important to provide transparency by presenting information which shows that modular combat brigades will not have all of the capabilities intended by the original modular combat brigade designs (i.e., brigade-level 24-hour per day surveillance operations) without Shadow systems composed of seven air vehicles. With regard to the number of intelligence positions, our report accurately notes that the Army decided to increase its intelligence positions by 5,600 in the active force. However, we also note that this was a revision of an earlier higher estimate of 8,400 positions projected by Army intelligence officials. Therefore, we do not agree with the department’s comment that the report inaccurately depicts the growth of Army intelligence positions, nor do we agree with its characterization that the report inappropriately focuses on the Army’s manning challenges. We believe that it is important for the Secretary of Defense and Congress to have a clear and transparent picture of the personnel challenges the Army faces in order to fully achieve the goals of modular restructuring and make informed decisions on resources and authorized end strength. DOD agreed with our recommendation that the Army develop and provide the Secretary of Defense and Congress with details about the Army’s equipping strategy. DOD commented that the Army recently completed development of the equipping strategy for modular forces and that the Army has conducted equipping conferences to ensure that soldiers have the best equipment available as they train and deploy. We requested a copy of the Army’s recently completed equipping strategy but did not receive a copy prior to publication and therefore have not been able to assess how and to what extent it meets the intent of our recommendation. 
Moreover, DOD did not indicate what, if any, actions it planned to take to provide Congress with specific details about the Army’s equipping strategy, as we recommended. Therefore, we have highlighted the need for more complete information on the Army’s equipping strategy in a matter for congressional consideration. DOD disagreed with our recommendation that the Army develop and provide the Secretary of Defense and Congress with an assessment of the risk associated with the Army’s rotational equipping strategy and said in its comments that this action is already occurring on a regular basis. Although the Army is considering risk in managing existing equipment, at the time of our review the Army had not finished developing its equipping strategy for its new rotational force model. Therefore, we continue to believe that the Army needs to document and provide risk assessments to Congress based on its newly completed equipping strategy. This is particularly important given other Army priorities such as the Future Combat System and near-term equipping needs for Iraq that will compete for funding and may cause changes to the Army’s current equipping strategy for modular units. DOD partially concurred with our recommendation that the Army develop and provide the Secretary of Defense and Congress with a report on the status of its personnel initiatives. However, DOD commented that adding another report on this issue would be duplicative and irrelevant and said this action is already occurring on a regular basis. While Army documents present an overview of how the Army is allocating military personnel to operational and nonoperational positions, they do not provide specific information on the Army’s progress in implementing personnel initiatives. Moreover, the department’s comments did not address whether the Army plans to provide additional information to Congress. 
We continue to believe that such information is needed by Congress to inform its decisions on Army personnel levels. DOD disagreed with our recommendation that the Army develop and provide the Secretary of Defense and Congress with a risk assessment of how the Army will fully staff its modular operational combat force while managing the risk to its noncombat supporting force structure. DOD commented that the Army provided the Office of the Secretary of Defense with a plan for reshaping the Army, including increasing the active operating force and downsizing overall active end strength by fiscal year 2011, based on several assumptions. However, this document, which Army officials provided to us, does not highlight potential risks in executing the Army’s plan. Moreover, DOD’s comments did not address the intent of our recommendation that the Army improve transparency by providing Congress with additional information on its plans and assessment of risk. DOD partially agreed with our recommendation that the Army develop and provide the Secretary of Defense and Congress with a comprehensive plan for assessing the Army’s progress toward achieving modularity goals and said the Army will explore the development of expanded performance metrics. However, DOD stated that plans and milestones for measuring progress are unwarranted as such evaluations occur continuously. We commend DOD for agreeing to develop expanded performance metrics. However, because of the cost and magnitude of the Army’s transformation plans, we continue to believe that developing and disseminating a comprehensive and formal evaluation plan are critical for providing transparency and accountability for results. As discussed in the report, the Army is collecting some data on the performance of modular units that attend training events and deploy overseas, but lacks a long-term comprehensive and transparent approach for integrating the results of these assessments to measure overall progress. 
Finally, DOD disagreed with our recommendation that the Secretary of Defense direct the Secretary of the Army to provide a testing plan that includes milestones for assessing modular unit designs as they are being implemented. DOD said the Army thoroughly evaluated modular force designs and continues to evaluate all facets of modular force performance both in training and combat operations. Nevertheless, we believe that the Army needs a more transparent, long-term, and comprehensive plan for evaluating the modular designs. The Army is still early in its implementation of modular support brigades and higher echelon command and control and support units and further evaluation of these designs based on actual experience may demonstrate that design refinements are needed. Furthermore, although the Army has gained some useful operational experience with modular combat units, this experience has been limited to stability operations and irregular warfare, rather than major combat operations or other operations across the full spectrum of potential conflict. To facilitate further assessment of unit designs, we have included this issue in our matter for congressional consideration. We are sending copies of this report to the Secretary of Defense, the Undersecretary of Defense (Comptroller), and the Secretary of the Army. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4402. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. To conduct our work for this engagement, we analyzed data, obtained and reviewed documentation, and interviewed officials from Headquarters, Department of Army; U.S. Army Training and Doctrine Command; U.S. 
Army Forces Command; and the U.S. Army Center for Army Analysis. We supplemented this information with visits to the first three Army divisions undergoing modular conversions—the 3rd and 4th Infantry Divisions and the 101st Airborne Division—to gain an understanding of the Army’s modular force implementation plans and progress in organizing, staffing, and equipping active modular combat brigades. To determine the Army’s modular force organizational design requirements and supporting analysis, we analyzed Department of the Army guidance for creating modular forces, and briefings and other documents on the Army’s modular force design and analytical process from the Training and Doctrine Command’s Analysis Center. To determine the Army’s progress and plans for equipping active component modular combat brigades, we analyzed Department of Army data on selected equipment that Army analysis identified as essential for achieving the modular combat brigades’ intended capabilities. For these selected items, we calculated the Army’s equipment requirements for active component modular combat brigades by multiplying equipment requirements obtained from the Department of the Army Office of the Deputy Chief of Staff for Operations and Training (G-3) for each of the three brigade variants— heavy, light, and Stryker—by the planned number of brigades in each variant. We then compared the sum of equipment requirements in the active component to data we obtained from officials from the Department of the Army G-8 on the expected on-hand levels of equipment and assessed the reliability of the data by discussing the results with knowledgeable officials. We determined that the data used were sufficiently reliable for our objectives. We also reviewed unit readiness reports from those brigades that had completed or were in the process of completing their modular conversion as of February 2006. 
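The requirement comparison described above reduces to simple arithmetic: multiply each variant's per-brigade requirement by the planned number of brigades, sum across variants, and compare the total against expected on-hand levels. A minimal sketch of that calculation follows; all figures are hypothetical placeholders, not drawn from Army G-3 or G-8 data.

```python
# Illustrative sketch of the requirement-versus-on-hand comparison.
# All numbers below are hypothetical, for illustration only.

# Per-brigade requirement for one selected equipment item, by brigade variant.
per_brigade_requirement = {"heavy": 60, "light": 45, "stryker": 30}

# Planned number of active component brigades in each variant (hypothetical).
planned_brigades = {"heavy": 19, "light": 20, "stryker": 9}

# Total requirement = sum over variants of (per-brigade requirement x brigade count).
total_requirement = sum(
    per_brigade_requirement[variant] * count
    for variant, count in planned_brigades.items()
)

# Expected on-hand level (hypothetical stand-in for G-8 data).
expected_on_hand = 1500

# Shortfall, if any, between the requirement and expected on-hand equipment.
shortfall = max(0, total_requirement - expected_on_hand)

print(total_requirement, expected_on_hand, shortfall)  # → 2310 1500 810
```

A real assessment would repeat this comparison for each selected equipment item and each fiscal year of the fielding plan.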
For our assessment of Army National Guard equipping challenges, we relied on past GAO reports and testimony. To determine the progress made and challenges to managing personnel requirements of the modular force, we reviewed documents and discussed the implications of force structure requirements with officials from the Department of Army Offices of the Deputy Chiefs of Staff for Personnel (G1) and Intelligence (G2). We also discussed key personnel-related concerns during our visits to the divisions undergoing modular conversion. To determine the Army’s strategies and plans for meeting its modular force personnel requirements without permanently increasing overall end strength, we interviewed officials from the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs and the Department of the Army Office of the Deputy Chief of Staff for Operations and Training (G3). We also reviewed the 2006 Quadrennial Defense Review as it pertained to Army personnel end strength, and the Army’s Future Year Defense Program and supplemental budget requests for fiscal years 2005 and 2006 to determine the Army’s personnel funding plans. To determine the extent to which the Army has developed an approach for assessing implementation of modularity and for further adjusting designs or implementation plans, we reviewed our prior work on assessing organizations undertaking significant reorganizations. We reviewed and analyzed the Army Campaign Plan and discussed it with officials in the Department of Army Headquarters, especially officials from the Deputy Chief of Staff for Operations and Training (G3). To analyze the Army’s approach for assessing the implementation of its modular conversion, we examined key Army planning documents and discussed objectives, performance metrics, and testing plans with appropriate officials in the Department of the Army Headquarters, and the Training and Doctrine Command’s Analysis Center. 
In addition, we met with a panel of retired senior Army general officers at the Association of the U.S. Army Institute of Land Warfare, Arlington, Virginia. We relied on past GAO reports assessing organizations undertaking significant reorganizations. We conducted our work from September 2004 through March 2006 in accordance with generally accepted government auditing standards. In addition to the person named above, Gwendolyn Jaffe, Assistant Director; Margaret Best; Alissa Czyz; Christopher Forys; Kevin Handley; Joah Iannotta; Harry Jobes; David Mayfield; Jason Venner; and J. Andrew Walker made major contributions to this report.
The Army considers its modular force transformation its most extensive restructuring since World War II. Restructuring units from a division-based force to a modular brigade-based force will require an investment of over $52 billion, including $41 billion for equipment, from fiscal year 2005 through fiscal year 2011, according to the Army. Because of broad congressional interest in this initiative, GAO prepared this report under the Comptroller General's authority and assessed (1) the Army's progress and plans for equipping modular combat brigades, (2) progress made and challenges to managing personnel requirements of the modular force, and (3) the extent to which the Army has developed an approach for assessing the results of its modular conversions and the need for further changes to designs or implementation plans. The Army is making progress in creating active and National Guard modular combat brigades while fully engaged in ongoing operations, but it is not meeting its equipping goals for active brigades and has not completed development of an equipping strategy for its new force rotation model. This raises uncertainty about the levels to which the modular brigades will be equipped both in the near and longer term as well as the ultimate equipping cost. The Army plans to employ a force rotation model in which units nearing deployment would receive required levels of equipment while nondeploying units would be maintained at lower readiness levels. However, because the Army has not completed key details of the equipping strategy--such as defining the specific equipping requirements for units in various phases of its force rotation model--it is unclear what level of equipment units will have, how this strategy may affect the Army's equipment funding plans, and how well units with low priority for equipment will be able to respond to unforeseen crises. 
While the Army has several initiatives under way to meet its modular force personnel requirements in the active component, it faces challenges in achieving its modular restructuring without permanently increasing its active component end strength above 482,400, as specified by the 2006 Quadrennial Defense Review. The Army plans to increase its active combat force, but doing so without permanently increasing its overall active end strength will require the Army to eliminate or realign many positions in its noncombat force. The Army has made some progress in reducing military personnel in noncombat positions by converting some to civilian positions and pursuing other initiatives, but Army officials believe future initiatives may be difficult to achieve and could lead to difficult trade-offs. Without reporting on the progress of these initiatives and the risks posed if the Army's goals are not met, Congress and the Secretary of Defense lack the information they need to understand the challenges and risks involved. Finally, the Army does not have a comprehensive and transparent approach to measure progress against its modularity objectives, assess the need for further changes to modular designs, and monitor implementation plans. While GAO and DOD have identified the importance of establishing objectives that can be translated into measurable metrics that in turn provide accountability for results, the Army has not established outcome-related metrics linked to most of its modularity objectives. Further, although the Army is analyzing lessons learned from Iraq and training events, the Army does not have a long-term comprehensive plan for further analysis and testing of its modular combat brigade designs and fielded capabilities. Without performance metrics and a comprehensive testing plan, neither the Secretary of Defense nor Congress will have full visibility into how the modular force is currently organized, staffed, and equipped. 
As a result, decision makers lack sufficient information to assess the capabilities, cost, and risks of the Army's modular force implementation plans.
For almost two decades, we have reported on pervasive and long-standing weaknesses in DOD’s business operations. In January 2009, we released our high-risk series update for the 111th Congress. This series emphasizes federal programs and operations that are at high risk because of vulnerabilities to fraud, waste, abuse, and mismanagement and has also evolved to draw attention to areas associated with broad-based transformation needed to achieve greater efficiency, effectiveness, and sustainability. Solutions to high-risk problems offer the potential to save billions of dollars, dramatically improve service to the public, strengthen confidence and trust in the performance and accountability of the U.S. government, and ensure the ability of government to deliver on its promises. Since our high-risk program began, the government has taken these problems seriously and has made progress toward correcting them. Of the 30 high-risk areas identified by GAO across the government, DOD bears sole responsibility for 8 high-risk areas, including weapon systems acquisition, and shares responsibility for 7 other high-risk areas (see table 1). In addition to monitoring these high-risk areas, we also monitor actions that DOD has taken in response to our findings, conclusions, and recommendations. During fiscal years 2001 through 2007, we issued 637 reports to DOD that included a total of 2,726 recommendations. In December 2008, we reported to this committee on the implementation status of these recommendations and related financial accomplishments. As of October 2008, 1,682 or 62 percent of the recommendations we made were reported as closed and implemented, 758 or 28 percent were open, and 286 or 10 percent were closed, but not implemented for a variety of reasons. 
Consistent with past experience that shows it takes agencies some time to implement recommendations, we found most recommendations from fiscal year 2001 have been implemented while most recommendations from fiscal year 2007 remain open. During this same period, we recorded over $89 billion in financial benefits associated with our work involving DOD. Besides financial accomplishments, our recommendations also produce many nonfinancial benefits and accomplishments, such as DOD actions taken to improve operations or management oversight. Both types of benefits result from our efforts to provide information to the Congress that helped to (1) change laws and regulations, (2) improve services to the public, and (3) promote sound agency and governmentwide management. For fiscal year 2007, 74 of our 313 recommendations to DOD were related to improving weapon system acquisition programs. In addition, for fiscal year 2007, we reported $2.6 billion in financial benefits related to weapon system acquisition programs. The financial benefits claimed result from the actions taken by Congress or DOD that are based on findings, conclusions, or recommendations contained in our products. Such actions include congressional reductions to the President’s annual budget requests, cost reductions due to greater efficiency, or cost reductions due to program cancellations or program delays. For example, the fiscal year 2007 budget request for the Army’s Future Combat System was reduced by $254 million based in part on our testimony about the program’s development risks. Over the next 5 years, DOD plans to spend more than $357 billion on the development and procurement of major defense acquisition programs. We will continue to seek to improve the efficiency and effectiveness of DOD’s weapon system investments through our work on individual programs and crosscutting areas that affect acquisition outcomes. 
Over the past several years our work has highlighted a number of underlying systemic causes for cost growth and schedule delays at both the strategic and program levels. At the strategic level, DOD’s processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems—which together define DOD’s overall weapon system investment strategy—are fragmented. As a result, DOD fails to effectively address joint warfighting needs and commits to more programs than it has resources for, thus creating unhealthy competition for funding. At the program level, a military service typically establishes and DOD approves a business case containing requirements that are not fully understood and cost and schedule estimates that are based on overly optimistic assumptions rather than on sufficient knowledge. Once a program begins, it too often moves forward with inadequate technology, design, testing, and manufacturing knowledge, making it impossible to successfully execute the program within established cost, schedule, and performance targets. Furthermore, DOD officials are rarely held accountable for poor decisions or poor program outcomes. At the strategic level, DOD largely continues to define warfighting needs and make investment decisions on a service-by-service and individual platform basis, using fragmented decision-making processes. This approach makes it difficult for the department to achieve a balanced mix of weapon systems that are affordable and feasible and that provide the best military value to the joint warfighter. In contrast, we have found that successful commercial enterprises use an integrated portfolio management approach to focus early investment decisions on products collectively at the enterprise level and ensure that there is a sound basis to justify the commitment of resources. 
By following a disciplined, integrated process—during which the relative pros and cons of competing product proposals are assessed based on strategic objectives, customer needs, and available resources, and where tough decisions about which investments to pursue and not to pursue are made—companies minimize duplication between business units, move away from organizational stovepipes, and effectively support each new development program. To be effective, integrated portfolio management must have strong, committed leadership; empowered portfolio managers; and accountability at all levels of the organization. DOD determines its capability needs through the Joint Capabilities and Integration Development System (JCIDS). While JCIDS provides a framework for reviewing and validating needs, it does not adequately prioritize those needs from a joint, departmentwide perspective and lacks the agility to meet changing warfighter demands. We recently reviewed JCIDS documentation related to new capability proposals and found that almost 70 percent were sponsored by the military services with little involvement from the joint community, including the combatant commands, which are responsible for planning and carrying out military operations. By continuing to rely on capability needs defined primarily by the services, DOD may be losing opportunities for improving joint warfighting capabilities and reducing the duplication of capabilities in some areas. The JCIDS process has also proven to be lengthy and cumbersome—taking on average up to 10 months to validate a need—thus undermining the department’s efforts to effectively respond to the needs of the warfighter, especially those needs that are near term. Furthermore, the vast majority of capability proposals that enter the JCIDS process are validated or approved without accounting for the resources or technologies that will be needed to acquire the desired capabilities. 
Ultimately, the process produces more demand for new weapon system programs than available resources can support. The funding of proposed programs takes place through a separate process, the department’s Planning, Programming, Budgeting, and Execution (PPBE) system, which is not fully synchronized with JCIDS. While JCIDS is a continuous, need-driven process that unfolds in response to capability proposals as they are submitted by sponsors, PPBE is a calendar-driven process comprising phases occurring over a 2-year cycle, which can lead to resource decisions for proposed programs that may occur several years later. We recently reviewed the effect of the PPBE process on major defense acquisition programs and found that the process does not produce an accurate picture of the department’s resource needs for weapon system programs. The cost of many of the programs we reviewed exceeded the funding levels planned for and reflected in the Future Years Defense Program (FYDP)—the department’s long-term investment strategy (see fig. 1). Rather than limit the number and size of programs or adjust requirements, DOD opts to push the real costs of programs to the future. With too many programs under way for the available resources and high cost growth occurring in many programs, the department must make up for funding shortfalls by shifting funds from one program to pay for another, reducing system capabilities, cutting procurement quantities, or in rare cases terminating programs. Such actions not only create instability in DOD’s weapon system portfolio, they further obscure the true future costs of current commitments, making it difficult to make informed investment decisions. At the program level, the key cause of poor outcomes is the approval of programs with business cases that contain inadequate knowledge about requirements and the resources—funding, time, technologies, and people—needed to execute them. 
Our work in best practices has found that an executable business case for a program demonstrates evidence that (1) the identified needs are real and necessary and that they can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources. Over the past several years, we have found no evidence of the widespread adoption of such an approach for major acquisition programs in the department. Our annual assessments of major weapon systems have consistently found that the vast majority of programs began system development without mature technologies and moved into system demonstration without design stability. The chief reason for these problems is the encouragement within the acquisition environment of overly ambitious and lengthy product developments that embody too many technical unknowns and not enough knowledge about the performance and production risks they entail. The knowledge gaps are largely the result of a lack of early and disciplined systems engineering analysis of a weapon system’s requirements prior to beginning system development. Systems engineering translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified through requirements analysis, design, and testing. Early systems engineering provides the knowledge a product developer needs to identify and resolve performance and resource gaps before product development begins by either reducing requirements, deferring them to the future, or increasing the estimated cost for the weapon system’s development. Because the government often does not perform the proper up-front requirements analysis to determine whether the program will meet its needs, significant contract cost increases can and do occur as the scope of the requirements changes or becomes better understood by the government and contractor. 
Not only does DOD not conduct disciplined systems engineering prior to the beginning of system development, it has allowed new requirements to be added well into the acquisition cycle. We have reported on the negative effect that poor systems engineering practices have had on several programs, such as the Global Hawk Unmanned Aircraft System, F-22A, Expeditionary Fighting Vehicle, and Joint Air-to-Surface Standoff Missile. With high levels of uncertainty about requirements, technologies, and design, program cost estimates and related funding needs are often understated, effectively setting programs up for cost and schedule growth. We recently assessed the service and independent cost estimates for 20 major weapon system programs and found that while the independent estimates were somewhat higher, both estimates were too low in most cases. In some of the programs we reviewed, cost estimates have been off by billions of dollars. For example, the Army’s initial cost estimate for the development of the Future Combat System (FCS) was about $20 billion, while DOD’s Cost Analysis and Improvement Group’s estimate was $27 billion. The department began the program using the $20 billion estimate, but development costs for the FCS are now estimated to be $28 billion and the program is still dealing with significant technical risk. Estimates this far off the mark do not provide the necessary foundation for sufficient funding commitments and realistic long-term planning. The programs we reviewed frequently lacked the knowledge needed to develop realistic cost estimates. For example, program Cost Analysis Requirements Description documents—used to build the program cost estimate—often lack sufficient detail about planned program content for developing sound cost estimates. Without this knowledge, cost estimators must rely heavily on parametric analysis and assumptions about system requirements, technologies, design maturity, and the time and funding needed. 
A cost estimate is then usually presented to decision makers as a single, or point, estimate that is expected to represent the most likely cost of the program but provides no information about the range of risk and uncertainty or level of confidence associated with the estimate. DOD’s requirements, resource allocation, and acquisition processes are led by different organizations, thus making it difficult to hold any one person or organization accountable for saying no to a proposed program or for ensuring that the department’s portfolio of programs is balanced. DOD’s 2006 Defense Acquisition Performance Assessment study observed that these processes are not connected organizationally at any level below the Deputy Secretary of Defense and concluded that this weak structure induces instability and inhibits accountability. Frequent turnover in leadership positions in the department exacerbates the problem. The average tenure, for example, of the Under Secretary of Defense for Acquisition, Technology and Logistics over the past 22 years has been only about 20 months. When DOD’s strategic processes fail to balance needs with resources and allow unsound, unexecutable programs to move forward, program managers cannot be held accountable when the programs they are handed already have a low probability of success. Program managers are also not empowered to make go or no-go decisions, have little control over funding, cannot veto new requirements, and have little authority over staffing. At the same time, program managers frequently change during a program’s development, making it difficult to hold them accountable for the business cases that they are entrusted to manage and deliver. DOD understands many of the problems that affect acquisition programs and has recently taken steps to remedy them. 
It has revised its acquisition policy and introduced several initiatives based in part on direction from Congress and recommendations from GAO that could provide a foundation for establishing sound, knowledge-based business cases for individual acquisition programs. However, to improve outcomes, DOD must ensure that its policy changes are consistently implemented and reflected in decisions on individual programs—not only new program starts but also ongoing programs. In the past, inconsistent implementation of existing policy has hindered DOD’s efforts to execute acquisition programs effectively. Moreover, while policy improvements are necessary, they may be insufficient unless the broader strategic issues associated with the department’s fragmented approach to managing its portfolio of weapon system investments are also addressed. In December 2008, DOD revised its policy governing major defense acquisition programs in ways intended to provide key department leaders with the knowledge needed to make informed decisions before a program starts and to maintain disciplined development once it begins. The revised policy recommends the completion of key systems engineering activities before the start of development, includes a requirement for early prototyping, and establishes review boards to evaluate the effect of potential requirements changes on ongoing programs. The policy also establishes early reviews for programs going through the pre–systems acquisition phase. In the past, DOD’s acquisition policy may have encouraged programs to rush into systems development without sufficient knowledge, in part because no formal milestone reviews were required before system development. If implemented, these policy changes could help programs replace risk with knowledge, thereby increasing the chances of developing weapon systems within cost and schedule targets while meeting user needs. 
As part of its strategy for enhancing the roles of program managers in major weapon system acquisitions, DOD has established a policy that requires formal agreements among program managers, their acquisition executives, and the user community setting forth common program goals. According to DOD, these agreements are intended to be binding and to detail the progress the program is expected to make during the year and the resources the program will be provided to reach these goals. DOD also requires program managers to sign tenure agreements so that their tenure will correspond to the next major milestone review closest to 4 years. DOD acknowledges that any actions taken to improve accountability must be based on a foundation whereby program managers can launch and manage programs toward successful performance, rather than focusing on maintaining support and funding for individual programs. DOD acquisition leaders have also stated that any improvements to program managers’ performance depend on the department’s ability to promote requirements and resource stability over weapon system investments. Over the past few years, DOD has also been testing portfolio management approaches in selected capability areas—command and control, net-centric operations, battlespace awareness, and logistics—to facilitate more strategic choices for resource allocation across programs. The department recently formalized the concept of capability portfolio management, issuing a directive in 2008 that established policy and assigned responsibilities for portfolio management. The directive established nine joint capability-area portfolios, each to be managed by civilian and military coleads. While the portfolios have no independent decision-making authority over requirements determination and resource allocation, according to some DOD officials, they provided key input and recommendations in this year’s budget process.
However, without portfolios in which managers have authority and control over resources, the department is at risk of continuing to develop and acquire systems in a stovepiped manner and of not knowing if its systems are being developed within available resources. A broad consensus exists that weapon system problems are serious and that their resolution is overdue. With the federal budget under increasing strain from the nation’s economic crisis and long-term fiscal challenges looming, the time for change is now. Achieving successful and lasting improvements in weapon program outcomes will require changes to the overall acquisition environment and the incentives that drive it. Acquisition problems are likely to persist until DOD’s approach to managing its weapon system portfolio (1) prioritizes needs with available resources, thus eliminating unhealthy competition for funding and the incentives for making programs look affordable when they are not; (2) ensures that programs that are started can be executed by matching requirements with resources; and (3) balances the near-term needs of the joint warfighter with the long-term need to modernize the force. Establishing a single point of accountability for managing DOD’s weapon system portfolio could help the department make these changes. Congress can also support change through its own decisions about whether to authorize and appropriate funds for individual weapon programs. From an acquisition policy perspective, DOD is off to a good start with its recent policy revisions. However, DOD could do more in this regard by requiring new programs to have manageable development cycles, requiring programs to establish knowledge-based cost and schedule estimates, and requiring contractors to perform detailed systems engineering analysis before proceeding to system development.
Limiting the length of development cycles would make it easier to more accurately estimate costs, predict future funding needs, effectively allocate resources, and hold decision makers accountable. DOD’s conventional acquisition process often requires as many as 10 or 15 years to get from program start to production as programs strive to provide revolutionary capability. Constraining cycle times to 5 or 6 years would force programs to adopt more realistic requirements and lend itself to fully funding programs to completion, thereby increasing stability and the likelihood that capability can be delivered to the warfighter within established time frames and available resources. Recently proposed acquisition reform legislation addresses some of these areas. Provisions increasing the emphasis on systems engineering, requiring early preliminary design reviews, and strengthening independent cost estimates and technology readiness assessments should make the critical front end of the acquisition process more disciplined. Establishing a termination criterion for critical cost breaches could help prevent the acceptance of unrealistic cost estimates at program initiation. Having greater combatant command involvement in determining requirements and greater consultation between the requirements, budget, and acquisition processes could help improve the department’s efforts to balance its portfolio of weapon system programs. However, while legislation and policy revisions may lead to improvements, they will not be effective without changes to the overall acquisition environment. The department has tough decisions to make about its weapon systems and portfolio, and stakeholders, including the DOD Comptroller, military services, industry, and Congress, have to play a constructive role in the process toward change. Change will also require strong leadership and accountability within the department. Mr. Chairman, this concludes my prepared statement.
I would be happy to answer any questions you may have at this time. For further information about this statement, please contact Michael J. Sullivan at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Ann Borseth, Dayna Foster, Matt Lea, Susan Neill, John Oppenheim, Ken Patton, Sharon Pickup, Ron Schwenn, Charlie Shivers, Bruce Thomas, and Alyssa Weir.

Defense Acquisitions: DOD Must Balance Its Needs with Available Resources and Follow an Incremental Approach to Acquiring Weapon Systems. GAO-09-413T. Washington, D.C.: March 3, 2009.

Defense Acquisitions: Perspectives on Potential Changes to DOD’s Acquisition Management Framework. GAO-09-295R. Washington, D.C.: February 27, 2009.

Defense Management: Actions Needed to Overcome Long-standing Challenges with Weapon Systems Acquisition and Service Contract Management. GAO-09-362T. Washington, D.C.: February 11, 2009.

Status of Recommendations to the Department of Defense (Fiscal Years 2001-2007). GAO-09-201R. Washington, D.C.: December 11, 2008.

Defense Acquisitions: Fundamental Changes Are Needed to Improve Weapon Program Outcomes. GAO-08-1159T. Washington, D.C.: September 25, 2008.

Defense Acquisitions: DOD’s Requirements Determination Process Has Not Been Effective in Prioritizing Joint Capabilities. GAO-08-1060. Washington, D.C.: September 25, 2008.

Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008.

Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.
Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD’s Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008.

Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD’s Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.

Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.

Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since fiscal year 2000, the Department of Defense (DOD) has significantly increased the number of major defense acquisition programs and its overall investment in them. However, acquisition outcomes have not improved. Over the next 5 years, DOD expects to invest $357 billion on the development and procurement of major defense acquisition programs and billions more on their operation and maintenance. Last year, we reported that the total acquisition cost of DOD's portfolio of major defense programs under development or in production has grown by $295 billion (in fiscal year 2008 dollars). In most cases, the programs we assessed failed to deliver capabilities when promised--often forcing warfighters to spend additional funds on maintaining legacy systems. Continued cost growth results in less funding being available for other DOD priorities and programs, while continued failure to deliver weapon systems on time delays providing critical capabilities to the warfighter. This testimony describes the systemic problems that have contributed to poor cost and schedule outcomes in DOD's acquisition of major weapon systems; recent actions DOD has taken to address these problems; and steps that Congress and DOD need to take to improve the future performance of DOD's major weapon programs. The testimony is drawn from GAO's body of work on DOD's acquisition, requirements, and funding processes. Since 1990, GAO has consistently designated DOD's management of its major weapon acquisitions as a high-risk area. A broad consensus exists that weapon system problems are serious, but efforts at reform have had limited effect. For several years, GAO's work has highlighted a number of strategic- and program-level causes for cost, schedule, and performance problems in DOD's weapon system programs. 
At the strategic level, DOD's processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems, which together define the department's overall weapon system investment strategy, are fragmented. As a result, DOD fails to balance the competing needs of the services with those of the joint warfighter and commits to more programs than resources can support. At the program level, DOD allows programs to begin development without a full understanding of requirements and the resources needed to execute them. The lack of early systems engineering, acceptance of unreliable cost estimates based on overly optimistic assumptions, failure to commit full funding, and the addition of new requirements well into the acquisition cycle all contribute to poor outcomes. Moreover, DOD officials are rarely held accountable for poor decisions or poor program outcomes. Recent changes to the DOD acquisition system could begin to improve weapon program outcomes. However, DOD must take additional actions to reinforce the initiatives in practice, including (1) making better decisions about which programs should be pursued or not pursued given existing and expected funding; (2) developing an analytical approach to better prioritize capability needs; (3) requiring new programs to have manageable development cycles; (4) requiring programs to establish knowledge-based cost and schedule estimates; and (5) requiring contractors to perform detailed systems engineering analysis before proceeding to system development. Recently proposed acquisition reform legislation addresses some of these areas. However, while legislation and policy revisions may lead to improvements, they will not be effective without changes to the overall acquisition environment.
DOD has tough decisions to make about its weapon systems portfolio, and stakeholders, including the DOD Comptroller, the military services, industry, and Congress, have to play a constructive role in the process of bringing balance to it.
A number of HUD’s large grant programs are subject to the AFFH requirement. These programs include the following: CDBG program: The CDBG program is authorized by Title I of the Housing and Community Development Act of 1974, as amended. The program provides annual grants to states, metropolitan areas, and urban counties to fund an extensive array of community development activities, such as providing decent housing and a suitable living environment and expanding economic opportunities that primarily benefit Americans of modest financial means. The CDBG program was funded at about $3.6 billion in fiscal year 2009, making CDBG the largest grant program, with the largest number of grantees. HOME program: The HOME program is the largest government-sponsored affordable housing production program. HOME was authorized under Title II of the Cranston-Gonzalez National Affordable Housing Act of 1990, as amended, and provides grants to states and localities, often in partnership with local nonprofit groups. These grants are used to fund a wide range of activities that build, buy, and rehabilitate affordable housing for rent or sale and provide direct rental assistance to low-income people. The program was funded at about $1.8 billion in fiscal year 2009. HOPWA program: HOPWA is intended to address the urgent housing needs of low-income Americans living with HIV/AIDS, who are disproportionately represented in low-income minority communities. HOPWA funds may be used for a wide range of housing, social services, program planning, and development costs. HOPWA funds also may be used for health care and mental health services, chemical dependency treatment, nutritional services, case management, assistance with daily living, and other supportive services. HOPWA was funded at about $310 million in fiscal year 2009. ESG program: The ESG provides homeless persons with basic shelter and essential supportive services.
It also provides short-term homeless prevention assistance to persons at imminent risk of losing their own housing due to eviction, foreclosure, or utility shutoffs. The ESG was funded at about $1.7 billion in fiscal year 2009. To help ensure that grantees receiving funds through the CDBG and other formula grant programs are meeting the AFFH requirements, HUD regulations require them to prepare and maintain AIs. HUD defines the AI as a comprehensive review of potential impediments and barriers to the right to be treated fairly when seeking housing. The AI is expected to cover public and private policies, practices, and procedures affecting housing choice and assess how they all affect the location, availability, and accessibility of housing. Grantees are also to develop strategies and actions to overcome these barriers based on history, circumstances, and experiences. In effect, the AI is a tool that is intended to serve as the basis for fair housing planning; provide essential information to policymakers, administrative staff, housing providers, lenders, and fair housing advocates; and assist in building public support for fair housing efforts. Grantees may use a portion of their CDBG and other grant funds to prepare their AIs, and AIs may be prepared by the grantees themselves or under contract with external parties, such as fair housing groups, consultants, universities, or others. While HUD regulations require grantees to prepare AIs, other requirements pertaining to these local planning documents are limited. For example, HUD has not issued regulations specifying how often grantees should update their AIs or the specific elements that should be included in them. HUD regulations also do not require grantees to submit their AIs to the department for review and approval.
Instead, CDBG and HOME grantees are to annually certify to HUD that they are meeting AFFH requirements, which include having prepared an AI, taking steps to address identified impediments, and maintaining records of their actions. HUD generally accepts grantees’ annual AFFH certifications, including that they have prepared AIs, and will not initiate further reviews unless evidence to the contrary emerges from complaints or through the department’s routine monitoring activities. While HUD has not issued regulations that specifically define when grantees must update their AIs or what elements they must include, it has issued recommended guidance on these subjects. As discussed in this report, for example, HUD has issued guidance recommending that grantees update their AIs every 3 to 5 years. In 1996, HUD also issued a fair housing guide, which included a suggested format for AIs and other important fair housing planning elements (see table 1). The format and elements include an introduction and executive summary; jurisdictional and background data, such as demographic data and analysis; and an evaluation of the jurisdiction’s current fair housing legal status, such as a listing and description of fair housing-related complaints that have been filed and their status or resolution. Further, the suggested format and other elements include a listing of the impediments identified, proposed actions and time frames to overcome them, and the signatures of top elected officials. Although HUD does not require grantees to submit their AIs to the department for review and approval, it does require them to periodically submit other reports on their overall use of CDBG and other grant program funds, such as HOME, HOPWA, and ESG. In some cases, HUD specifically requires that grantees include in these reports information about their AFFH activities.
These AFFH activity reports include the following: Consolidated Plan (ConPlan): The ConPlan, which grantees must file with HUD for review and approval every 5 years, is a planning document that identifies low- and moderate-income housing needs within a community and specifies how grantees intend to use federal funds to address those needs. According to HUD, the purpose of the ConPlan is to enable grantees to shape the various housing and community development programs into effective, coordinated neighborhood and community development strategies. Within their ConPlans, grantees are to provide their AFFH certifications annually, as described earlier. Annual Action Plan: Grantees are required to submit an Annual Action Plan. These annual plans lay out how the grantees plan to achieve the overall objectives in their consolidated plans in the coming fiscal year. Consolidated Annual Performance and Evaluation Report (CAPER): Within 90 days of the end of the program year, grantees that have approved ConPlans must file a CAPER with HUD, which reports on the progress they have made in carrying out the activities described in their Annual Action Plans. The CAPER must include, among other things, actions taken to affirmatively further fair housing. HUD is responsible for reviewing the accuracy of the CAPER. Because HUD does not require grantees to submit AIs, the CAPER serves as the main document that department officials use to learn about grantees’ fair housing activities and accomplishments. Two HUD offices share responsibility for overseeing CDBG and HOME grantees’ compliance with AFFH requirements, including those pertaining to their AIs: the Office of Community Planning and Development (CPD) and the Office of Fair Housing and Equal Opportunity (FHEO). CPD is responsible for helping ensure that grantees are in overall compliance with CDBG and other grant program requirements.
For example, CPD is responsible for ensuring that grantees spend federal funds on approved activities, such as affordable housing creation and community development. To carry out their oversight activities, CPD staff are to review and approve grantees’ ConPlans, Annual Action Plans, and CAPERs and conduct on-site monitoring reviews of a limited sample of high-risk grantees each year to assess their compliance with various program requirements, including those pertaining to AFFH. HUD has the authority to disapprove a ConPlan if the grantee’s AFFH certification is inaccurate or missing. Disapproval of a ConPlan may result in withholding CDBG and other formula grant funds until the grantee submits an adequate AFFH certification within an established time frame. While CPD serves as HUD’s main liaison with grantees, FHEO maintains final authority to determine and resolve matters involving fair housing compliance, including the AFFH requirement. In carrying out their responsibilities, FHEO staff may use the results of CPD’s oversight activities, such as its reviews of grantees’ reports and on-site monitoring reviews. FHEO staff also are to independently review grantee reports, such as their CAPERs, and may conduct on-site monitoring reviews of grantees on a limited basis. HUD maintains 10 regional offices and 81 field offices. CPD and FHEO staff are located in approximately 44 of the field offices, which have primary responsibility for monitoring and enforcing AFFH requirements, including those pertaining to AIs. Field and regional office staff generally consult with CPD and FHEO officials in HUD’s headquarters offices on key decisions and activities, including whether to disapprove a grantee’s ConPlan or AI. While we estimate that the majority of grantees have current AIs in accordance with HUD guidance, many other grantees’ AIs may be outdated under that guidance.
Specifically, we estimate that 29 percent of all AIs were prepared in 2004 or earlier, including 11 percent that date from the 1990s. Because many grantees’ AIs are outdated, they may not provide a reliable basis to identify and mitigate current impediments to fair housing that may exist within their communities. We also found that (1) 25 grantees did not provide their AIs despite repeated requests, which suggests that, in some cases, grantees may not maintain the documents as required; and (2) several grantees provided documents whose status as AIs was unclear because of their brevity and lack of content. While the majority of grantees may have current AIs, we question the usefulness of many such AIs as fair housing planning documents. We reviewed a subset of current AIs we received (those dating from 2005 through 2010) for a variety of reasons, including to gain insights into the types of impediments they identified and to determine whether they included the key elements identified by HUD in its 1996 fair housing guidance. The most commonly cited impediments to fair housing choice were zoning restrictions, inadequate public services in low- and moderate-income areas, lending discrimination, and a lack of public awareness about fair housing rights. Further, we found that current AIs generally contained several basic elements suggested in HUD’s guidance, such as demographic data and analysis, and recommendations to overcome identified impediments. However, a significant majority of the current AIs did not identify time frames for implementing the recommendations or contain the signatures of top elected officials as is also suggested in HUD’s guidance. As a result, these AIs may not provide a reliable basis for measuring the grantees’ progress in overcoming impediments or reasonable assurance that top elected officials endorse the recommendations in the AI and are accountable for implementing them.
In sum, our review found limited assurances that grantees are placing needed emphasis on preparing AIs as effective planning tools to identify and address potential impediments to fair housing as required by statutes governing the CDBG and HOME programs and HUD regulations and guidance. We estimate that while 64 percent of all grantees have current AIs, 29 percent may be outdated, having been prepared in 2004 or earlier (including 11 percent from the 1990s), and the date for 6 percent could not be determined (fig. 1). While HUD has not officially defined what constitutes an outdated AI through regulation, it has issued guidance that addresses how often an AI should be updated. Using HUD’s guidance and interviews with department officials as criteria, we define an AI as outdated if it was completed in 2004 or earlier. Specifically, HUD’s 1996 Fair Housing Planning Guide—the main reference document for grantees in developing AIs—suggests that grantees conduct or update them at least every 3 to 5 years, in part to be consistent with the consolidated planning cycle. On February 14, 2000, and again on September 2, 2004, HUD issued memorandums to all CPD and FHEO officials to remind grantees to update their AIs annually when necessary, but especially at the beginning of a new consolidated 5-year planning cycle. In addition, HUD’s 2009 study on grantees’ AI conformance concluded that grantees with AIs dating from the 1990s may place a low priority on them and that such AIs should be updated. The incidence of outdated AIs was generally consistent across the country and among both large and small grantees. Figure 2 shows the percentage of grantees with outdated AIs in each of the geographic areas covered by HUD’s 10 regional offices, which ranged from a low of 14 percent in Region IV to a high of 45 percent in Region VII.
Despite the variation, only one region had a statistically significant difference between its percentage of outdated AIs and the percentage at the national level. HUD’s 2009 internal study on AI conformance also concluded that many grantees’ AIs were outdated. Specifically, in examining the timeliness of 45 AIs in its sample, HUD found that about 18 percent (8) were produced before 2000 and had not yet been updated. While the HUD study provides some insights into the AIs, its findings cannot be generalized to the entire population because of limitations in its sampling methodology. Because many grantees’ AIs are outdated, they may not provide a reliable basis for identifying and mitigating impediments to fair housing. For example, HUD’s 1996 fair housing guidance suggests that grantees use demographic data from the U.S. Census Bureau when preparing their AIs and update these planning documents as new census data becomes available. Updated census data could indicate demographic trends within a jurisdiction that might be useful in preparing an AI, such as whether particular areas of a jurisdiction are becoming progressively more or less segregated over time and the potential reasons for those trends. Moreover, according to one FHEO field office official, grantees should update their AIs every 5 years per the guidance, because the impediments to fair housing in a particular community evolve and change, and new issues can occur on a continuing basis. Another FHEO field office official said that an AI dating from the 1990s would not be considered current under any circumstances. The official said that grantees should update their AIs periodically to adjust to the development of potential impediments to fair housing choice within their communities.
For example, the official noted that subprime mortgage lending grew substantially during the 2000s, and subprime mortgage lenders may have disproportionately targeted minority borrowers, which resulted in many foreclosures among such groups. Unless grantees take steps to update their AIs, it is unclear whether those that receive federal funds through the CDBG, HOME, and other grant programs are sufficiently focused on overcoming current impediments to fair housing that may exist within their communities. We did not receive AIs from 25 grantees, despite intensive follow-up efforts that included multiple e-mails and phone calls to appropriate officials. Representatives from some of these grantees offered several reasons for not providing the requested AIs. For example, representatives from two grantees said that they could not find their AIs, and a representative from another said an AI had not been prepared. Further, representatives from eight grantees stated that they had already sent us their AIs, although we have no record of receiving them. We cannot definitively determine that all these grantees are out of compliance with statutes and HUD regulations requiring grantees to maintain AIs. However, the failure of these grantees to provide AIs, together with the results of HUD’s 2009 study that also found that some grantees did not provide AIs as requested, raises questions about whether some jurisdictions may be receiving federal funds without preparing the documents required to demonstrate that they have taken steps to affirmatively further fair housing. Our analysis of the 441 AIs we received from grantees also indicates that the documents ranged in length from several hundred pages with supporting graphs and other materials to a few pages of content.
For example, one grantee’s AI contained 64 tables illustrating a wide variety of information, ranging from a breakdown of the grantee’s population by race, ethnicity, and poverty status to the rates at which low-income applicants in the grantee’s jurisdiction were denied conventional loans. While AIs may consist of many pages, length does not necessarily indicate the quality of these documents. For example, some lengthy AIs we reviewed had reports attached, including CAPERs, that grantees were required to submit to HUD separately. On the other end of the spectrum, we identified five documents whose status as AIs was unclear based on their brevity and limited content. Specific examples are as follows:
One grantee provided a four-page survey of residents within the community on fair housing issues.
One grantee provided a two-page document that largely discussed its progress in implementing a local statute pertaining to community preservation and that contained two sentences describing a fair housing impediment.
One grantee provided a three-page document that contained descriptions of activities designed to help the homeless and other special needs groups and described the actions that the grantee took to address barriers to affordable housing.
One grantee provided a four-page description of the community itself that did not identify impediments to fair housing.
One grantee provided a two-page e-mail that identified one impediment to fair housing choice; in follow-up conversations, an official from this grantee confirmed that the document constituted its AI.
Given the brevity and lack of content in these documents, they may not constitute AIs as required by the CDBG and HOME statutes and HUD regulations. While many AIs are outdated or, in some cases, grantees may not maintain the documents as required by HUD, we estimate that 64 percent of grantees have prepared current AIs.
To gain insights into current AIs, we reviewed a subset of 60 of the 281 such documents that we received for several purposes, including determining their authorship. HUD’s 1996 guidance suggests that AIs may be authored by grantees, fair housing or industry groups, universities or colleges, or any combination thereof. Our analysis indicates that grantees, through their community development and planning offices, for example, had prepared about half of the 60 current AIs we reviewed (table 2). In 9 of the 60 cases we reviewed, the grantee had contracted with a fair housing organization; in 9 cases, with a private consulting firm; and in 4 cases, with a university or college. In about 6 of the cases, the AI did not identify the author or the author’s identity was unclear. We also reviewed 30 of the 60 current AIs to identify the types of impediments to fair housing choice that the grantees had identified. While these grantees cited a variety of potential impediments in their AIs, at least half identified four types of impediments (table 3): (1) zoning and site selection, (2) neighborhood revitalization, (3) lending policies and practices, and (4) fair housing informational programs. We also identified specific examples of each of these four impediments and the grantees’ planned actions to address them as described here. Zoning and site selection. One AI we reviewed, which was prepared on behalf of several grantees by a regional planning unit within a local university, identified some of their established or planned land use policies as potential impediments to fair housing. For example, the AI found that some of the grantees had minimum lot-size requirements for building single-family residences that could limit housing affordability. The AI noted that a 1-acre minimum lot size, for example, would create land costs that would make owning or renting homes on such lots unaffordable to low-income families.
Further, the AI noted that some of the grantees were considering requiring that all new homes be constructed of brick, a requirement that could substantially increase construction costs compared with siding and make the homes unaffordable to low-income families. To address these potential impediments, the AI recommended that the grantees (1) ensure that a sufficient portion of their communities were zoned for multifamily construction and that lot sizes for single-family housing were small enough to keep single-family housing affordable and (2) consider the potential impact on housing affordability, including on minority families, before adopting building codes that require all-brick construction. Neighborhood revitalization, municipal and other services, employment-housing transportation linkage. One AI prepared by a private nonprofit fair housing organization on behalf of several grantees noted that the area’s transportation system was inadequate to service the needs of all residents. The AI concluded that residents who wanted or needed to use public transportation were obliged to limit their residences to the jurisdictions in which their jobs were located even if they wanted to live elsewhere. To address this impediment, the AI recommended that the grantees support a regional transportation system that not only provided services to low- and moderate-income households throughout the area but also met the needs of employers in geographic areas that were not currently served. Lending policies and practices. An AI for a large county comprising several grantees included a review of data required under the Home Mortgage Disclosure Act (HMDA). The analysis found that mortgage lenders in the jurisdiction denied the applications of upper-income black applicants at a rate that was three times higher than the rate for equally situated white applicants.
Further, the AI found that the loan denial rate for Hispanic mortgage applicants was twice as high as that of equally situated white applicants. The AI recommended that the grantees contract with a consultant to prepare and conduct training for mortgage lenders to encourage their voluntary compliance with fair housing laws. Finally, the AI recommended that the grantees continue to monitor HMDA data to determine if the educational programs had a positive effect on loan denial rates for minorities. Fair housing informational programs. An AI for a county concluded that, because the grantee received few complaints from residents about fair housing, there might be a lack of public knowledge about fair housing rights and responsibilities. The AI suggested that because residents might not be aware of such issues, landlords and others involved in the real estate business could feel that they had more leeway in dealing with potential home buyers and renters. As a result, the AI recommended that the grantee promote fair housing education through public workshops, presentations at schools and libraries, public service announcements in English and Spanish, and the distribution of fair housing literature at all county facilities and events. While current AIs may identify a variety of potential impediments to fair housing and strategies to overcome them, questions exist about the status of many such AIs as local planning documents. As part of our review, we found that the 60 current AIs generally included five of the seven key elements in the suggested format for AIs contained in HUD’s 1996 fair housing guidance (table 4). Specifically, four of these elements were present in over 55 of the 60 grantees’ AIs: jurisdictional background data, evaluation of fair housing legal status, identifications of impediments to fair housing choice, and conclusions and recommendations. The introduction and executive summary were present in 52 of the grantees’ AIs.
However, we found that only 12 of the AIs included time frames for implementing recommendations for overcoming impediments and that only 8 AIs included the signatures of top elected officials. The lack of time frames for implementing proposed actions among a substantial majority (48 of the 60 current AIs we reviewed) is potentially significant. HUD’s guidance on including estimated time frames for implementing recommendations is generally consistent with our view that time frames are an important component of effective strategic and other planning processes. Recognizing the importance of specific time frames, officials from one HUD field office we contacted routinely provided technical advisory notices to aid grantees that were preparing or updating their AIs. These notices recommend, among other things, that grantees include benchmarks and timetables for implementing actions in their AIs. In the absence of established time frames in AIs, determining whether grantees are achieving progress in implementing recommendations to overcome identified impediments is difficult. Further, our finding that 52 of the 60 current AIs did not include the signatures of top elected officials raises questions as to whether the officials endorse the analyses and support the suggested actions in the AIs and are accountable for implementing them. These questions may be particularly significant with respect to the 25 AIs identified in table 2 that were prepared by an external party under contract, such as a fair housing group or consultant, rather than by the grantee through one of its agencies. Our review indicates that none of these 25 AIs had been signed by the grantees’ top elected officials. While HUD field office officials we contacted said the lack of these signatures did not necessarily mean that the grantees did not plan to implement the actions described in the documents, other HUD officials disagreed.
For example, officials from two field offices said that the lack of such signatures suggested that the grantees may not endorse the analysis and recommendations in the AIs. Officials from one of these field offices said that, in the absence of the signature of a top elected official, an AI had little value as a planning document. We note that HUD requires authorizing grantee officials to sign documents certifying, among other things, that their ConPlans identify community development and housing needs and contain specific short- and long-term objectives to address those needs, and that the grantee is following its department-approved plan. This is not an uncommon accountability model for compliance-based regulatory structures. Without the signatures of top elected officials, it is not clear that grantees have established plans to identify and address impediments to fair housing within their jurisdictions. We identified an example of an AI that lacked time frames for addressing impediments, had not been signed by a top elected official, and did not appear to be functioning as an effective planning document. This AI, which was prepared under contract by a fair housing group, had 15 specific recommendations, including conducting fair lending “testing” and establishing an effective code enforcement program. We contacted the CDBG representative for this grantee, who said that he believed that the AI contained just two recommendations. The official also stated that, due to other priorities, the grantee had not yet had time to implement either of these two recommendations and did not have immediate plans to do so. HUD’s 2009 study also raises questions about the usefulness of many AIs as planning documents to identify and address potential impediments to fair housing. As discussed previously, this study concluded that many of the AIs in its sample dated from the 1990s, which HUD said indicated that these grantees place a low priority on the documents.
According to the HUD study, moreover, many of the AIs reviewed did not conform to the department’s guidance and appeared to have been prepared in a cursory fashion. In sum, our findings that many AIs are outdated, may not be prepared as required, or lack time frames and signatures, together with the findings of HUD’s study, raise significant questions as to whether the AI is effectively serving as a tool to help ensure that all grantees are committed to identifying and overcoming potential impediments to fair housing choice as required by statutes governing the CDBG and HOME programs and HUD regulations. HUD’s AI requirements and oversight and enforcement approaches have significant limitations that likely contribute to our findings that many such documents are outdated or contain other weaknesses. In particular, HUD’s regulations have not established standards for updating AIs or the format that they must follow, and grantees are not required to submit their AIs to the department for review. According to HUD’s 2009 internal report on AI compliance and CPD and FHEO officials, the limited regulatory requirements pertaining to AIs and limited resources and competing priorities adversely affect the department’s capacity to help ensure the effectiveness of AIs as fair housing planning documents. Moreover, our work involving 10 HUD field offices identified specific instances that illustrate the limitations in the department’s AI oversight and enforcement approaches and the need for corrective actions. For example, we found that HUD officials rarely request grantees’ AIs during on-site monitoring reviews or receive complaints from the public about such documents, leaving the department with minimal information about the timeliness and content of grantees’ AIs.
Conversely, while we identified instances where certain field offices took proactive steps to help ensure the integrity of the AI process, such as one office’s efforts to better ensure that grantees update their AIs periodically, these initiatives were not common. Recognizing the limitations in its AI requirements and oversight and enforcement approaches, in 2009, HUD initiated a process to update relevant regulations, but it is not clear what issues any revised regulatory requirements will address or when they will be completed. HUD has also developed plans to address limited staffing resources that may have undermined its capacity to oversee grantees’ AIs or implement any new regulatory initiatives, but it is unclear how effective the initiatives will be. We note that some proposals that have been made, such as a requirement that grantees submit their AIs to HUD for review, would not necessarily involve a significant commitment of staff resources and could have important benefits. In the absence of a department-wide initiative to strengthen AI requirements and oversight and enforcement, many grantees may place a low priority on ensuring that their AIs serve as effective planning tools. As discussed previously, HUD’s regulatory requirements pertaining to AIs are limited. While HUD regulations require grantees to prepare AIs, they do not specify when grantees must update them or the specific format they must follow. Moreover, HUD’s regulations do not require grantees to submit their AIs to the department on a routine basis for review to help ensure their effectiveness as a tool to identify and address impediments to fair housing. Instead, pursuant to statutes governing the CDBG and HOME program, grantees are required to annually self-certify by attesting to HUD that they are in compliance with the department’s AFFH requirements, including those pertaining to the AI. 
Specifically, the self-certification, which is generally a one-page document, attests that the grantee has completed an AI, has taken steps to overcome the impediments identified in the AI, and maintains records of its efforts. In general, HUD officials, pursuant to department regulations, are to accept these self-certifications as sufficient evidence that the grantee has an AI and is acting to implement its recommendations. While HUD does not require grantees to submit their AIs on a routine basis, CPD and FHEO officials, who share AFFH oversight and enforcement responsibilities, use several approaches to monitor grantees’ overall compliance with AFFH requirements, including those requirements that pertain to AIs. Specifically, these approaches, while limited, can involve CPD or FHEO officials obtaining grantees’ AIs and following up as may be deemed necessary. The following efforts describe how HUD generally carries out these responsibilities:
Reviews of grantee reports and plans that, among other things, are to address AFFH compliance. HUD CPD and FHEO officials are to regularly review documents that grantees annually submit on their overall plans and performance in complying with CDBG and other grant program requirements. For example, at the end of the fiscal year, grantees are required to submit their CAPER to HUD, which discusses their progress in meeting their objectives for the use of CDBG and other grant funds. As part of the CAPER, HUD requires grantees to include a description of actions taken to AFFH. If determined necessary by a HUD reviewer of either the Annual Action Plan or the CAPER, the department could request that a grantee provide its AI for review and analysis.
On-site monitoring reviews to assess grantee compliance with HUD requirements, which can include reviews of AFFH documentation, such as AIs.
Under HUD policy, CPD field officials are to conduct a limited number of risk-based, on-site monitoring reviews each year to assess grantees’ compliance with a variety of CDBG and other grant requirements. In some cases, FHEO officials may join CPD officials on these monitoring reviews or conduct independent monitoring reviews. HUD headquarters annually establishes criteria for assessing risk and the percentage of on-site monitoring reviews to be conducted. The criteria can include the amount of the CDBG or HOME grant, the amount of time that has passed since the last on-site review, and employee turnover in grantee offices responsible for implementation of CDBG and other grant programs. CPD officials generally were directed to visit at least 10 to 15 percent of the grantees under their jurisdiction annually. As part of these reviews, CPD may request that grantees provide copies of their most recent AIs. CPD staff or FHEO staff may review these AIs and follow up with the grantees where deemed warranted.
Reviews as part of a complaint. HUD may also receive complaints about grantees’ AFFH compliance from a range of sources, including individuals, fair housing groups, or federal, state, or local agencies. In conducting investigations in response to such complaints, CPD or FHEO staff may request that grantees provide their AIs as deemed appropriate. If HUD officials identify concerns with grantees’ AIs through these processes, they can take several different actions.
These actions include technical assistance, such as training workshops, to help a grantee complete an AI; a “Special Assurance” document, which HUD may draft to outline tasks that a grantee must complete to fulfill requirements, including describing actions to overcome the effects of identified impediments and creating a timetable for accomplishing these actions (these assurances usually are signed by the grantee’s chief elected official to signify cooperation); and withholding CDBG and HOME funding by disapproving a grantee’s ConPlan for failure to comply with requirements, including completion of an AI. According to HUD officials, this last action is a last resort and rarely used. HUD’s 2009 internal study, CPD and FHEO officials in headquarters and field offices, and our own analysis involving 10 field offices have identified significant limitations in the department’s long-standing AI requirements and oversight and enforcement approaches. HUD officials also cited staffing resource constraints as undermining their oversight capacity and ability to implement corrective measures. HUD’s 2009 study concluded that the department’s limited AI regulatory requirements and oversight processes contributed to the study’s findings that many AIs were outdated or otherwise did not conform with the department’s 1996 fair housing guidance. To better ensure that grantees conform with HUD guidance, the report suggested requiring grantees to submit their AIs for review and approval. The report noted that, because grantees are not currently required to submit AIs to HUD, a possible first step could simply be to implement a submission requirement. However, the report also noted that HUD would have to dedicate sufficient resources to conduct reviews of AIs and develop appropriate criteria for assessing them.
The study suggested that HUD consider both (1) requiring grantees to post their AIs on the Internet and (2) compiling all submitted AIs at a single online clearinghouse Web site to enhance transparency and increase public awareness of the documents. Further, the study suggested that HUD update its fair housing guidance and provide additional technical assistance to grantees to help ensure they prepare more effective AIs. For example, the study suggested that HUD assist grantees in obtaining the data necessary to prepare AIs and provide relevant training. However, HUD officials said the department has not yet acted to implement the recommendations in the study. While HUD has not yet acted on the recommendations in the 2009 internal study, senior headquarters officials cited limited regulatory requirements as adversely affecting oversight efforts. In the absence of specific regulatory requirements, CPD officials said it is difficult for field offices to ensure that grantees update their AIs within specified time frames or conform to a specific format in preparing the documents, including obtaining the signatures of top elected officials. In contrast, the CPD staff noted that there are specific regulatory requirements pertaining to grantees’ ConPlans, Annual Action Plans, and CAPERs, including when these documents must be prepared and what they must include, which facilitates oversight efforts. Additionally, CPD and FHEO officials in HUD’s headquarters cited limited resources and competing regulatory priorities as limiting oversight of grantees’ AIs and AFFH compliance generally and potentially posing challenges to any new regulatory initiatives. CPD officials said that obtaining and reviewing grantees’ AIs is a low priority for field office staff due to competing demands and limited resources, and that additional resources and technical expertise would be required for staff to review and approve AIs as suggested in the department’s internal study.
CPD and FHEO officials from the 10 HUD field offices we contacted also commented about limited AFFH regulatory requirements and oversight approaches. For example, officials from 7 of the 10 field offices told us that, because grantees are not required to submit their AIs, verifying whether grantees had AIs or had updated them was difficult. Some field office officials also said that, because grantees are not required to submit their AIs, their capacity to assess grantees’ overall compliance with AFFH requirements is limited. For example, without requiring grantees to submit their AIs, the officials said that they could not verify whether the potential impediments to fair housing choice that may be cited in other documents, such as the grantees’ 5-year ConPlans and CAPERs, were the same impediments listed in their AIs. An official from one field office suggested that HUD require grantees to submit their AIs as part of their 5-year consolidated plans to enable staff to verify that the two documents were consistent. Officials from several field offices also recommended that HUD revise its regulations to require that AIs meet certain standards for timeliness and completeness, which they said would enhance their abilities to oversee and enforce the program. Further, CPD and FHEO field office officials agreed with HUD headquarters officials that declining resources and competing priorities had limited their ability to assess grantees’ AIs or AFFH compliance generally. Representatives from all 10 of the HUD offices we contacted said that their staff levels had decreased recently while their workload had increased, especially with the implementation of the American Recovery and Reinvestment Act of 2009 (Recovery Act). For example, FHEO and CPD officials in several field offices told us that they were losing staff due to retirements and promotions.
One FHEO field office director said that the office had only one official available to monitor 54 entitlement grantees’ compliance with all relevant statutes and regulations, including those pertaining to AFFH, within the office’s jurisdiction. Additionally, CPD and FHEO officials in one office commented that at one time they had enough staff to regularly send both a CPD and an FHEO representative on on-site monitoring reviews, but with staff reductions over the years, they are no longer able to continue this practice. Field office officials also stated that work priorities were often shifting, making it difficult for them to consistently focus on one aspect of CDBG and other grant program compliance. We identified the following specific examples that illustrate how HUD’s limited nationwide oversight and enforcement program may have contributed to many AIs being outdated or otherwise not in full conformance with department guidance, and that further support the need for corrective actions to address these limitations:
Our review of CAPERs for a group of grantees with outdated AIs raises questions about the value of such reports as a means of assessing AFFH compliance. As discussed previously, HUD’s annual reviews of CAPERs and other required grantee reports are a key means by which department officials assess AFFH compliance in the absence of a requirement that grantees routinely submit their AIs for review. We selected a nongeneralizable sample of 30 grantees with outdated AIs from the 441 grantees that sent us AIs. We requested that HUD provide the most recently available CAPER for each of these 30 grantees to help us determine what information these reports contain about the grantees’ AIs, and the department provided 27 CAPERs. In 17 of the 27 cases, the grantees mentioned that they had an AI but did not specify the AI’s date.
In such cases, HUD field offices that rely on CAPER reviews to help assess AFFH compliance may not be aware that AIs are outdated unless they specifically follow up with the grantee to find out the date of its AI. In 10 cases, the grantees’ CAPERs disclosed the date of their AI, which in some cases was from the 1990s. The extent to which HUD officials identify such disclosures in CAPERs or follow up on them was not clear.
Field offices’ CPD on-site monitoring programs and complaint review processes provide a limited basis for assessing grantees’ AIs and taking follow-up actions as may be required. We obtained data from 7 of the 10 field offices we contacted regarding the number of CPD on-site grantee reviews they conducted in 2009 and the number of times they obtained or reviewed AIs during such monitoring. As table 5 indicates, CPD field office officials reported collecting or reviewing AIs in 17 of the 88 reviews. Moreover, officials from these 7 field offices said they rarely, if ever, receive public complaints about a grantee’s AI. Given the absence of public complaints, which may reflect a general public unawareness that grantees are required to prepare AIs, the complaint process does not appear to provide a systematic basis for HUD to identify potential limitations in grantees’ AIs and follow up with them as necessary.
Our visits to field offices located in the two HUD regions with the highest incidence of outdated AIs illustrate some of the inherent limitations in the department’s oversight and enforcement processes. According to an FHEO official in one of these offices, the office has historically been unaware of the current state of grantees’ AIs because the grantees did not routinely submit them. This official said that in 2010 the field office requested that all of the grantees under its jurisdiction submit their AIs so that the office could gain a better perspective on the timeliness of the AIs.
In one case, the official said that the office learned that a grantee that had been certifying its AFFH compliance for several years did not have an AI. At the other field office, we identified instances in which the office accepted AFFH certifications under questionable circumstances. For example, in one case, representatives from a grantee told us that they could not find its AI. When we asked field office officials about this circumstance, they said that the grantee had provided a two-page summary of an AI that it completed in 1996. Field office officials said that they viewed the summary as sufficient evidence that the grantee had completed an AI. Moreover, in distributing CDBG and other grant funds each year, this field office sends routine communications that, among other things, remind grantees to update their AIs periodically. While this is a potentially positive step, the field office does not appear to take any additional steps to ensure that AIs are not outdated. For example, we identified at least one grantee subject to the field office’s jurisdiction that has received such communications for four consecutive years, but its AI was prepared in the 1990s. While our analysis generally verified that there are limitations in HUD’s overall AI oversight and enforcement approaches that require corrective action, we identified practices in certain field offices that appear designed to better ensure that AIs are effective planning documents and that grantees fulfill overall AFFH requirements. As discussed previously, in 2010, one HUD field office we contacted independently requested that all grantees within its jurisdiction provide their AIs for review, which allowed FHEO officials to determine that many such AIs were outdated and that one grantee did not have an AI. In another example, officials from one field office said that they maintain ongoing communications with grantees to, among other things, determine the date they completed their AIs.
In cases where an AI is determined to be outdated, officials told us that they work closely with the grantee through technical assistance to bring the AI up to date. We corroborated the field office’s assertion by reviewing the AIs we received from grantees under its jurisdiction and found that all of the grantees had sent us updated AIs. Another field office has established procedures to use special assurance agreements, which were discussed previously, to help ensure that grantees revise AIs with identified deficiencies. However, these initiatives by individual field offices appear to be isolated examples within HUD’s general approach to AI and AFFH oversight, which provides limited assurance that AIs serve as effective planning tools to identify and address impediments to fair housing. Recognizing the limitations in AI and AFFH requirements and its oversight processes, HUD initiated a process in 2009 to review and revise its existing regulations. As part of this effort, HUD officials held several “listening sessions” with key stakeholders, such as grantees and fair housing groups, to help identify approaches to enhance the AI and AFFH processes. In January 2010, HUD’s Assistant Secretary for Fair Housing and Equal Opportunity testified that the department was working on a proposed regulation to enhance AFFH compliance. According to a senior HUD attorney, revising the AFFH regulation is a priority for HUD, and the proposed rule may cover a variety of topics, including enhancements to the guidance provided to grantees on preparing AIs and improvements in the department’s oversight and enforcement approaches. However, until the rule is proposed, it is not clear what topics it will address. The HUD attorney also said that the department’s tentative time frame for publishing a proposed rule to revise AFFH requirements is December 2010. However, the attorney also said that this proposed time frame had not been finalized and was subject to change.
HUD has also established initiatives to help address staffing limitations that, as discussed previously, may have affected its overall CDBG and other grant program oversight and enforcement approaches, including those pertaining to AIs and AFFH requirements, as well as its capacity to implement any new regulatory initiatives. For example, on March 26, 2010, the HUD Secretary sent out a memorandum on the agency’s Targeted Recruitment Strategy for fiscal years 2010-2012. In this document, the Secretary described a strategy for addressing HUD’s need to identify qualified individuals for its talent pipeline over the next three fiscal years. The Secretary stated that this strategy would use various federal programs designed to recruit and retain students for positions in the federal government, such as the Presidential Management Fellows Program, the Student Career Experience Program, and the Student Temporary Employment Program. During our review, a CPD official said that the office recently announced a buyout for certain officials and that CPD was “moving more aggressively to recruit and hire approximately 50 new employees in the next 3 months with the skills to provide grant oversight, assess grantee and community needs,” among other activities. While the effects of these plans and initiatives remain to be seen, we note that some of the proposals to enhance HUD’s AI and AFFH oversight and enforcement approaches would not necessarily involve a significant commitment of additional staff resources. In particular, requiring grantees to submit their AIs for review, without necessarily approving them, would allow CPD and FHEO officials to perform a variety of basic tasks to better ensure their quality. 
These tasks could include verifying whether AIs (1) have been prepared as required, (2) have been updated in accordance with HUD guidance, (3) include all elements suggested in the 1996 fair housing guidance, and (4) are consistent with AFFH discussions in other key documents, particularly CAPERs. If HUD officials identified any areas of concern with grantees’ AIs through such analysis, they could follow up as necessary, through technical assistance, enforcement actions, or other activities, to better ensure that AIs serve as effective tools to identify and overcome impediments to fair housing. Moreover, the resource demands could also be mitigated if grantees submitted their AIs on a rolling basis rather than all at once within a specified period. While HUD regulations have required the preparation of AIs for many years, whether they serve as an effective tool for grantees that receive federal funds through the CDBG and other programs to identify and address impediments to fair housing within their jurisdictions is unclear. We estimate that 29 percent of all AIs are outdated, including 11 percent that were prepared in the 1990s. Given that many AIs are outdated, they are unlikely to serve as effective planning documents for identifying and addressing current potential impediments to fair housing choice. Moreover, some grantees may not prepare AIs, and others sent us cursory documentation as their AIs which, on the basis of their content, do not appear to be AIs. While we estimate that 64 percent of grantees have prepared current AIs, the usefulness of many such AIs as planning documents is uncertain. 
Our review of a subset of 60 current AIs indicates that, while many of them identify potential impediments to fair housing choice and contain recommendations to overcome them, the vast majority also lack time frames for implementing identified recommendations or the signatures of top elected officials, both of which are necessary to establish clear accountability for carrying out the AFFH intent. Without time frames, judging a grantee’s progress in overcoming identified impediments is difficult and, without the signatures of top elected officials, it is unclear whether responsible officials endorse the recommendations in the AIs and are accountable for ensuring their implementation. Absent any changes in the AI process, these documents will likely continue to add limited value going forward in terms of eliminating potential impediments to fair housing that may exist across the country. HUD’s limited approach to establishing AI regulatory requirements, and its limited oversight and enforcement approaches, may help explain the various weaknesses in the documents that we have identified. Beyond requiring grantees to prepare AIs, and certify annually that they have done so and are addressing identified impediments, HUD requirements with respect to AIs are minimal. Specifically, grantees are not required through regulation to update their AIs periodically, include certain information, follow a specific format in preparing AIs, or submit them to HUD for review. These limitations are not new or unknown to HUD officials, yet little progress has been made to address them. While HUD officials said that the department has been working since 2009 on a regulation to enhance grantees’ compliance with AFFH requirements, what the regulation will ultimately entail, or when it will be completed, is unclear. 
In the meantime, grantees will continue to have considerable flexibility in determining when to update their AIs and what information to include in them, which could lead to continued weaknesses in these fair housing planning documents. We recognize that HUD faces resource challenges and competing priorities in carrying out its overall CDBG and other grant program responsibilities, including those pertaining to AFFH and AIs. However, depending on how any changes are structured, resources could be better leveraged to provide more coverage for overseeing grantees. For example, while HUD officials expressed concerns about the resources and technical expertise necessary to approve AIs, a grantee submission requirement itself could have several significant benefits without necessarily involving a significant commitment of staff resources. Specifically, a submission requirement could allow HUD staff to verify basic items, such as whether grantees have prepared AIs as required and whether such AIs have been updated, conform to an established format, and are consistent with other critical reports, such as CAPERs. Moreover, a submission requirement would provide enhanced incentives for grantees to better ensure that their AIs serve as effective planning tools to identify potential impediments to fair housing and to overcome them. Failure to require that grantees submit their AIs on a regular basis will likely continue to result in many grantees not updating the documents in a timely manner or adhering to any guidance or requirements. To better ensure that grantees’ AIs serve as an effective tool for grantees to identify and address impediments to fair housing, we recommend that HUD expeditiously complete its new regulation pertaining to the AFFH requirements. In so doing, we also recommend that HUD address three existing limitations. 
First, we recommend that HUD establish standards for grantees to follow in updating their AIs and the format that they should follow in preparing the documents. Second, to facilitate efforts to measure grantees’ progress in addressing identified impediments to fair housing and to help ensure transparency and accountability, we recommend that, as part of the AI format, HUD require grantees to include time frames for implementing recommendations and the signatures of responsible officials. And finally, we recommend that HUD require, at a minimum, that grantees submit their AIs to the department on a routine basis and that HUD staff verify the timeliness of the documents, determine whether they adhere to established format requirements, assess the progress that grantees are achieving in addressing identified impediments, and help ensure the consistency between the AIs and other required grantee reports, such as the CAPERs. We provided a draft of this report to HUD for its review and comment. We received written comments from HUD’s Assistant Secretary for Fair Housing and Equal Opportunity, which are reprinted in appendix II. HUD also provided technical comments, which we have incorporated as appropriate. In its written comments, HUD highlighted its recent actions to affirmatively further fair housing. Among the recent initiatives to enhance its AFFH compliance and oversight cited in HUD’s written comments were the following:

HUD’s renewed commitment to AFFH. In fiscal year 2010, HUD said it had strengthened and clarified the AFFH requirements for grantees that are not specifically exempt in the FY 2010 Notice of Funding Availability and General Section. According to HUD, to meet this requirement, applicants for funding through these programs must now address how their proposed activities will help overcome impediments to fair housing choice as outlined in relevant AIs. HUD stated that establishing AFFH as a policy priority within its discretionary funding programs will allow the department to encourage grantees to undertake comprehensive and innovative strategies to affirmatively further fair housing. For example, HUD may take steps to reward grantees for participating in regional efforts to promote integration or decrease the concentration of poverty.

Increasing the level of AFFH review and technical assistance. According to HUD, since the beginning of fiscal year 2010, FHEO regional and field offices have increased their level of review of grantees’ AIs within their jurisdictions. The letter also stated that some offices have requested that grantees submit their AIs, whereas other offices are doing so on a risk basis. Moreover, HUD stated that, since January 2010, it has increased its training of grantees regarding their AFFH compliance requirement. This training covers the AI and the importance of its completion, what information should be included in the AI (including race and ethnicity data), who should complete the AI, the consequences of not completing the AI, and how to report fair housing activities.

Improving HUD’s capacity for monitoring and enforcing AFFH compliance. HUD stated that it is exploring ways to find greater efficiencies, reduce staff time spent on routine administrative matters, and dedicate more time to AFFH oversight. This includes additional training for HUD staff on AFFH oversight. Specifically, HUD said it is designing training at its National Fair Housing Academy on reviewing submissions to better ensure consistent and valid review criteria. HUD said it is also developing uniform standards for its staff’s review of grantees’ Annual Action Plans and Consolidated Plans for compliance with AFFH certifications. 
While we commend HUD for recognizing the need to take steps to improve its oversight of AFFH compliance, many of the key challenges we found in our report do not appear to be addressed by its current plans. Specifically, we note that HUD did not address the status of its planned AFFH rulemaking efforts, including standards for grantees to follow in updating their AIs and the format that they should follow in preparing the documents, such as including the time frames for implementing recommendations and the signatures of responsible officials. Further, HUD did not discuss any plans to require, at a minimum, that grantees submit their AIs to the department on a routine basis to help ensure grantees’ compliance with requirements and guidance pertaining to these documents. In the absence of such regulatory requirements, the usefulness of requiring AIs as a tool to affirmatively further fair housing is diminished. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, to the Secretary of Housing and Urban Development, and to other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
The objectives of our report are to (1) assess both the conformance of Community Development Block Grant (CDBG) and HOME Investment Partnerships Program (HOME) grantees’ Analyses of Impediments (AI) with Department of Housing and Urban Development (HUD) guidance pertaining to their timeliness and content as well as the AIs’ potential usefulness as fair housing planning tools and (2) identify factors that may help explain any potential weaknesses in grantees’ AIs, particularly factors related to HUD’s regulatory requirements and oversight and enforcement approaches. To address the first objective, we made a document request of a representative sample of CDBG and HOME grantees, asking that they submit their most recent AI to us. Although there are four HUD formula grant programs to which the AFFH documentation requirements apply, our work focused on CDBG and HOME, the two largest of such programs as measured by grant amount. Prior to launching the AI document request, we obtained contact information on CDBG and HOME grantees from HUD’s fiscal year 2009 CDBG program contacts Web site. We verified that the most up-to-date contact information was available from HUD’s Office of Community Planning and Development (CPD) Community Connections, the clearinghouse of information for CPD. From the total population of 1,209 fiscal year 2009 program participants on HUD’s Web site, which includes all 1,209 CDBG grantees and 97 percent (634 of 650) of all HOME grantees, we selected a random sample of 473 CDBG and HOME grantees. Using a two-way stratification, we stratified the population by HUD’s 10 regions and grantees’ grant size (less than $500,000 and $500,000 or more). We independently selected a random sample of 48 grantees from each of 10 HUD regions (with the exception of Region VII, where there were only 41 grantees in the population). In January 2010, we sent out an initial e-mail to all 473 identified officials and requested that grantees provide their most recent AIs to us. 
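The per-region random selection described above can be sketched in Python as follows; the grantee list, region sizes other than Region VII, and random seed are illustrative placeholders rather than GAO’s actual sampling frame:

```python
import random

# Hypothetical sampling frame: one record per grantee with its HUD region and
# fiscal year 2009 grant-size stratum. Region 7 is smaller, mirroring the
# report's note that Region VII had only 41 grantees; all other counts are made up.
random.seed(1)
regions = [f"Region {n}" for n in range(1, 11)]
frame = []
for region in regions:
    count = 41 if region == "Region 7" else 130
    for i in range(count):
        stratum = "under_500k" if random.random() < 0.5 else "500k_plus"
        frame.append({"id": f"{region}-{i}", "region": region, "size": stratum})

# Independently draw 48 grantees at random from each region (or every grantee
# in a region that has fewer than 48, as with Region VII).
sample = []
for region in regions:
    in_region = [g for g in frame if g["region"] == region]
    sample.extend(random.sample(in_region, min(48, len(in_region))))

print(len(sample))  # 9 regions x 48 + 41 = 473 grantees
```

This simplified sketch draws 48 grantees per region rather than fixing counts within each region-by-size cell; it reproduces the sample size of 473 reported above.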
To ensure a high response rate, we e-mailed follow-up requests to nonrespondents approximately 2 and 4 weeks after the initial e-mail. As a result of this follow-up, we learned that 7 grantees in our initial sample were out of the scope of our study and subsequently excluded them, thereby reducing our sample to 466 grantees. We then conducted intensive follow-up with the remaining nonrespondents, making repeated attempts to acquire the requested AIs through multiple phone calls and e-mails conducted by contractors hired specifically for this phase of the document request effort. Despite repeated attempts to follow up with nonrespondents, 25 grantees did not submit an AI (see table 6 for rationales provided by officials from these 25 grantees). Upon conclusion of the document request effort, we received AIs from 441 grantees for a response rate of 95 percent. Following is a summary of the information from which we obtained the response rate:

Total population of AIs: 1,209
Number of nonrespondents: 25
Estimated in-scope total population: 1,190
Response rate: 94.6 percent

We conducted this AI request from January to March 2010. To estimate the percentage of grantees with outdated and current AIs, the sample data were weighted to make them representative of the population of grantees from which the sample is drawn. Our sample is stratified by region (10 HUD regions) and grant size (less than $500,000 and $500,000 or more in fiscal year 2009), with equal numbers of grantees being selected from each of 10 HUD regions. Since in our sample the probability of a grantee being selected varied by stratum, we assigned different weights, or sampling weights, to grantees in different strata when estimating population statistics (percentages) for the combined groups. The weight for each stratum was calculated as wh = Nh / nh, where wh denotes the weight for the hth stratum (h = 1, 2, …, 20); Nh denotes the population for the hth stratum; and nh denotes the total number of survey responses for the hth stratum. 
We calculated the ratio estimate of the overall population as R = (Σh wh Σi yhi) / (Σh wh Σi xhi), where wh denotes the sample weight for the hth stratum; yhi represents the ith response of variable y in the hth stratum; xhi represents the ith response of variable x in the hth stratum; and R denotes a population estimate of the ratio. To assess the precision of our estimates, we calculated 95 percent confidence intervals for each measure. Calculated from sample data, a confidence interval gives an estimated range of values that is likely to include the true measure of the population. For the estimated percentage of outdated AIs, we calculated a lower and upper bound at the 95 percent confidence level (there is a 95 percent probability that the actual percentage falls within the lower and upper bounds) of grantees by HUD region and by grant size category using raw data and the appropriate sampling weights. We used the standard errors of the estimates to calculate whether any differences between the grantees by region and grant size were statistically significant at the 95 percent confidence level. To evaluate the timeliness of AIs, we relied on HUD criteria, including the 1996, 2000, and 2004 guidance that recommended that they be updated every 3 to 5 years, and annually as necessary, and the findings of a 2009 internal HUD study on AI compliance, which concluded that AIs completed in the 1990s are outdated. Specifically, using a data collection instrument (DCI), we systematically noted the publication dates of all 441 AIs and also noted if no date was mentioned in the AI. We also collected information on the author’s name, if available, and the number of pages. On this basis, we categorized 64 percent of AIs as current. 
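Assuming the standard stratified-sampling weight wh = Nh / nh, the ratio estimate and an approximate 95 percent confidence interval for a proportion such as the share of outdated AIs can be sketched as follows; the stratum counts below are made-up illustrations, not the survey data:

```python
import math

# Made-up stratum data for illustration only: N = stratum population,
# n = survey responses, outdated = responses with an outdated AI.
# The actual analysis used 20 strata (10 regions x 2 grant-size groups).
strata = [
    {"N": 300, "n": 110, "outdated": 40},
    {"N": 500, "n": 180, "outdated": 45},
    {"N": 400, "n": 150, "outdated": 50},
]

# Standard stratified weight: each respondent in stratum h stands in for
# N_h / n_h grantees in the population.
for s in strata:
    s["w"] = s["N"] / s["n"]

# Ratio estimate R = (sum_h w_h * sum_i y_hi) / (sum_h w_h * sum_i x_hi),
# with y_hi = 1 if the AI is outdated and x_hi = 1 for every respondent.
numerator = sum(s["w"] * s["outdated"] for s in strata)
denominator = sum(s["w"] * s["n"] for s in strata)  # equals the total population
R = numerator / denominator

# Approximate 95 percent confidence interval for a stratified proportion,
# with a finite-population correction applied in each stratum.
N_total = sum(s["N"] for s in strata)
variance = 0.0
for s in strata:
    p = s["outdated"] / s["n"]
    fpc = (s["N"] - s["n"]) / s["N"]
    variance += (s["N"] / N_total) ** 2 * fpc * p * (1 - p) / (s["n"] - 1)
se = math.sqrt(variance)
lower, upper = R - 1.96 * se, R + 1.96 * se
print(f"estimate {R:.3f}, 95% CI ({lower:.3f}, {upper:.3f})")
```

Because every respondent contributes x = 1, the denominator collapses to the total population, so the ratio estimate here reduces to a weighted population proportion.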
Further, to gain more information on the contents of AIs categorized as current (AIs completed from 2005 through 2010), we reviewed a nonrepresentative subset of 60 current AIs to determine the extent to which they contained the sections that HUD guidance suggests including in AIs: (1) executive summary/introduction, (2) grantee’s background data, (3) current fair housing legal status, (4) identified impediments, (5) recommendations, (6) time frames, and (7) signatures of chief elected officials. We chose the subset on a weighted geographic basis to ensure that it was reflective of the geographic distribution of the current AIs and reflected grantees in terms of the distribution of fiscal year 2009 grant sizes within each region. We selected between five and seven grantees per HUD region, weighted slightly more toward grantees with larger grants because they were more prevalent among the grantees with current AIs. As such, per region, we selected between one and three grantees with grant sizes less than $500,000 in fiscal year 2009 and between three and four grantees with grant sizes of $500,000 or more. While we took steps to help ensure that the 60 current AIs were reflective of the diversity in content in such documents by basing the selection on such factors as the grantees’ geographic location and grant size, they are not representative of either all current AIs in our sample or of all current AIs generally. Finally, we reviewed 30 of the nonrepresentative subset of 60 current AIs to identify the types of potential impediments to fair housing choice that are commonly identified in such documents and to provide specific examples of such impediments. We selected a subset of 30 of the 60 current AIs and restricted our analysis to impediments summarized in one of three possible sections of the AI: the introduction, executive summary, or conclusion. 
To generate the sample of 30 current AIs from the larger subset of 60, we selected 3 current AIs from each region, 1 from small grantees (fiscal year 2009 grant amounts of less than $500,000), and 2 from large grantees since this weighting is reflective both of the overall distribution of grantees and of those with current AIs in which large grantees are more prevalent. During the course of our analysis, 5 of the original 30 were replaced with AIs from other similar grantees (based on regions and grant size) from the subset of 60 AIs because they did not list impediments in any of the three sections. While we took steps to ensure that these 30 AIs were reflective of the diversity of the population of such documents, they are not representative of current AIs in our sample or of all AIs. Qualitative analyses were conducted to identify and code impediments listed in the 30 AIs. One GAO analyst identified the impediments described in either the introduction, executive summary, or conclusion of the AI and coded them from 1 to 13 using the types of possible impediments to fair housing choice described in HUD’s 1996 Fair Housing Planning Guide. A second analyst independently verified them by reviewing the codes assigned by the first reviewer and then either indicating agreement with the first reviewer’s codes or assigning a different code for later discussion with the first reviewer. If disagreements occurred, the GAO analysts discussed their differences and came to an agreement. Each AI contained a list of multiple impediments that were usually coded into one category each. Sometimes, however, individual impediments were coded into more than one category or multiple impediments within one AI were coded into the same category. Then, we compared the findings of our analysis of AIs’ timeliness and assessment of the contents of AIs categorized as current with the results of HUD’s 2009 AI study. 
We interviewed HUD officials from both headquarters and 10 field offices to gather their views on grantees’ compliance with the affirmatively further fair housing (AFFH) requirement. To address the second objective, we reviewed and analyzed HUD’s policies, procedures, and guidance for overseeing and enforcing the AFFH requirement, particularly pertaining to AIs, and gathered information on staff resource levels for doing so. We gathered information from selected field offices on how they interpret and implement existing AFFH regulations and guidance and conducted a limited review of the annual reports that grantees are required to submit to HUD. Additionally, we obtained and reviewed data from 7 of the 10 field offices we contacted on the number of times CPD staff obtained and/or reviewed AIs during on-site grantee monitoring reviews in 2009. The other 3 field offices did not provide the data as requested. To assess the extent to which HUD’s general processes are sufficient in their design and implementation to help ensure grantees’ compliance with AFFH documentation requirements, we reviewed the 2009 internal HUD study on AI compliance and oversight, obtained a senior HUD official’s public testimony on the issue, and interviewed HUD officials at HUD headquarters and officials in 3 of 10 regional offices and 7 of 81 field offices. We selected offices in a way that emphasized geographic diversity and the representation of jurisdictions with a large number of grantees or at greater risk of noncompliance, as measured by the estimated incidence of grantees having submitted outdated or no AIs. During these site visits, we conducted a file review of compliance documents for grantees that were on file and met with several officials to discuss current enforcement and oversight activities, as well as the potential limitations to enforcing and overseeing AFFH activities. 
To assess, for objective 2, the usefulness of required AFFH documents for supervising grantee compliance, we reviewed other required AFFH documents, including 30 Consolidated Annual Performance and Evaluation Reports (CAPER), which grantees are required to submit annually to lay out their progress in meeting their objectives for the use of CDBG and other funds. The purpose of the limited CAPER review was to determine the extent to which the CAPER included information about the status of the AI document, including the AI date, if any, the depth of discussion regarding the AI in the CAPER, and other information related to the timeliness of the AI. Specifically, we randomly drew a nongeneralizable sample of 30 grantees from the subset of 71 outdated AIs completed prior to 2003 in the original sample of 473 grantees and contacted HUD to obtain the latest CAPER for these 30 grantees. The subset was chosen on a weighted geographic basis to reflect the 10 HUD regions and the size of CDBG/HOME grants received in fiscal year 2009. Between 1 and 4 AIs were selected from each region, with slightly more from grantees with large awards ($500,000 or more) since these were more prevalent in the original sample and in the subset of outdated AIs. GAO has previously reported on internal control standards, as well as HUD’s oversight of the CDBG program and federal fair lending. We consulted these reports as necessary to draw conclusions about HUD’s AFFH oversight program. We conducted this performance audit from October 2009 to September 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the individual named above, Wesley Phillips, Assistant Director; Farah B. Angersola; Emily Chalmers; William Chatlos; Jennifer Cheung; Pamela Davidson; Laurie Ellington; Delores Hemsley; Simin Ho; Fred Jimenez; Reginald Jones; John McGrail; Mark Molino; and Dae Park made significant contributions to this report.
Pursuant to the Fair Housing Act, Department of Housing and Urban Development (HUD) regulations require grantees, such as cities, that receive federal funds through the Community Development Block Grant (CDBG) and HOME Investment Partnerships Program (HOME) to further fair housing opportunities. In particular, grantees are required to prepare planning documents known as Analyses of Impediments (AI), which are to identify impediments to fair housing (such as restrictive zoning or segregated housing) and actions to overcome them. HUD has oversight responsibility for AIs. This report (1) assesses both the conformance of CDBG and HOME grantees’ AIs with HUD guidance pertaining to their timeliness and content and their potential usefulness as planning tools and (2) identifies factors in HUD’s requirements and oversight that may help explain any AI weaknesses. GAO requested AIs from a representative sample of the nearly 1,200 grantees, compared the 441 AIs received (a 95 percent response rate based on a final sample of 466) with HUD guidance, and conducted work at HUD headquarters and 10 offices nationwide. On the basis of the 441 AIs reviewed, GAO estimates that 29 percent of all CDBG and HOME grantees’ AIs were prepared in 2004 or earlier, including 11 percent from the 1990s, and thus may be outdated. HUD guidance recommends that grantees update their AIs at least every 5 years. GAO also did not receive AIs from 25 grantees, suggesting that, in some cases, the required documents may not be maintained, and several grantees provided documents that did not appear to be AIs because of their brevity and lack of content. GAO reviewed 60 of the current AIs (those dating from 2005 through 2010) and found that most of these documents included several key elements in the format suggested in HUD’s guidance, such as the identification of impediments to fair housing and recommendations to overcome them. (See table below for common impediments identified in 30 of these 60 current AIs.) 
However, the vast majority of these 60 AIs did not include time frames for implementing their recommendations or the signatures of top elected officials, as HUD guidance recommends, raising questions about the AI's usefulness as a planning document. As a result, it is unclear whether the AI is an effective tool for grantees that receive federal CDBG and HOME funds to identify and address impediments to fair housing. HUD's limited regulatory requirements and oversight may help explain why many AIs are outdated or have other weaknesses. Specifically, HUD regulations do not establish requirements for updating AIs or their format, and grantees are not required to submit AIs to the department for review. A 2009 HUD internal study on AIs, department officials, and GAO's work at 10 offices identified critical deficiencies in these requirements. For example, HUD officials rarely request grantees' AIs during on-site reviews to assess their compliance with overall CDBG and HOME program requirements, limiting the department's capacity to assess AIs' timeliness and content. While HUD initiated a process to revise its AI regulatory requirements in 2009, what the rule will entail or when it will be completed is not clear. In the absence of a department-wide initiative to enhance AI requirements and oversight, many grantees may place a low priority on ensuring that their AIs serve as effective fair housing planning tools. GAO recommends that, through regulation, HUD require grantees to update their AIs periodically, follow a specific format, and submit them for review. HUD neither agreed nor disagreed with the recommendations but noted recent efforts to improve compliance and oversight.
The United States, along with its coalition partners and various international organizations and donors, has continued to support efforts to rebuild Iraq in the aftermath of the war that replaced Iraq’s previous regime. From April 2003 to June 28, 2004, the CPA served as Iraq’s interim government and was responsible for overseeing, directing, coordinating, and approving rebuilding efforts. With the establishment of Iraq’s interim government, the CPA ceased to exist and its responsibilities were transferred to the Iraqi government or to other U.S. agencies. The Department of State is now responsible for overseeing U.S. efforts to rebuild Iraq. DOD’s Project and Contracting Office (PCO) and the U.S. Army Corps of Engineers have played a significant role in awarding and managing reconstruction contracts. USAID has been responsible for various reconstruction and developmental assistance efforts, including those related to capital construction projects, local governance, economic development, education, and public health. As figure 1 demonstrates, the battle space in Iraq can best be described as complex. A complex battle space is one where military forces, civilian U.S. government agencies, international organizations, contractors, nongovernmental organizations, and the local population share the same geographical area. Included in this complex battle space are private security providers. While there is no mechanism in place to track the number of private security providers doing business in Iraq or the number of people working as private security employees, DOD estimates that there are at least 60 private security providers working in Iraq with perhaps as many as 25,000 employees. The providers may be U.S. or foreign companies and their staffs are likely to be drawn from various countries, including the United States, the United Kingdom, South Africa, Nepal, Sri Lanka, or Fiji, and may include Kurds and Arabs from Iraq. 
Generally, private security providers provide the following services:

Static security – security for housing areas and work sites.
Personal security details – security for high-ranking U.S. officials.
Security escorts – security for government employees, contractor employees, or others as they move through Iraq.
Convoy security – security for vehicles and their occupants as they make their way into Iraq or within Iraq.
Security advice and planning.

The CPA issued a number of orders or memoranda to regulate private security providers and their employees working in Iraq. Among these are CPA Order Number 3 (Revised) (Amended), which described the types of weapons that can be used by private security providers; CPA Order Number 17 (Revised), which stated that contractors (including private security providers) will generally be immune from the Iraqi legal process for acts performed in accordance with the terms and conditions of their contracts; and CPA Memorandum Number 17, which stated that private security providers and their employees must be registered and licensed by the government of Iraq. According to security industry representatives we contacted, there are no established U.S. or international standards that identify security provider qualifications in such areas as training and experience requirements, weapons qualifications, and similar skills that are applicable for the type of security needed in Iraq. Some security industry associations and companies have discussed the need for and desirability of establishing standards, but as of March 2005 such efforts were only in the preliminary stages of development. U.S. civilian government agencies and reconstruction contractors have had to contract with private security providers because it is not part of the U.S. military’s stated mission to provide security to these organizations. U.S. forces in Iraq provide security to contractors and DOD civilians who support military operations. 
The Ambassador is charged with generally ensuring the security of most executive branch employees in Iraq. Government agencies have contracted with a number of private security providers to provide personnel, escort, and site security. Reconstruction contractors are generally responsible for providing for their own security according to the terms of their contracts, and they have generally done so by contracting with private security providers. The contractors’ efforts to obtain suitable security providers have met with mixed results. More than half of the contractors awarded contracts in 2003 replaced their security providers. Contractor officials attributed this turnover to various factors, including the contractors’ need to acquire security services quickly, their lack of knowledge of the security market and of the security providers available to provide the type of security services required in Iraq, and the absence of useful agency guidance. Finally, while the U.S. military is not responsible for providing security for civilian agencies and reconstruction contractors, it does provide some services, such as emergency medical support, to U.S. government-funded contractors. The stated mission of U.S. military forces in Iraq is to establish and maintain a secure environment, allow the continuance of relief and reconstruction efforts, and improve the training and capabilities of the Iraqi Security Forces. As part of this mission, U.S. forces in Iraq provide security for DOD civilians who deploy with the force, non-DOD U.S. government employees who are embedded with the combat forces, and contractors who deploy with the combat force. Among the contractors who deploy with the force are those who provide maintenance for weapon systems, those who provide linguistic and intelligence support to combat forces, and those who provide logistics support. Contractors who deploy with the force generally live with and directly support U.S.
military forces and receive government-furnished support similar to that provided to DOD civilians. According to CENTCOM officials, the military uses soldiers rather than private security providers to provide security to contractors, civilians, facilities, or convoys that support combat operations because of concerns regarding the status of security personnel under the law of international armed conflict. This body of law generally considers contractors who deploy with the force to be noncombatant civilians accompanying the force who may not take a direct part in hostilities. CENTCOM is concerned that using armed private security employees to protect clearly military activities would risk a change in status for these employees from noncombatants to illegal combatants. Thus, the private security employees could lose the protections otherwise granted contractors accompanying the force under international law. At the time we published our report, DOD was in the process of establishing its first departmentwide policy on the military’s security responsibilities for contractor personnel. The draft directive and instruction specify that the military shall develop a security plan for the protection of contractor personnel and that the contracting officer shall include in the contract the level of protection to be provided to contractor personnel. In appropriate cases, the combatant commander shall provide security through military means, commensurate with the security provided to DOD civilians. In May 2005, DOD also issued a new standard contract clause in the Defense Federal Acquisition Regulation Supplement (DFARS), to be included in all DOD contracts involving support to deployed forces, stating that the Combatant Commander (for example, the CENTCOM Commander) will develop a security plan to provide protection, through military means, of contractor personnel engaged in the theater of operations unless the terms of the contract place the responsibility with another party.
Prior to the issuance of the new contract clause, the Army’s policy expressly required Army commanders to provide security for deployed contractors, while the Air Force’s policy gave the Air Force the option of whether or not to provide force protection to Air Force contractors. It is important to note, however, that the proposed DOD departmentwide policy, procedures, and standard contract clause do not cover non-DOD government contractors who may be in a military theater of operations. As discussed below, these contractors are responsible for providing their own security. The State Department is responsible for the security of most of the executive branch U.S. government employees located in Iraq. According to the President’s Letter of Instruction, the U.S. Ambassador, as Chief of Mission, is tasked by the President with full responsibility for the safety of all United States government personnel on official duty abroad except those under the security protection of a combatant commander or on the staff of an international organization. The embassy’s Regional Security Officer is the Chief of Mission’s focal point for security issues and as such establishes specific security policies and procedures for all executive branch personnel who fall under the Chief of Mission’s security responsibility. In June 2004, representatives from the Department of State and DOD signed two memoranda of agreement to clarify each department’s security responsibilities in Iraq. Among other things, the agreements specify that:
- In general, the Chief of Mission is responsible for the physical security, equipment, and personnel protective services for U.S. Mission Iraq;
- The Commander, CENTCOM, is responsible for providing for the security of the International Zone as well as regional embassy branch offices throughout Iraq;
- Military capabilities may be requested by the Chief of Mission to provide physical security, equipment, and personal protective services only when security requirements exceed available Marine Security Guard Detachment, Department of State Diplomatic Security Service, and Department of State contracted security support capabilities;
- U.S. forces will provide force protection and Quick Reaction Force support outside the International Zone, to the extent possible, for Embassy personnel and activities; and
- The Ambassador has security responsibility for DOD personnel under the authority of the Chief of Mission. This includes the Marine Security Detachment and personnel working for the PCO.
In Iraq, the State Department, USAID, the U.S. Army Corps of Engineers, and the CPA contracted with commercial firms to provide security. Our review of six agency-awarded security contracts, awarded between August 2003 and May 2004, showed that as of December 31, 2004, the agencies had obligated nearly $456 million on these contracts. In turn, the private security providers had billed the agencies about $315 million by that date for providing various services, including personal security details, security guards, communications, and security management. The companies providing security for U.S. government agencies may be U.S. or foreign. For example, while USAID contracted with a U.S. firm, the U.S. Army Corps of Engineers and the PCO are using British companies to meet their security requirements. Security for the Ambassador is provided by a U.S. company, and only U.S. citizens are used to provide protection. Security providers who provide security for executive branch employees follow the procedures and policies established by the Regional Security Officer.
For example, one security provider told us that the Regional Security Officer recently increased the number of cars required for moving people within Iraq. The provider’s representative told us that they were obligated to comply with the Regional Security Officer’s instructions even though the contract was not awarded by the State Department and the company does not provide security for State Department personnel. Contractors engaged in reconstruction efforts were generally required to provide for their own security, and they have done so by awarding subcontracts to private security providers. Contractors did not anticipate the level of violence eventually encountered in Iraq and found themselves needing to quickly obtain security for their personnel, lodgings, and work sites. As of December 31, 2004, our review of 15 reconstruction contracts for which we had data found that the contractors had obligated more than $310 million on security subcontracts, and in turn, the security providers had billed the contractors more than $287 million. The contractors’ efforts to obtain suitable security providers met with mixed results, as many subsequently found that their security provider could not meet their needs. Overall, we found that contractors replaced their security providers on five of the eight reconstruction contracts awarded in 2003 that we reviewed. This was attributable, in part, to the contractors’ need to acquire security services quickly, their lack of knowledge of the security market and potential security providers available for the type of security services required for Iraq, and the absence of useful agency guidance. Information reflected in the agencies’ own contracts for security, such as training and weapons qualifications requirements, could have assisted the contractors in identifying potential criteria for evaluating security providers and in structuring their subcontracts. 
Agency officials expected that the post-conflict environment in Iraq would be relatively benign and would allow for the almost immediate beginning of reconstruction efforts. DOD officials told us that this expectation was based on determinations made at the most senior levels of the executive branch and that contracting officials were bound to reflect that expectation in their requests for proposals. Consequently, they made few or no plans for any other condition. Reconstruction contractors shared this perspective, relying upon the language in the agency requests for proposals and the comments of agency representatives at pre-proposal and other meetings. Our discussions with contractor officials found that they anticipated providing for only a minimal level of security under their contracts, such as hiring guards to prevent theft and looting at residential and work sites. In one case, the contractor expected that the military would provide security for its personnel. Our review of the agencies’ requests for proposals and other documents found that they were consistent with this expectation. For example, our review of five contracts awarded by late July 2003, including four awarded by USAID and one awarded by the U.S. Army Corps of Engineers, found that USAID’s requests for proposals instructed the contractors that work was to begin only when a permissive environment existed. Contractors were given little guidance concerning security for their personnel and facilities and were not asked to estimate security costs as part of their proposals. The U.S. Army Corps of Engineers’ request for proposals noted that the military was expected to provide security for the contractor and, thus, the contractor was not required to propose any security costs.
According to agency and contractor officials, the Iraqi security environment began to deteriorate by June 2003, although two contractors noted that the bombing of the United Nations compound in August 2003 made it apparent that the insurgency was beginning to strike nonmilitary targets (see figure 2). Contractor officials told us that as the security environment worsened they unexpectedly found themselves in immediate need of enhanced security services. These officials told us that they received little guidance from the agencies concerning possible security providers. We found that the contractors’ efforts to obtain security providers often met with mixed results. For example: One contractor, awarded a contract by the U.S. Army Corps of Engineers, expected that the U.S. military would provide security for its personnel. That contractor expressed concern, however, that the military protection being provided was insufficient to ensure its employees’ safety and to allow for the performance of its mission, and it subsequently stopped work at one of its locations. In June 2003, the Army finally told the contractor that it did not have adequate forces to continue to provide security as promised, and advised the contractor to acquire its own security. Following a limited competition, the contractor awarded a subcontract to a security provider in June 2003. In this case, the contractor has been satisfied with the services provided and retained the security provider when the contractor was subsequently awarded another reconstruction contract in June 2004. One USAID reconstruction contractor told us it quickly awarded a noncompetitive subcontract to a security provider in July 2003. Within three months, the security company notified the reconstruction contractor that it was pulling its employees out of the country.
The security provider, a former prisoner-transport service firm trying to expand into the protective services area, discovered that it lacked sufficient capacity to fulfill its contract requirements in Iraq. The reconstruction contractor subsequently conducted a competition among security providers already operating in Iraq to meet its needs. Another reconstruction contractor initially hired a security service provider in October 2003. A contractor official stated that it soon became apparent that the security provider did not have the capacity to meet its security needs. As a result, the contractor awarded another subcontract, on a sole-source basis, to a second security provider to augment the security services provided to its personnel. Three of the reconstruction contractors we reviewed hired a newly established security provider company that was marketing itself in Iraq in mid- to late 2003. Officials representing one contractor told us that the provider was the only known provider capable of meeting their needs; officials for another contractor told us that they selected the provider based, in part, on its reputation. Each of the contractors, however, replaced the security provider for various reasons. This security provider was subsequently suspended from receiving further government contracts because of allegations of fraudulent billing practices. Overall, we found that five of the eight reconstruction contractors awarded contracts in 2003 that we reviewed replaced their initial or second security provider with another company, while in other cases the contractors needed to augment the security services provided by their initial provider. As shown in figure 3, two contractors have awarded up to four contracts for security services.
Contractor officials attributed this turnover to various factors, including the urgent need to obtain security, the increasing threat level, their lack of knowledge of potential sources and the security market, and the absence of useful agency guidance. In this latter regard, the detailed standards and requirements in the agencies’ own security contracts may have provided useful assistance to reconstruction contractors in identifying potential criteria for evaluating security providers and in structuring their subcontracts. For example, the USAID security services contract, awarded in August 2003, contained:
- a detailed and required organization structure to be used by the contractor, with the titles, duties, and responsibilities of various levels of security providers specified;
- requirements for background checks on potential employees and provisions for agency approval and acceptance of those employees;
- detailed standards of conduct for contractor employees;
- language, health, and training requirements;
- weapons capability requirements; and
- instructions regarding providing armored vehicles.
Our review of five other agency security contracts awarded directly to private security providers from December 2003 through May 2004 for the protection of agency personnel in Iraq found that, to varying degrees, most of the cited areas were addressed. Conversely, the subcontracts awarded by the reconstruction contractors to their security providers generally contained far less information. According to most contractor officials with whom we spoke, information similar to that included in the agencies’ contracts would have assisted them in defining their security needs and structuring their security subcontracts. Some contractor officials also noted that agency assistance with identifying and vetting potential security provider companies would have been very useful or would be useful in future similar situations.
They discussed the possibility of a qualified vendors list or, if time permitted, the establishment of a multiple award schedule of qualified security providers, which contractors could use to quickly contract for their security needs through competitive task orders. Agency officials believed that information regarding personnel qualifications and competent providers could be made available to contractor personnel in future efforts, especially if the information was provided for the contractor’s consideration rather than being a contract requirement. For example, one agency official noted that his agency’s requests for proposals for security services are publicly available. Some officials believed that making information a contractual requirement would infringe upon the contractor’s privity of contract with its subcontractors and might pose a potential government liability should such requirements later prove inadequate. Other officials believed that it should be the contractor’s responsibility to research and determine its own needs and sources of security services without assistance from the government. According to U.S. officials and contractor personnel we interviewed, U.S. military forces in Iraq will provide, when assets are available, emergency quick reaction forces to assist contractors who are engaged in hostile fire situations. The military is also providing other support services to U.S. government-funded contractors, including private security providers. For example, U.S. military forces will assist with the recovery and return of contractor personnel who have been kidnapped or held hostage. Additionally, the U.S. military provides medical services above the primary care level to contractors. These services include hospitalization, as well as laboratory and pharmaceutical services, dental services, and evacuation services, should the patient require them.
In addition, the military is providing medical support to private citizens, third country nationals, and foreign nationals when necessary to save life, limb, or eyesight. Finally, contractors are entitled to receive mortuary affairs services. DOD is providing these services pursuant to authorities under Title 10, United States Code, as well as a variety of DOD directives, a June 2004 support agreement between DOD and the Department of State, National Security Presidential Directive 36 (which governs the operations of the U.S. government in Iraq), and specific contract provisions. The military and the private security providers in Iraq have an evolving relationship based on cooperation, coordination of activities, and the desire to work from a common operating picture. However, U.S. forces in Iraq do not have a command and control relationship with private security providers or their employees. Initially, coordination between the military and private security providers was informal. However, since the advent of the Reconstruction Operations Center in October 2004, coordination has evolved into a structured and formalized process. While contractors and the military agree that coordination has improved, some problems remain. First, private security providers continue to report incidents between themselves and the military when approaching military convoys and checkpoints. Second, military units may not have a clear understanding of the role of contractors, including private security providers, in Iraq or of the implications of having private security providers in the battle space. According to CENTCOM officials and military personnel who have been stationed in Iraq, U.S. military forces in Iraq do not have a command and control relationship with private security providers or their employees. According to a DOD report on private security providers working in Iraq, U.S.
military forces in Iraq have no command and control over private security providers because neither the combatant commander nor his forces have a contractual relationship with the security providers. Instead, military and security provider personnel who served in Iraq described a relationship of informal coordination, where the military and private security providers meet periodically to share information and coordinate and resolve conflicts in operations. Despite a lack of command and control over private security providers and their employees, commanders always have authority over contractor personnel, including private security provider personnel, when they enter a U.S. military installation. Commanders are considered to have inherent authority to protect the health and safety, welfare, and discipline of their troops and installation. This authority allows the commander to establish the rules and regulations in effect at each installation. For example, an installation commander may determine traffic regulations, weapons policies, force protection procedures, and visitor escort policies. Contractors, including private security providers, who fail to follow the military’s rules and regulations while they are on the installation can be prohibited from entering the installation and using its facilities. As an example, one Army official told us that his unit had barred some private security employees from using the unit’s dining facilities because the private security employees insisted on carrying loaded weapons into the dining facility. The unit did not allow loaded weapons in the dining facility for safety reasons. Coordination between the military and the private security providers has evolved from an informal coordination based on personal relationships to a more structured, although voluntary, mechanism established by the Project and Contracting Office (PCO). 
According to military officials, contractors, and security providers, coordination between the military and security providers was initially done informally. When a private security provider arrived in a unit’s area of operation, the security provider would try to meet with key officials of the unit and establish a relationship. A private security provider we spoke with told us that the results of this informal coordination varied based on the individual personalities of the military and provider personnel. According to some security providers, although many military commanders were very interested in establishing a relationship with the security providers, others were not. Additionally, coordination was inconsistent. For example, one officer who had served with the 4th Infantry Division in Iraq told us that coordination in his area was mixed. According to the officer, some security providers, such as the one providing security for the Iraqi currency exchange program, would always coordinate with the division before moving through the division’s area of operations, but another contractor rarely coordinated with the division. This is similar to information we obtained from officials of the 2nd Armored Cavalry Regiment. One officer from one of the regiment’s squadrons told us that contractors that worked within the unit’s area of operation generally coordinated with the regiment, while those who were traveling in or through his unit’s area of operation generally did not. He also told us that on one occasion security providers escorted the CPA administrator into the squadron’s area of operation without the squadron’s knowledge and while the squadron was conducting an operation in Najaf. According to the officer, a firefight broke out at the CPA administrator’s location and the squadron had to send troops to rescue the CPA administrator and his party. This had a significant impact on the squadron’s operation, according to the officer.
Another officer, who served on the Combined Joint Task Force-7 staff, told of instances when contractors died and the division commander did not know that the contractors were operating in his area of operations until he was instructed to recover the bodies. Finally, according to a military officer serving with the PCO at the time of our review, the genesis of the Reconstruction Operations Center (ROC) (discussed next) was the need to improve coordination between contractors and the major subordinate commanders. The ROC serves as the interface between the military and the contractors in Iraq and is located within the PCO. In May 2004, the Army awarded a contract to a private security provider to provide security for PCO personnel and to operate the ROC, shown in figure 4. The goal of the ROC, which became operational in October 2004, is to provide situational awareness, develop a common operating picture for contractors and the military, and facilitate coordination between the military and contractors. The national ROC is located in Baghdad, and six regional centers are co-located with the military’s major subordinate commands to enhance coordination between the military and the private security providers. Figure 5 shows the locations of the regional centers. Participation in the ROC is voluntary (although some DOD officials told us that participation should be mandatory) and is open (at no cost) to all U.S. government agencies, contractors, and nongovernmental organizations operating in Iraq. The ROC and the regional centers are staffed with a combination of military, U.S. government civilian, and contractor personnel who provide a number of services for private security providers and others. Among the services the ROC provides are: Intelligence information. The military provides unclassified intelligence information to the ROC for dissemination to contractors.
Intelligence information is updated daily, and information is available on a password-protected Web site and through daily intelligence briefings. In addition, contractors can request specific threat assessments on future building sites and planned vehicle routes. Contractors use the ROC to pass on information about incidents and threats to coalition forces as well. Military assistance. The ROC serves as the “911” for contractors who need military assistance. Contractors who need assistance contact either the national ROC or the regional ROCs, and ROC personnel contact the closest military unit and ask it to provide assistance. Assistance, such as a quick reaction force or medical assistance, is provided if military assets are available. Security providers we spoke with said that they rarely call for a quick reaction force because incidents with insurgents are usually over within a matter of minutes, but on some occasions the quick reaction forces have proved to be very helpful. For example, one after action report described an incident in February 2005 in which a private security team was ambushed by 20 insurgents and attacked by small arms fire and three rocket-propelled grenades. The contractors contacted both the regional ROC in Mosul and the national ROC in Baghdad. The military responded with fixed wing assets within 15 minutes, and a rotary wing quick reaction force escorted the team safely back to Mosul. Contractors more frequently receive medical assistance from the military and described the assistance they received as excellent. Figure 6 depicts the process used to request assistance through the ROC or the regional ROCs. Improved communications. Communications with the military can be difficult in Iraq because of a lack of radio interoperability between the military and contractors. The ROC facilitates communications between the military and contractors.
First, the ROC provides contact numbers for the military to private security providers to use when they are moving around in Iraq. Second, the ROC will ensure that the military is aware of contractor movements. Security providers who so choose can provide the ROC with information on convoy movements, which the ROC will forward to the appropriate military commands. Third, the ROC can contact the military to provide assistance to contractors, and finally, the ROC can track convoys through a real-time tracking system that uses the global positioning system and includes a communications link with the ROC if assistance is needed. While security providers, reconstruction contractors, and military representatives of the PCO believe that the ROC has improved coordination on the complex battle space in Iraq, both the private security providers and the military believe that several coordination issues remain to be resolved. Security providers and military officials expressed continuing concern about incidents between security providers and the military when approaching military convoys and checkpoints and the need for a better understanding of the complex battle space by both private security providers and the military. One of the coordination issues that contractors and the military continued to be concerned about is blue on white violence. Blue on white violence is the term used by contractors and the military to describe situations when the military fires at friendly forces (such as contractors) or, as happens less frequently, when private security employees fire at military forces. An analysis of incident reports completed by the ROC indicates that these incidents happen most frequently when contractors encounter a military checkpoint or a military convoy. Private security providers have told us that they are fired upon by U.S. forces so frequently that incident reports are not always filed with the ROC. 
According to some incident reports filed with the ROC, some contractors believe that U.S. forces have fired on private security provider vehicles without provocation. For example, one security company official reported that his convoy was traveling on a route in Iraq when a U.S. military convoy approached. According to the report, the security convoy identified itself using generally recognized identification procedures and pulled off the road to allow the military convoy to pass. After about half of the 20-vehicle convoy had passed, a gunner in the military convoy began firing at the security convoy. According to the incident report filed with the ROC, no injuries or damage resulted from this incident. A similar incident happened on the road from the International Zone to the Baghdad airport. As in the previous incident, part of a U.S. military convoy passed the private security convoy without incident before a gunner in the fourth vehicle of the convoy began to fire at the lead vehicle in the private security convoy. The incident rendered the company’s vehicle unserviceable. Afterward, the private security team leader received an apology from the servicemember who had fired on the security company vehicle. In another incident report, a private security provider documented an incident at a U.S. military checkpoint. According to the report, a security convoy had slowed to approach the checkpoint and was then fired on by a U.S. soldier. The report went on to say that no verbal or hand warnings were given and no reason was given for the shooting. According to representatives of the security providers and the former director of security for the PCO, many of these incidents happen because of the military’s concerns over insurgents using vehicle-borne improvised explosive devices, as well as the inexperience of some U.S. troops. Reducing the number of blue on white incidents is a high priority for the U.S.
military, the PCO, private security providers, and the Private Security Company Association of Iraq, a Baghdad-based association that works with both the U.S. government and the Iraqi government to resolve issues related to private security providers. In late December 2004, in an effort to reduce the number of blue on white incidents, the Multi-National Corps-Iraq (MNC-I) issued an order to major subordinate commands in Iraq establishing procedures for private security providers to use when approaching military convoys and military checkpoints. MNC-I directed the subordinate commanders to implement the procedures detailed in the order and to educate all private security providers and military personnel on the procedures. Among the procedures were (1) a prohibition on nontactical vehicles (such as the vehicles used by private security providers) passing moving military convoys; (2) a requirement that warning shots, when fired, be aimed away from a vehicle and demonstrate a clear intention to do harm if directions are not obeyed; and (3) a requirement that vehicles maintain a distance of at least 200 meters from a military convoy. In early 2005, MNC-I completed an analysis of friendly-fire incidents that occurred between November 1, 2004, and January 25, 2005, to determine the top 10 lessons learned from such incidents. Among the top 10 lessons was the need for U.S. forces to comply with the rules of engagement, which require that U.S. troops determine that a person's intent is hostile before the military uses deadly force. The other lessons learned were similar to the procedures included in the order. According to a PCO official, the top 10 list was provided to the private security providers. Despite the MNC-I order, blue on white incidents continue to occur, and security providers remain concerned about the frequency of the attacks.
In the 5 months (January to May 2005) since the order was issued, the ROC received reports on 20 blue on white incidents, and the actual number of incidents is likely higher because, as we noted previously, some providers no longer report these types of incidents. Data on the number of incidents for the 5 months before the order was issued were not available because the ROC did not start collecting information on blue on white incidents until November 2004. A ROC official noted that blue on white incidents had decreased in April 2005. He believed that the reduction was due, in part, to the adoption of the procedures outlined in the order. However, he also noted that the number of incidents could increase again as troops rotate in and out of Iraq or if terrorist attacks increase. Military units that deployed to Iraq received no guidance or training regarding the relationship between private security providers and the military prior to deploying. Representatives from the 2nd Armored Cavalry Regiment, the 82nd Airborne Division, and the 1st Marine Expeditionary Force all told us that they received no guidance from either CENTCOM or Combined Joint Task Force-7 and that their units had not developed any written procedures for dealing with private security providers. Furthermore, a representative of a unit that is preparing to deploy, the 101st Airborne Division, told us that it had not received any guidance on how to work with private security providers, nor had it been directed to include information on private security providers, the PCO, or the ROC in its pre-deployment training, even though the 101st will be co-located with a regional ROC. To highlight the lack of training and guidance, representatives from one unit told us that they did not know there were private security providers in their battle space until the providers began calling for assistance.
They also noted that any information about who would be in the battle space and the support the military should be providing would be useful. Several private security providers we spoke with told us that they believed it would be helpful if U.S. forces deploying to Iraq received information on private security providers in Iraq. For example, the providers believed that U.S. troops needed more information on why private security providers are in Iraq, the impact of having private security providers there, and the operational styles of the private security providers. Army officials we spoke with believed that this type of information would be helpful and suggested that private security providers could use additional information about working with the U.S. military as well. Despite the significant role played by private security providers in enabling reconstruction efforts to proceed, neither the Department of State, DOD, nor USAID has complete data on the costs associated with using private security providers. For example, the quarterly report submitted by the Department of State to Congress on the status of reconstruction projects and funding does not provide information on security costs that are incurred under reconstruction contracts. Even at the contract level, the agencies generally had varying degrees of information on the costs associated with private security providers. Our review of 15 reconstruction contracts found that the cost of obtaining private security providers and security-related equipment can be considerable at the contract level: it accounted for 15 percent or more of total billings on 8 of the 15 contracts; on only 4 of those 8 contracts, however, did the agencies formally track security costs under a separate task order or contract line item.
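The kind of contract-level roll-up just described, summing the invoices that security providers submitted and expressing them as a share of total contract billings, can be sketched in a few lines of Python. The contract names, invoice amounts, and billing totals below are invented for illustration; this is not GAO's actual data or methodology, only a sketch of the arithmetic.

```python
# Hypothetical illustration: sum a contractor's security-provider invoices,
# express them as a share of total contract billings, and flag contracts at
# or above the 15 percent level discussed in the report. All figures invented.

def security_cost_share(security_invoices, total_billings):
    """Return security costs as a fraction of total contract billings."""
    if total_billings <= 0:
        raise ValueError("total billings must be positive")
    return sum(security_invoices) / total_billings

contracts = {
    # contract id: (security-provider invoice amounts, total billings to date)
    "contract-A": ([4_200_000, 3_100_000], 28_000_000),
    "contract-B": ([900_000], 31_000_000),
}

flagged = {
    cid: round(security_cost_share(invoices, billings) * 100, 1)
    for cid, (invoices, billings) in contracts.items()
    if security_cost_share(invoices, billings) >= 0.15
}
print(flagged)  # only contract-A crosses the 15 percent threshold, at about 26 percent
```

On this invented data, only contract-A crosses the 15 percent threshold that 8 of the 15 reviewed contracts exceeded; the same roll-up could not be run on real contracts without the invoice-level data the report notes the agencies generally did not track.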
Agency and contractor officials acknowledged that security costs had diverted planned reconstruction resources and led to canceling or reducing the scope of certain reconstruction projects, though they also noted that other factors have affected reconstruction projects. The Secretary of State is responsible for submitting a quarterly report to Congress that outlines the current status of programs, initiatives, and funds dedicated to Iraq reconstruction efforts. These quarterly reports provide information at the project and sector level (such as oil or electricity) and acknowledge the challenges and costs associated with the security environment in Iraq. For example, in its April 2005 report, the State Department noted that nearly $1.3 billion in funding has been, or will be, used in part to (1) cover unanticipated post-battle reconstruction costs, (2) cover indirect cost increases of contractors operating on cost-plus contracts that allow them to continue billing even during delays, and (3) account for increased security costs. The reports, however, do not identify the magnitude or impact of the costs associated with security providers on reconstruction efforts or available funding. Discussions with DOD and USAID personnel revealed that the financial and management information systems used to help prepare the report are not able to track costs incurred by reconstruction contractors for security services. Agency officials noted that obtaining such information would currently require the agencies to request it from the contractors and compile it manually. Agency officials noted they have made such inquiries on an ad hoc basis in the past but cautioned that such requests can be burdensome for both the contractors and agency officials. Contractor officials acknowledged that the cost of private security services and security-related equipment, such as armored vehicles, has exceeded what they originally envisioned.
In some cases, increased security costs resulted in reducing the scope of or canceling some reconstruction projects. For example:

Contractor officials noted they were originally tasked to rehabilitate 23 electrical substations and had conducted site surveys and procured equipment for all 23 substations. According to contractor officials, however, the U.S. Army Corps of Engineers concluded that securing 14 of the substations would not be cost-effective and therefore reduced the scope to 9 substations. Contractor officials indicated that the equipment and materials procured for the 14 substations have been or will be turned over to the Iraqi Ministry of Electricity.

In February 2004, USAID obligated an additional $33 million on one of its contracts to pay for unanticipated increases in security costs, leaving it short of funds to pay for construction oversight and quality assurance, as well as for administrative costs.

In March 2005, USAID cancelled two electrical power generation-related task orders totaling nearly $15 million to help pay for increased security costs being incurred at another power generation project in southern Baghdad.

Contractor officials noted, however, that other factors also affected reconstruction progress, such as changes in priorities or higher material costs. For example, officials at one contractor noted that security had not been a significant factor delaying their work; rather, they pointed to delays in reviewing and approving projects and slower than anticipated release of funding. Similarly, USAID officials noted that, among other materials, the cost of concrete is significantly higher than anticipated, driving up the cost of many reconstruction projects. We found that at the contract level, agency personnel did not have consistent insight into security costs and their impact on reconstruction efforts.
For example, agencies often did not require prospective bidders to propose meaningful security costs as part of their contract cost proposals, nor did they require contractors to prepare a baseline security cost estimate at the time of contract award. Many of the contracts, including those awarded after the security environment began to deteriorate, were indefinite delivery contracts, in which the work to be accomplished was often described in general terms, with the specific work determined as task orders are issued. In several cases, agency personnel provided prospective contractors a sample task order to use in preparing their proposals. While the contractors' cost and technical proposals described how they would approach security issues and provided an associated cost estimate, such estimates were only for evaluation purposes and did not reflect meaningful security costs. Overall, in only 3 of the 16 contracts we reviewed did contractors prepare an initial security cost estimate for the entire contract. Further, we found that in only 7 of the 16 contracts did the contractors regularly provide security-related cost information in either monthly progress reports or in separate contract line items or task orders. The level of information and insight provided varied greatly depending on the approach taken. For example, on three contracts, the contractor provided security cost-related information for each of its projects but did not provide information at the total contract level. In one contract, security costs were reported at both the task order and contract level. In another contract, the security cost information was reported under a separate contract line item with other expenses, and visibility was more limited. In the remaining two contracts, the agency established separate task orders specifically to track security-related expenses at the contract level.
In 15 of the 16 reconstruction contracts that we reviewed, we were able to obtain data on the costs of acquiring private security services and related security equipment by reviewing invoices that private security providers and security equipment providers submitted to contractors. Our analysis of these data found that at the reconstruction contract level there was considerable variation in estimated security costs as a percentage of total contract billings (see figure 7). Eight of the 15 contracts had security costs that exceeded 15 percent of total contract billings as of December 31, 2004; on 4 contracts, the percentage of contract billings accounted for by the cost of security subcontractors was more than 25 percent. On only 2 of those 8 contracts in which security costs exceeded 15 percent did agency personnel require the contractors to formally track and report security costs under a separate task order or contract line item. Though not required to do so, one contractor reported incurred security costs on two contracts on its own initiative. While our analysis indicates that at the reconstruction contract level the cost of obtaining private security services can account for a significant percentage of the contract's total cost, it does not reflect total private security costs. For example, reconstruction contractors did not always specifically track security-related costs incurred by their subcontractors or lower tier suppliers. According to contractor officials, in 7 of the 16 reconstruction contracts that we reviewed, at least one of their subcontractors provided for their own private security; in 5 of those 7 contracts, all of the subcontractors were required to provide for their own security. The cost for a subcontractor to obtain private security services can be considerable.
For example, in one case, the costs incurred by a major subcontractor amounted to almost $10 million, or nearly one-third of what the reconstruction contractor was paying for security. In another case, the costs incurred by a major subcontractor exceeded $3.5 million, or about 8 percent of what the reconstruction contractor was paying for security. Our analysis and discussions with agency and contractor officials identified several factors that influenced security costs, including (1) the nature and location of the work; (2) the type of security required and the security approach taken; and (3) the degree to which the military provided the contractor security services. For example, projects that took place in a fixed, static location were generally less expensive to secure than projects that extended over a large geographic area, such as electrical transmission lines. In other cases, contractors relied on former military personnel or other highly trained professionals to provide security to their employees. Conversely, some contractors made more extensive use of local Iraqi labor and employed less costly Iraqi security guards. Lastly, some contractors were able to make use of security provided by the U.S. military or coalition forces. For example, several contractors had facilities within or near U.S.-controlled locations, such as Baghdad's International Zone or military bases, which reduced their need to obtain private security services. In another case, the contractor was provided a limited degree of protection by the U.S. Army. Agency and contractor officials had mixed opinions on the value of establishing separate reporting or tracking mechanisms. For example, some agency officials believed that having visibility into security-related costs enabled them to provide more effective contract oversight and to identify security cost trends and their impact on the project.
Other officials noted that many factors affect the cost and progress of reconstruction efforts, including changes in planned funding or projects, material costs, and the inability to find qualified workers willing to work in Iraq. Consequently, they indicated that they generally try to manage the projects at a total project level, rather than by individual elements, such as security. For example, they noted that when reviewing project status reports with the contractors, they will question the contractors on the factors causing delays or cost increases. They were not certain that having specific insight into security costs would help them better manage or oversee their projects. Agency program and financial management officials noted that from a budgeting perspective, tracking security cost information could enable staff to provide better estimates of future funding requirements. Contractor officials generally indicated that establishing a separate task order or contract line item for security enabled them to more efficiently account for and bill security costs and to more accurately report reconstruction progress. For example, officials at one contractor noted that they often had several projects under way which required security. Prior to establishing a separate task order, the security provider would be required to allocate costs to each of the projects even though the security was provided for a given location, often resulting in lengthy and complex vouchers, higher potential for error, and increased administrative expenses. Once a separate task order was established, its security provider charged the costs incurred for providing security to the location, rather than each project, simplifying the billing and review process. 
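The billing simplification that contractor described can be illustrated with a short sketch. The project names, invoice amount, and pro rata allocation rule below are all hypothetical; the point is only the contrast between splitting one site-security invoice across every active project and billing it to a single dedicated security task order.

```python
# Hedged sketch of the two billing approaches described above, with invented
# figures. Old approach: one monthly site-security invoice is allocated across
# every active project at the location. New approach: the whole amount is
# billed to one dedicated security task order.

def allocate_by_share(invoice, project_values):
    """Split one security invoice across projects in proportion to their value."""
    total = sum(project_values.values())
    return {p: round(invoice * v / total, 2) for p, v in project_values.items()}

monthly_security_invoice = 600_000.00
projects = {"substation-1": 5_000_000, "substation-2": 3_000_000, "pipeline": 2_000_000}

# Old approach: three line items per invoice, one per project.
allocated = allocate_by_share(monthly_security_invoice, projects)

# New approach: a single line item against a dedicated security task order.
security_task_order = {"security-task-order": monthly_security_invoice}

print(allocated)
```

Under the allocation approach, every monthly invoice generates one line item per active project, which is the source of the lengthy vouchers and higher error potential the contractor described; under the dedicated task order, each invoice generates exactly one.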
Other contractor officials noted that the need to obtain security providers and security-related equipment often occurred during the early stages of the contract when the agencies had issued only a few task orders for specific reconstruction projects. Consequently, contractor officials told us they found themselves incurring considerable security-related expenses during the mobilization phase that had to be allocated to subsequent task orders, thereby increasing costs. These officials noted that allocating security costs to existing task orders would have resulted in the task’s cost exceeding the government’s estimate. Contractor officials indicated that a separate task order for security would have enabled them to better explain to agency personnel the cost of the reconstruction effort and the impact of security costs and enable them to account for and bill security costs more efficiently. Data from the Defense Manpower Data Center (DMDC) show that in fiscal year 2004, the attrition rates for the occupational specialties preferred by private security providers returned to the same or slightly lower levels than those seen prior to the institution of occupational stop losses in September 2001 despite the increased use of private security providers. Private security providers working in Iraq are hiring former servicemembers with a variety of skills, including servicemembers with military police or Special Operations experience. Military officials told us that they believe that servicemembers with these skills are separating from the military earlier than in prior years. We are unable to determine from this data whether servicemembers are leaving the military for positions with private security providers as the data can only demonstrate trends in attrition, not explain why people are leaving the military or what they intend to do after leaving the military. 
Private security providers prefer to hire former military members, particularly Special Operations forces, for their unique skills and experience. Servicemembers with a Special Operations background are often hired to fill key positions, such as security advisors and project managers, and to provide personal security to high-ranking government officials. These positions may pay as much as $33,000 a month. Other servicemembers may be hired to provide security to civilians in vehicle convoys at salaries between $12,000 and $13,000 per month, while some may be hired to provide site security for buildings and construction projects at somewhat lower salaries. For the most part, employees receive these salaries only when they are working in Iraq, typically 2 to 3 months at a time. All of the U.S.-based private security providers we spoke with told us that they do not actively recruit current servicemembers; however, they do recruit at military-sponsored transition job fairs, through the Internet, and with advertisements in military magazines and newspapers. Both Special Forces and military police personnel officials believe that attrition is increasing in their military specialties. For example, during a July 2004 hearing before the House Armed Services Committee, Subcommittee on Terrorism, Unconventional Threats and Capabilities, representatives from the U.S. Special Operations Command and the military services' Special Operations commands noted that the number of Special Forces enlisted personnel retiring at 20 years (the first time they are eligible) has been increasing due, in part, to the increased opportunities available in civilian government and with contractors. In addition, representatives of the Naval Special Operations Command and the Air Force Special Operations Command also noted that they were seeing increased attrition rates among those servicemembers with 8 to 12 years of service.
According to these representatives, servicemembers leaving at this point in their careers are also leaving for opportunities with contractors. Army officials have also expressed concerns about attrition in the military police force. For example, officials from the military police personnel office at the Army's Human Resources Command told us that they have seen a significant number of senior noncommissioned officers leave the military police for positions with private security providers. These officials also told us they have seen the average length of service for colonels in the military police branch decrease from 28 to 25 years. Furthermore, in an e-mail provided by the Army's Human Resources Command, a senior noncommissioned officer at the 16th Military Police Brigade noted that the brigade did not meet its reenlistment targets in fiscal year 2004. Finally, the Army Central Command's Provost Marshal told us in July 2004 that he had lost four of his eight senior noncommissioned officers to higher paying private security providers within the last year and was expecting to lose two more senior noncommissioned officers. He also noted that he had lost more than half of his company grade officers as well. Both the military police and Special Forces communities are taking steps to address retention concerns. For example, the Army plans to double the size of its military police force from 15,500 to 30,000 by 2006, and the Special Operations Command plans to increase its force size from 13,200 to 15,900 over the next 5 to 6 years. Increasing the size of the Army military police and Special Operations forces will decrease the high operational tempo and relieve some of the stress on military personnel, which these communities believe contributed to higher attrition.
In addition, DOD recently began to offer reenlistment bonuses to Special Operations personnel with 19 or more years of experience, ranging from $8,000 for those who reenlist for 1 year to as much as $150,000 for those who reenlist for an additional 6 years. While data from several sources indicate increased attrition in fiscal year 2004 compared to fiscal years 2002 and 2003 in the military skills sought by private security providers, these data also showed that attrition rates in fiscal year 2004 had returned to the levels seen in fiscal years 2000 and 2001, prior to the majority of the stop loss policies that the services have instituted at various times since September 2001. Table 1 shows the dates of occupational stop losses for each of the services. Each of the services added and released occupations from stop loss as the needs of the service dictated. For example, the Air Force placed all occupational specialties under a stop loss in September 2001 and then released a number of occupations from the stop loss in January and June 2002. As we noted, the Air Force ended all stop loss activities in June 2003. In the Army, Special Operations forces were placed under stop loss in December 2001 and were released from the stop loss in June 2003, while enlisted servicemembers who served as military police were placed under stop loss in February 2002 and were released from the stop loss in July 2003. Army officers serving as military police were placed under the stop loss in February 2002 and were released from the stop loss in June 2003. Data obtained from DMDC on the military occupational specialties preferred by private security providers revealed that several of these specialties show increased attrition in fiscal year 2004 over the attrition rates in fiscal year 2003.
These specialties include the following:

Air Force: Officer military police.
Army: Enlisted and Officer Infantry, military police, and Special Forces.
Marine Corps: Enlisted and Officer Infantry and military police.
Navy: Enlisted military police, Officer Special Forces, and Enlisted SEALs.

For the specialties listed, the average attrition rates for each fiscal year are shown in figure 8. As seen in figure 8, the attrition rates for these specialties decreased in fiscal years 2002 and 2003 from their 2000 and 2001 levels and increased in fiscal year 2004. These data also show that the levels of attrition seen in fiscal year 2004 were actually lower than those seen in fiscal years 2000 and 2001. The decrease in attrition rates in fiscal years 2002 and 2003 as compared to the rates in fiscal years 2000 and 2001 reflects attrition patterns that are typically seen during stop losses. Service officials told us that stop loss policies affect attrition rates; they can temporarily delay separations and artificially decrease attrition rates for the year of the stop loss. Officials at the Army Human Resources Command also found that stop loss policies can increase attrition rates for the year preceding the stop loss. For example, the Army saw increased separations in 2002 for military police colonels in anticipation of their occupation-specific stop loss. Given the impact of stop loss policies on attrition, the data may not accurately convey the typical personnel losses that would have occurred had the stop loss not been in effect, as people left the military both in anticipation of the stop loss and after the stop loss was lifted. Thus, we are unable to determine whether the increase in attrition rates in fiscal year 2004 was due to the lifting of the stop loss policy or true increases in military attrition. Figure 9 shows a pattern of decline in attrition rates during the stop loss period followed by a rebound for Army Special Forces in fiscal year 2004.
Attrition rates for enlisted Army Special Forces were almost identical in fiscal years 2000 and 2001 and declined through 2003 during the Army Special Forces-specific stop loss, which was in effect from December 2001 to June 2003. However, after the stop loss was lifted, attrition rates for enlisted Army Special Forces almost doubled, from 6.5 percent in fiscal year 2003 to 12.9 percent in fiscal year 2004, a level about 25 percent higher than the fiscal year 2000 rate. Attrition rates for Army Special Forces officers also declined during the stop loss period and returned to just below the fiscal years 2000 and 2001 levels in fiscal year 2004. The Special Operations Command also provided us with continuation rates calculated by DMDC for the Army, Navy, and Air Force Special Operations Commands. Continuation rates, an alternative method used to demonstrate retention and attrition, were calculated by determining which personnel remained on active duty from one year to the next. The continuation rates showed an increase in losses in 2004 for senior noncommissioned officers in the Army Special Operations and Navy Special Warfare Commands, as well as for Army Special Operations warrant officers. Similar to the DMDC data provided to us, these commands also saw a decrease in losses (that is, a decrease in attrition rates) in 2002 after a stop loss was issued and, with the exception of the Navy Special Warfare warrant officers, an increase in losses (an increase in attrition rates) after the stop loss was lifted. Additionally, as shown in figure 10, continuation data indicate that Army enlisted Special Operations personnel with 14 through 19 years of service separated at only a slightly higher rate in 2004 than in the pre-stop loss years, fiscal years 2000 and 2001.
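The continuation-rate calculation described above, determining which personnel remain on active duty from one year to the next, can be illustrated with a brief sketch. The member IDs and rosters below are invented; this shows only the general method, not DMDC's actual calculation.

```python
# Illustrative computation of continuation and attrition rates: compare the
# roster holding a specialty in one fiscal year with who is still on active
# duty the next year. All member IDs are invented for the sketch.

def continuation_rate(roster_start, roster_next):
    """Fraction of the start-of-year roster still on active duty a year later."""
    start = set(roster_start)
    return len(start & set(roster_next)) / len(start)

fy2003 = ["a01", "a02", "a03", "a04", "a05", "a06", "a07", "a08", "a09", "a10"]
fy2004 = ["a01", "a02", "a03", "a04", "a05", "a06", "a07", "a08", "b01", "b02"]

cont = continuation_rate(fy2003, fy2004)   # 8 of 10 remained -> 0.8
attrition = 1 - cont                       # attrition is the complement -> 0.2
print(f"continuation {cont:.0%}, attrition {attrition:.0%}")
```

Because stop loss policies delay separations, a year-over-year rate computed this way can understate attrition during a stop loss and overstate it in the year the policy is lifted, which is the pattern the DMDC and continuation data show.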
In the July 2004 hearing before the House Armed Services Committee, Subcommittee on Terrorism, Unconventional Threats and Capabilities, the Senior Enlisted Advisor for the United States Special Operations Command stated that the loss of these mature, operationally experienced personnel creates critical operational risk for the Special Forces. According to the Special Operations Command officials with whom we spoke, because the command is losing some of its most experienced personnel, younger, less experienced servicemembers are being promoted to leadership positions more quickly than in the past. This need to rely on less experienced personnel has created some concerns for the command. While available data indicate that attrition in almost all of the military specialties favored by private security providers has returned to pre-September 11, 2001, levels, the data do not indicate why personnel are leaving the military or what they are doing after they leave. Exit surveys conducted with servicemembers leaving the military do not include questions on the servicemembers' future employment plans. Officials at the Army Human Resources Command told us that after September 11, 2001, opportunities for employment in the security field became more widespread as government agencies as well as private companies and organizations recognized the need to improve their security. These officials, as well as officials from the Special Operations Command, noted that they are losing personnel not only to private security firms operating in Iraq but also to security management companies operating in the United States and security operations in other government agencies. Service officials at these commands also attributed the attrition rates to other factors, such as the attraction of a strong civilian economy, high operational tempo, and concerns about various quality-of-life conditions.
The reconstruction effort in Iraq is complex, costly, and challenging, in part because of the urgent need to begin and execute reconstruction projects in an uncertain security environment. The extensive use of private security providers has raised a number of issues, particularly regarding how to facilitate the methods contractors use to obtain capable providers. And once security providers are actively working in an area, they must determine how best to establish effective coordination mechanisms with nearby military forces. While the experience in Iraq was certainly unique relative to historical reconstruction and assistance efforts, the United States may well find itself engaged in future reconstruction and assistance efforts in other hostile environments, where security costs are likely to be significant. Much has been learned in Iraq over the past 2 years on this subject that can serve the United States and its contractors well in planning for and executing future reconstruction or assistance efforts. We are making four recommendations to address a number of immediate and long-term issues: To assist contractors operating in hostile environments in obtaining the security services required to ensure successful contract execution, we recommend that the Secretary of State, the Secretary of Defense, and the Administrator, U.S. Agency for International Development, explore options that would enable contractors to obtain such services quickly and efficiently. Such options may include, for example, identifying minimum standards for private security personnel qualifications, training requirements, and other key performance characteristics that private security personnel should possess; establishing qualified vendor lists; and/or establishing contracting vehicles that contractors could be authorized to use.
To ensure that the Multi-National Force-Iraq (MNF-I) has a clear understanding of the reasons for blue on white violence, we recommend that the Secretary of Defense direct the Combatant Commander, U.S. Central Command, to direct the Commander, MNF-I, to further assess all blue on white incidents to determine whether the procedures outlined in the December 2004 order are sufficient. Furthermore, if the procedures have not proven effective, we recommend that the Commander, MNF-I, develop additional procedures to protect both U.S. military forces and private security providers. To ensure that commanders deploying to Iraq have a clear understanding of the role of private security providers in Iraq and the support the military provides to them, we recommend that the Secretary of Defense develop a training package for units deploying to Iraq that provides information on the Reconstruction Operations Center, typical private security provider operating procedures, any guidance or procedures developed by MNF-I or MNC-I applicable to private security providers (such as the procedures outlined in the December 2004 order to reduce blue on white incidents), and DOD support to private security provider employees. The training package should be re-evaluated periodically and updated as necessary to reflect the dynamic nature of the situation in Iraq. To improve agencies' ability to assess the impact of and manage security costs in future reconstruction efforts, we recommend that the Secretary of State, the Secretary of Defense, and the Administrator, U.S. Agency for International Development, establish a means to track and account for security costs in order to develop more accurate budget estimates. DOD, the Department of State, and USAID provided written comments on a draft of this report. Their comments are discussed below and are reprinted in appendixes II, III, and IV, respectively.
DOD concurred with each of our recommendations, noting that it welcomed our assistance in improving how DOD and its contractors can plan for and effectively execute contracts in a complex and changeable security environment. Moreover, DOD described the steps it would take to implement some of our recommendations. The Department of State disagreed with our recommendation to explore options to assist contractors in obtaining private security services, citing concerns that the government could be held liable for performance failures. For example, while the Department noted that it could provide the criteria it utilizes to select its contractors on a non-mandatory basis, it expressed concern that contractors relying on government minimum standards could assert that performance failures were the result of the government establishing poor standards. The Department also noted it was unclear that a government-managed security contractor program would result in enhanced contractor security, compared to a contractor-managed security program. While our work found that contractors had difficulty in obtaining security providers that met their needs and that they would have benefited from the agencies' assistance, we did not recommend a particular course of action, nor did we recommend a government-managed security program. Rather, we recommended that the Department, working jointly with DOD and USAID, explore options to assist contractors that are unfamiliar with obtaining the type of security services needed in Iraq. Such an effort would necessarily entail a thorough assessment of the advantages, disadvantages, and risk mitigation strategies of the potential options. Given the significance of contractors in accomplishing reconstruction objectives and the mixed results that they encountered when selecting their security providers, we continue to believe that thoroughly exploring potential options would be prudent.
The Department of State did not indicate whether it agreed with our recommendation to establish a means to track and account for security costs in order to develop more accurate budget estimates. It noted that it can capture costs associated with direct security providers and work with prime reconstruction contractors to determine the feasibility of providing subcontract security costs. It is not clear to us from the Department's comments how it intends to work with DOD and USAID to establish a uniform means to track and account for private security costs, which is essential given that DOD and USAID are the principal agencies responsible for awarding and managing the majority of reconstruction contracts. In written comments on a draft of this report, USAID found the report factually correct, but did not comment on the recommendations. We are sending copies of this report to the Chairman and Ranking Minority Member, House Committee on Government Reform; the Chairman and Ranking Minority Member, House Committee on Energy and Commerce; the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; and other interested congressional committees. We are also sending a copy to the Secretary of Defense; the Secretary of State; the Administrator, U.S. Agency for International Development; and the Director, Office of Management and Budget, and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact Bill Solis at 202-512-8365 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are found on the last page of this report. GAO staff who made major contributions to this report are included in appendix V. To determine the extent to which U.S. government agencies and contractors working in Iraq at the behest of the U.S.
government have acquired security services from private security providers, we reviewed a wide array of documents to determine who was responsible for providing security to those types of organizations, including warning orders and fragmentary orders issued by the U.S. Central Command (CENTCOM), Combined Joint Task Force-7, Multi National Forces–Iraq, and Multi National Corps-Iraq to determine if any orders had been issued regarding providing security to U.S. government employees or contractors rebuilding Iraq; contracting documents such as statements of work, requests for proposals, contracts, and contract modifications; Department of Defense (DOD) regulations and instructions that relate to the management of contractors during contingency operations; Departments of State and Defense memoranda of understanding regarding security and support; proposed guidance between the Department of State and the Department of Defense regarding contractor support; guidance to contractors prepared by the Coalition Provisional Authority (CPA) regarding contractor operations in Iraq; and Department of State rules and regulations, including the Foreign Affairs Manual. We also met with officials from CENTCOM to obtain the command's position on the extent of the military's responsibility to provide security to civilian government employees and contractors, including both contractors supporting military forces and those engaged in rebuilding Iraq.
In addition, we met with or obtained information from Army and Marine Corps units that served in Iraq to discuss their understanding of the military's responsibility to provide security to contractors and civilian government employees and interviewed representatives of the State Department's Office of Diplomatic Security to discuss the State Department's use of private security providers in Iraq as well as representatives of other government agencies working in Iraq who have contracted with private security providers to provide security to employees and facilities. To determine how agencies addressed security needs when planning for and awarding Iraq reconstruction contracts, we interviewed officials at the CPA; DOD, including the U.S. Army Corps of Engineers and the Project and Contracting Office (PCO); the Department of State; and the U.S. Agency for International Development (USAID). We discussed the guidance and direction they received prior to awarding contracts and how such information was provided to the contractors. We reviewed various acquisition documents, including agency acquisition plans, requests for proposals, price negotiation memoranda, correspondence between contractors, and other relevant documents. We met with agency and contractor officials to discuss the nature and type of guidance provided relative to the expected security environment, the need for obtaining security services, and requirements and standards for security personnel or security-related equipment. We identified how security-related requirements were reflected in reconstruction contracts by selecting 16 contracts that were awarded to 10 reconstruction contractors. We selected these contracts using a nonprobabilistic methodology that considered such factors as the awarding agency; the year awarded; the contract's expected dollar value; and the type, nature, and location of the reconstruction activity. Nine of these contracts were awarded in 2003 and seven were awarded in 2004.
For each of these contracts, we obtained the contract and contract modifications issued as of December 31, 2004, totaling about $8.6 billion; relevant sections of the contractor's cost and technical proposal; security plans; security-related subcontracts; and other pertinent documents. We also obtained and reviewed six contracts that had been awarded by the U.S. Army Corps of Engineers, the Department of State, USAID, and an Army contracting agency for the CPA, for the protection of their personnel and facilities in Iraq, to compare the type of security-related requirements incorporated within U.S. government contracts with those incorporated into contracts awarded to reconstruction contractors and, in turn, to subcontracts with security providers. We identified whether there are existing government or international standards relative to security providers that were applicable to the Iraqi security environment. We also spoke with agency security personnel, including the Department of State's Office of Diplomatic Security and the Overseas Security Advisory Council. We also contacted representatives from relevant industry associations, including the International Peace Operations Association, International Security Management Association, and the American Society for Industrial Security. We also researched European security-provider standards and conducted a literature review of articles relating to the security provider industry. To assess the military's relationship with private security providers, we met with or spoke to representatives of CENTCOM, Army Central Command, and the PCO (at the Pentagon and in Baghdad) to discuss issues related to the military's authority over private security providers and reviewed a Department of Defense report to Congress that addresses the use of private security providers in Iraq.
We also met with or contacted representatives of Army and Marine Corps units that had been stationed in Iraq to determine if they had been provided guidance on working with private security providers and discussed issues related to command and control of private security providers. To assess the level of cooperation and coordination between the military and private security providers both before and after the advent of the Reconstruction Operations Center (ROC), we spoke with nine private security providers working in Iraq as well as representatives of military units that had served in Iraq to determine the state of coordination prior to and after the ROC became operational. We spoke with representatives of the PCO to discuss the ROC's role in coordinating the interactions between the military and private security providers and any actions the PCO was taking to improve coordination between private security providers and U.S. military forces. We also discussed coordination issues with the executive director of the Private Security Company Association of Iraq and several reconstruction contractors. We also reviewed information posted on the ROC Web site related to security and reviewed documents developed by the ROC to explain its operations and functions. To determine the extent to which government agencies assessed the costs associated with using private security providers and security-related costs, we reviewed various contractual documents, including the 16 reconstruction contracts and subsequent modifications, consent to subcontract requests, and monthly cost and progress reports submitted by the contractors we reviewed. We also met with agency and contractor officials to determine the means by which they maintained visibility over security providers and security-related expenses, as well as their general experiences in Iraq, the impact of security on reconstruction efforts, and the process by which they obtained security providers.
We collected data on the costs associated with acquiring and using private security providers or in-house security teams and on the costs of acquiring security-related equipment, such as armored vehicles, body armor, and communication equipment, as well as other security-related costs. We did not attempt to quantify the impact of the security environment on increased transportation or administrative expenses, on the pace of reconstruction efforts caused by security-related work stoppages or delays, or the cost associated with repairing the damage caused by the insurgency on work previously completed. We also excluded the costs associated with the training and equipping of Iraqi security forces and the costs borne by DOD in maintaining, equipping, and supporting U.S. troops in Iraq. For the 16 contracts we reviewed, we identified whether the agencies or the contractors had initially projected the cost of obtaining private security services. We reviewed various documents, including agency acquisition strategy plans and price negotiation memoranda and the contractors' cost proposals and security plans, and interviewed agency and contractor officials. We identified the actual costs incurred for security services and equipment by reviewing various cost documentation, including invoices, vouchers, and billing logs submitted by the contractors and their security provider(s) through the period ending December 31, 2004. We analyzed this information to determine the total amount billed by the contractor to the government; the amount billed by security subcontractors to the contractor; and the amount billed for other security-related expenses, such as armored vehicles, body armor, communication, transportation costs, lodging, and other security-related equipment.
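The billing components enumerated above were compared against total contract billings to estimate security's share of contract costs. As a minimal sketch of that arithmetic, using hypothetical dollar figures rather than actual contract data:

```python
# Estimate the share of a reconstruction contract's billings attributable to
# private security providers and security-related equipment.
# All dollar figures below are hypothetical illustrations, not contract data.

def security_cost_share(security_provider_billed: float,
                        security_equipment_billed: float,
                        total_billed_to_government: float) -> float:
    """Return combined security billings as a percentage of total billings."""
    combined = security_provider_billed + security_equipment_billed
    return 100.0 * combined / total_billed_to_government

# Example: $12 million billed by security subcontractors plus $3 million for
# armored vehicles, body armor, and communications gear, against $100 million
# billed by the reconstruction contractor to the government.
share = security_cost_share(12_000_000, 3_000_000, 100_000_000)
print(f"Security-related share of contract costs: {share:.1f}%")  # 15.0%
```

This simple ratio omits, as the report does, indirect impacts such as schedule delays and insurgency damage, which were not quantified.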
We estimated the percentage of costs accounted for by private security providers and for security-related equipment by comparing the combined amount billed for these activities to the total amount billed by the reconstruction contractor to the government. We did not attempt to comprehensively identify costs that may have been incurred by subcontractors or lower tier contractors. We did, however, request information from the contractors as to whether their subcontractors required security above that which would typically be required, and if so, whether the subcontractor arranged for its own security or relied on security provided by the reconstruction contractor. We obtained examples and cost information on selected cases in which subcontractors provided their own security. As part of our efforts, we reviewed pertinent sections of the Federal Acquisition Regulation, and in particular, the subcontractor competition and notification requirements provided for under Part 44; and relevant CPA, DOD, State Department, and USAID acquisition regulations, policy memoranda and guidance. We coordinated our work with and reviewed reports prepared by the Inspectors General for DOD, State, and USAID; the Special Inspector General for Iraq Reconstruction; and the Defense Contract Audit Agency. To determine whether private security providers were hiring former military servicemembers, we interviewed three private security providers from the United States that are working in Iraq and discussed the skill sets they hire. Additionally, we spoke with officials at the Marine Corps and Navy human resources commands; the Air Force’s Deputy Chief of Staff, Personnel; the Army's Human Resources Command Military Police Branch and the Special Operations Command Personnel Division to ascertain whether certain military occupational specialties and ranks were seeing increased attrition and if private security providers were affecting military attrition. 
We also reviewed a transcript of a congressional hearing on Special Operations Forces personnel issues held in July 2004. To assess the extent to which military occupational specialties utilized by private security providers in Iraq are seeing increased attrition, we obtained attrition information from the Defense Manpower Data Center's Active Duty Military Officer and Enlisted Master Files, which is an inventory of all individuals on active duty in the services. Our analysis was limited to active duty personnel and did not include reservists. The Center provided information on personnel numbers and losses for fiscal years 2000, 2001, 2002, 2003, and 2004. For the purposes of this report, attrition occurs when a member who is on active duty at the start of a given fiscal year is no longer on active duty in the same service and the same pay category at the end of that fiscal year. An enlisted member who becomes a warrant or commissioned officer (or vice versa) or a member who changes services is considered to be a loss. The fiscal year runs from October 1 of the preceding calendar year to September 30 of the named year. For example, fiscal year 2000 lasted from October 1, 1999, to September 30, 2000. Personnel numbers were calculated as the total number of members at the start of the fiscal year (for example, October 1, 1999, for fiscal year 2000). Losses are the members who attrited during the fiscal year (for example, for fiscal year 2000, losses would be the number of personnel attrited from October 1, 1999, to September 30, 2000). We received data from the Defense Manpower Data Center on active duty attrition rates for five military occupational specialties: special forces, military police, infantry, para-rescue, and combat controller.
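The attrition definition above amounts to a roster comparison between the start and end of a fiscal year. The sketch below uses invented roster data, not Defense Manpower Data Center records, purely to illustrate the counting rule (a member who separates, changes service, or changes pay category counts as a loss):

```python
# Compute a fiscal-year attrition rate per the definition used in this report:
# a member on active duty at the start of the fiscal year who is no longer on
# active duty in the same service and pay category at year's end is a loss.
# Roster data below are invented for illustration only.

def attrition_rate(start_of_year_roster, end_of_year_roster):
    """Rosters map member ID -> (service, pay_category).

    A member counts as a loss if absent at year end, or present with a
    different service or pay category (e.g., an enlisted member who became
    an officer, per the report's definition)."""
    losses = sum(
        1 for member_id, status in start_of_year_roster.items()
        if end_of_year_roster.get(member_id) != status
    )
    return losses / len(start_of_year_roster)

# Fiscal year 2000: October 1, 1999 roster vs. September 30, 2000 roster.
start = {1: ("Army", "enlisted"), 2: ("Army", "enlisted"),
         3: ("Army", "officer"), 4: ("Army", "enlisted")}
end = {1: ("Army", "enlisted"),   # retained
       2: ("Army", "officer"),    # enlisted-to-officer: counted as a loss
       3: ("Army", "officer")}    # retained; member 4 separated entirely
print(f"FY attrition rate: {attrition_rate(start, end):.0%}")  # 50%
```

Members joining mid-year do not enter the denominator, matching the report's use of start-of-year personnel counts.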
These military occupational groupings were selected because they represented military occupational skills most sought after by private security providers working in Iraq, as determined through interviews with officials at the human resources commands and private security companies. These data were then analyzed to determine whether attrition rates had increased in the past five years and whether servicemembers were separating from the military at increasing rates in certain ranks or number of years of service. We assessed the reliability of the Defense Manpower Data Center's Active Duty Military Personnel Master file by (1) reviewing existing information about the data and the system that produced them, and (2) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of this report. We visited or interviewed officials from the following organizations during our review: Bureau of Diplomatic Security, Washington, D.C.; U.S. Embassy, Amman, Jordan; the U.S. Agency for International Development, Washington, D.C.; Baghdad, Iraq; and Amman, Jordan. Office of the Under Secretary of Defense, Personnel and Readiness, Military Personnel Policy, the Pentagon; the Defense Contract Audit Agency, Fort Belvoir, Virginia. Department of the Air Force Office of the Deputy Chief of Staff, Personnel, Force Management Division. United States Army Human Resources Command Military Police Branch, Alexandria, Virginia; United States Army Central Command (Rear), Fort McPherson, Georgia; Project and Contracting Office (Rear), the Pentagon; U.S.
Army Corps of Engineers, Washington, D.C.; Southwestern Division, Dallas, Texas; Transatlantic Program Center, Winchester, Virginia; Gulf Regional Division, Baghdad, Iraq; The Army Contracting Agency, Fort Eustis, Virginia; 1st Armored Division, Wiesbaden, Germany; 82nd Airborne Division, Fort Bragg, North Carolina; 2nd Armored Cavalry Regiment, Fort Polk, Louisiana; 1st Cavalry Division, Fort Hood, Texas. Naval Personnel Command, Millington, Tennessee; Marine Corps Manpower Plans and Policy Division, Washington, D.C.; 1st Marine Corps Expeditionary Force, Camp Pendleton, California. United States Central Command, MacDill Air Force Base, Florida; United States Special Operations Command Personnel Division, MacDill Air Force Base, Florida. Aegis Defence Services, Ltd., London, United Kingdom; ArmorGroup, London, United Kingdom; BearingPoint Inc., McLean, Virginia; Bechtel National, Inc., San Francisco, California; Blackwater USA, Moyock, North Carolina; CONTRACK International, Inc., Arlington, Virginia; Control Risk Group, London, United Kingdom; Creative Associates International, Inc., Washington, D.C.; DynCorp International, Irving, Texas; Fluor Intercontinental, Inc., Greenville, South Carolina; General Electric, Atlanta, Georgia; Global Risk Strategies, London, United Kingdom; Kellogg Brown and Root Services, Inc., Houston, Texas; Olive Security, London, United Kingdom; Parsons Corporation, Pasadena, California; Perini Corporation, Framingham, Massachusetts; Research Triangle Institute, Research Triangle Park, North Carolina; Triple Canopy, Lincolnshire, Illinois; The Hart Group, London, United Kingdom; and Washington Group International, Inc., Boise, Idaho; and Princeton, New Jersey. 
American Society for Industrial Security International, Alexandria, Virginia; International Peace Operations Association, Washington, D.C.; International Security Management Association, Buffalo, Iowa; Private Security Company Association of Iraq, Baghdad, Iraq; Professional Services Council, Arlington, Virginia. We conducted our review from May 2004 through June 2005 in accordance with generally accepted government auditing standards. In addition to the contacts named above, Steve Sternlieb, Timothy DiNapoli, Carole Coffey, Gary Delaney, John Heere, William Petrick, Timothy Wilson, Moshe Schwartz, Kate Walker, Robert Ackley, David Mayfield, and Sylvia Schatz made key contributions to this report.
The United States is spending billions of dollars to reconstruct Iraq while combating an insurgency that has targeted military and contractor personnel and the Iraqi people. This environment created a need for those rebuilding Iraq to obtain security services. GAO evaluated the extent to which (1) U.S. agencies and contractors acquired security services from private providers, (2) the U.S. military and private security providers developed a working relationship, and (3) U.S. agencies assessed the costs of using private security providers on reconstruction contracts. The civilian U.S. government agencies and reconstruction contractors in Iraq that GAO evaluated have obtained security services, such as personal and convoy security, from private security providers because providing security to them is not the U.S. military's stated mission. U.S. military forces provide security for those Department of Defense (DOD) civilians and contractors who directly support the combat mission. In Iraq, the Department of State and other federal agencies contract with several private security providers to protect their employees. Under their contracts, contractors rebuilding Iraq are responsible for providing their own security and have done so by awarding subcontracts to private security providers. As of December 2004, the agencies and contractors we reviewed had obligated more than $766 million for private security providers. The contractors' efforts to obtain suitable security providers met with mixed results, as they often found that their security provider could not meet their needs. Overall, GAO found that contractors replaced their initial security providers on more than half the 2003 contracts it reviewed. Contractor officials attributed this turnover to various factors, including the absence of useful agency guidance. While the U.S. 
military and private security providers have developed a cooperative working relationship, actions should be taken to improve its effectiveness. The relationship between the military and private security providers is one of coordination, not control. Prior to October 2004, coordination was informal, based on personal contacts, and inconsistent. In October 2004, a Reconstruction Operations Center was opened to share intelligence and coordinate military-contractor interactions. While military and security providers agreed that coordination has improved, two problems remain. First, private security providers continue to report incidents between themselves and the military when approaching military convoys and checkpoints. Second, military units deploying to Iraq are not fully aware of the parties operating in the complex battle space in Iraq and what responsibility they have to those parties. Despite the significant role played by private security providers in enabling reconstruction efforts, neither the Department of State, DOD, nor the U.S. Agency for International Development (USAID) has complete data on the costs of using private security providers. Even at the contract level, the agencies generally had only limited information readily available, even though agency and contractor officials acknowledged that these costs had diverted a considerable amount of reconstruction resources and led to canceling or reducing the scope of some projects. For example, in March 2005, two task orders for reconstruction worth nearly $15 million were canceled to help pay for security at a power plant. GAO found that the cost to obtain private security providers and security-related equipment accounted for more than 15 percent of contract costs on 8 of the 15 reconstruction contracts it reviewed.
Aquaculture is defined as the production of any plant or animal in water and under controlled conditions. In the United States, aquaculture is a relatively new but rapidly growing industry. The value of U.S. aquaculture production more than quadrupled during the 1980s. Salmon is one of the four principal aquaculture products in the United States. Salmon farming operations begin in fresh water facilities, where ready-to-spawn broodstock or parent fish are stripped of eggs and sperm. The fertilized eggs are held in fresh water containers for about 2 months until they hatch. The hatchlings or fry are then raised in tanks for 4 to 15 months until they reach smolt stage, at which time they are capable of adapting to a salt water environment. Once the salmon reach this stage, they are transported to salt water net pens to begin the “grow out” phase. Depending on the species or variety of salmon, the fish are ready to be marketed for human consumption between 9 months and 2 years from the time they are placed in the pens. Farmed salmon production worldwide has increased from about 48,000 metric tons in 1985 to 331,000 metric tons in 1992. This increase in farmed production has transformed the international market for salmon. Currently, farmed salmon represents about 27 percent of the salmon brought to market worldwide. The principal international producers of farmed salmon are Norway, Chile, the United Kingdom, and Canada. Relative to these countries, the United States is a minor producer, accounting for about 4 percent of total world production. U.S. production of farmed salmon is concentrated almost entirely in Maine and Washington State. In 1992, domestic farmed salmon production was about 12,000 metric tons, with Washington State accounting for approximately 40 percent of the total. U.S. production is expected to reach nearly 17,000 metric tons in 1995. Domestic production supplies approximately 23 percent of total U.S. consumption.
Salmon farming in Washington State is a $40 million a year industry. There are about 18 salmon farming operations in Washington State. Some of these are only involved in raising salmon for human consumption, while others run hatcheries that produce fertilized eggs and smolts. According to Washington State producers, they have developed a market niche in the production of quality fertilized salmon eggs, which are then exported to such countries as Chile, Japan, and Canada. Canada is the principal supplier of farmed salmon to the United States. In 1992, Canadian exports, mainly from British Columbia, accounted for 44 percent of U.S. farmed salmon consumption. Canada exports about 75 percent of its total farmed salmon production to the United States. In 1992, Canada produced 29,500 metric tons of salmon with an estimated Canadian value of $200 million. British Columbia accounted for about 66 percent of total Canadian production. The province of New Brunswick, on the Atlantic coast, is the other major Canadian producer of farmed salmon. The salmon farming industry in British Columbia has grown dramatically in recent years. There are now approximately 100 salmon farming operations in British Columbia, producing about three times as much salmon as Washington State. According to industry spokesmen in British Columbia, while domestic hatcheries supply most of the fertilized eggs needed by the province’s salmon farms, there is still room in the market for imported eggs. As the industry has expanded, British Columbian salmon farmers have shifted production from various native Pacific salmon species, such as Coho or Chinook, to Atlantic salmon. In fact, Atlantic salmon has become the preferred species for aquaculture production around the world because it is less vulnerable to certain pathogens and has a lower feed-to-body-weight ratio than Pacific salmon varieties. 
To ascertain Canada’s restrictions on imports of salmon eggs and smolts into British Columbia, we reviewed the various policies implemented since 1985 and clarified key elements of these policies with officials from DFO; the British Columbian Ministry of Environment, Lands and Parks; and the Ministry of Agriculture, Fisheries and Food. To gain an understanding of the rationale for these policies, we obtained and reviewed documents provided by DFO, and we discussed the basis for these policies with DFO and British Columbian officials. To obtain industry views concerning the implementation and impact of the Canadian policy, we met with representatives of the British Columbia Salmon Farmers Association and the Washington Fish Growers Association. We also discussed the basis for the Canadian requirements with officials from the U.S. Fish and Wildlife Service, the National Marine Fisheries Service, and the Department of Agriculture’s Office of Aquaculture as well as its Animal and Plant Health Inspection Service. We also discussed technical aspects of the Canadian policy with researchers from the University of Washington’s School of Fisheries, the National Biological Survey, and Washington State’s Department of Fisheries. In addition, we met with officials from the Office of the U.S. Trade Representative to obtain their views regarding the international trade ramifications of the Canadian policy. To determine what opportunities existed for U.S. producers to increase exports of salmon eggs and smolts to British Columbia, we interviewed representatives of the Washington Fish Growers Association and major Washington State exporters of fertilized salmon eggs and smolts. In addition, we obtained the views of the British Columbia Salmon Farmers Association and of spokespersons for two leading British Columbian salmon producers. 
Because the Census Bureau does not collect data on exports of fertilized salmon eggs and smolts by using a distinct tariff code, we were unable to obtain official figures on the level of exports of these commodities. Nevertheless, by directly contacting major exporters in Washington State, we were able to obtain some data on exports of fertilized salmon eggs and smolts. However, because of the proprietary nature of these data and producers’ concerns about confidentiality, we were unable to report the level of exports to specific countries or regions, including British Columbia. Further, Washington State and British Columbian producers were unable to provide us with definitive data on the production costs of salmon eggs and smolts. In February and March 1995, we obtained oral comments from the director of DFO’s Aquaculture and Habitat Science Branch and the National Aquaculture Coordinator of the Department of the Interior’s Fish and Wildlife Service. Their comments are discussed at the end of this letter. We conducted our work from June 1994 to February 1995 in accordance with generally accepted government auditing standards. Since 1985, DFO, in coordination with the British Columbian Ministry of Environment, Lands and Parks, has required extended quarantine for imports of fertilized Atlantic salmon eggs. Further, DFO has banned all imports of Atlantic salmon smolts. This policy was established to protect the province’s valuable wild and cultured salmonid stocks from inadvertent contamination by pathogens that might be introduced with imported fish eggs or live fish. The policy was officially adopted in writing in 1987. The following were some major elements of the 1987 policy:

- Imports had to comply with the Canadian national fish health protection regulations.
- Imports were permitted only from facilities that had been approved or certified by a Canadian fish health officer appointed by DFO.
- Only fertilized eggs that had been surface disinfected in an iodine solution could be imported. No live fish (smolts) or unfertilized eggs were allowed.
- All Atlantic salmon eggs and resultant stock had to be held under strict quarantine for a minimum of 12 months.
- Shipments were limited to 300,000 eggs per year per import license.
- Eggs were allowed only from broodstock or parent fish that had been held at the source facility (hatchery), separate from other stocks, for one full generation.
- After March 31, 1989, no further imports of Atlantic salmon were to be permitted.
- Importers were required to hold a number of fish to maturity for reproduction purposes.

In 1992, DFO revised its policy by relaxing the restrictions on imports of fertilized Atlantic salmon eggs. According to DFO officials, they decided to ease the original requirements because they had not detected any pathogens in tests of hatchlings from imported eggs since the policy had been put into effect. As shown in table 1, the revised policy, which is still in effect, (1) repealed the limit on the size of shipments of egg imports, (2) eliminated the prohibition on imports of Atlantic salmon after March 31, 1989, and (3) reduced the period during which eggs had to be quarantined. DFO, however, did not lift its ban on imports of Atlantic salmon smolts. In explaining their rationale for establishing the current policy on imports of fertilized Atlantic salmon eggs and smolts into British Columbia, DFO officials cited examples of fish pathogens that had been transferred with shipments of live fish in other parts of the world. Specifically, they referred to two pathogens introduced into Norway during the mid-1980s. DFO officials noted that there are numerous reports in the scientific literature of pathogens identified in various parts of the world, including areas of the United States, that have not been found in salmonid populations in British Columbia.
They maintained that the current policy was justified in order to prevent the introduction of such pathogens into the province, particularly since Atlantic salmon is a species that is not native to British Columbia. They argued that the policy was not intended to be a nontariff barrier to imports; they pointed out that the policy was applied impartially to imports of Atlantic salmon eggs and smolts from any source outside British Columbia, including other Canadian provinces. Spokespersons for associations representing producers in both Washington State and British Columbia challenged the need for the costly, prolonged quarantine requirement for fertilized eggs, given DFO’s strict rules for certifying hatchery facilities that can export to Canada. As noted earlier, under Canadian fish health protection regulations, such facilities must be certified to be disease free after four consecutive inspections over a period of 18 months. Certification must be obtained from an agent designated and authorized by DFO. Test results from hatchlings of imported fertilized eggs in British Columbia provide an indication of the effectiveness of DFO’s strict certification requirement. According to DFO’s own data, in 9 years of testing, no pathogens have been found among hatchlings from imported fertilized eggs. A spokesman for the British Columbia Salmon Farmers Association, which has an interest in preventing the introduction of pathogens into the province, stated that raising hatchlings from imported eggs in isolation rather than under strict quarantine conditions would be sufficient to minimize the risk of inadvertent introduction of exotic pathogens. He noted that imported fertilized eggs would be more competitive with domestically produced eggs if hatchlings did not have to be raised under quarantine conditions. 
He explained that the quarantine process is very costly because DFO’s quarantine protocol calls for treating runoff from facilities where imported hatchlings are raised, to eliminate potential contaminants before the runoff can be discharged into the ground. (He noted that the fresh water phase of salmon farming operations generates considerable runoff.) If the hatchlings of imported eggs were simply placed in isolation, he pointed out, they would be raised in separate containers from domestic hatchlings and monitored until they were placed in the salt water pens, and the runoff would not have to be treated. Questions about DFO’s total ban on smolt imports to British Columbia centered on whether smolts from Washington State should be exempt from the ban, given the fact that the contiguous coastal waters off the Pacific Northwest constitute a single watershed. According to various academic, industry, and U.S. government experts, it is highly unlikely that pathogens found in coastal waters on one side of the border would not be present on the other side, because wild Pacific salmon from river systems that drain into these waters migrate north and south along the coast. The experts noted that wild salmon, which are vulnerable to the same pathogens as farmed salmon, swim past the salt water net pens where the farmed salmon are kept. They argued that transporting salmon smolts for aquaculture purposes from coastal waters off Washington State to coastal waters off British Columbia would not impose an additional risk of introducing exotic pathogens because existing wild salmon populations migrate from Washington past the coast of British Columbia, and vice versa. A representative from the British Columbia Salmon Farmers Association told us that his organization would not oppose imports of smolts from Washington State as long as the smolts were transported in salt water containers and placed directly into salt water pens. 
Similarly, spokespersons for the Washington Fish Growers Association told us that Canadian authorities need to recognize that the waters off Washington State and British Columbia constitute a common watershed. In their view, DFO officials should consider allowing Washington State producers that comply with Canadian fish health protection regulations to export to British Columbia. They pointed out that currently DFO allows producers from the state of Maine that comply with these regulations to export Atlantic salmon smolts to the neighboring Canadian province of New Brunswick. Canadian federal and provincial officials in British Columbia told us that conditions in British Columbia are not comparable to those in New Brunswick because Atlantic salmon is not native to the Pacific Northwest. They argued that it would not be appropriate for Washington State producers that comply with Canadian fish health protection regulations to be allowed to export to British Columbia because Atlantic salmon is an “exotic” species in the Pacific Northwest. They expressed concern about the possibility that Atlantic salmon that escape from aquaculture facilities might eventually establish wild populations that would compete with the native Pacific salmon species. On the other hand, Washington State producers told us that it is unfair to restrict imports of Atlantic salmon from Washington State on the basis that Atlantic salmon is an “exotic” species in British Columbia, since the province already has large farmed Atlantic salmon populations. Finally, industry spokesmen, U.S. state and federal officials, and academicians we interviewed argued that DFO officials should have conducted a comprehensive risk analysis before adopting the strict sanitary measures called for in the Canadian policy. 
DFO officials told us that the policy is based on an accumulation of information on disease distribution over many years, including data on the occurrence of pathogens in the United States and British Columbia. As noted earlier, DFO officials also cited examples of fish pathogens that have been transferred with shipments of live fish in other parts of the world. However, an official with the U.S. Fish and Wildlife Service and various academic experts contended that Canadian authorities should undertake a risk assessment appropriate to the unique circumstances in the Pacific Northwest. They argued that Canadian sanitary measures should also take into consideration such factors as geography, ecosystems, and the effectiveness of sanitary controls in Washington State. When DFO’s policy on imports of Atlantic salmon eggs and smolts was established in 1985, the commercial salmon farming industry in British Columbia was developing into an international business, and the market for eggs and smolts was beginning to expand. According to representatives of the Washington Fish Growers Association, Canada’s import restrictions effectively precluded most U.S. producers of salmon eggs and smolts from entering the British Columbian market. Thus, there is no way to determine what share of the market Washington State producers of eggs and smolts might have been able to capture if they had been able to compete in the British Columbian market. Nevertheless, spokespersons for the Washington Fish Growers Association and major Washington State exporters agreed that DFO’s restrictions on imports of salmon eggs and smolts have resulted in a loss of market opportunities for them in British Columbia. According to Association representatives, DFO’s policy has discouraged most Washington State producers from exploring British Columbia as a market, while other companies that tried to export in the past have given up. 
Major salmon egg exporters from Washington State agreed that there would be great market potential for their Atlantic salmon eggs in British Columbia if existing import restrictions were removed. Similarly, these exporters believed that there would be moderate to great market potential for Atlantic salmon smolts in the province if the ban on them were lifted. Representatives of the Washington Fish Growers Association pointed out that, because of its proximity and the large size of its salmon farming industry, British Columbia represents a natural market for their salmon eggs and smolts. They noted that, although they have been able to develop markets in other areas of the world, such as Chile and Japan, only a small percentage of their exports goes to British Columbia. While there are no exact figures available on exports of salmon eggs and smolts worldwide, Washington State exporters reported exporting approximately 47 million salmon eggs worldwide in 1993. Exports to British Columbia represented less than 10 percent of this figure. Salmon producers we interviewed in British Columbia also told us that there is a market in the province for imports of Atlantic salmon eggs and smolts from Washington State. Their comments echoed the findings of a September 1990 report on British Columbia’s Atlantic salmon farming industry commissioned by DFO. In that report, the availability of more and better quality Atlantic salmon eggs was cited as one of the industry’s highest priorities. The report noted the poor quality of the Atlantic salmon strains in British Columbia and predicted that, unless import requirements for salmon eggs in the province were simplified, British Columbian salmon farmers would find it increasingly difficult to compete with producers from other parts of the world. 
One British Columbian producer told us of the excellent quality of Atlantic salmon eggs he had imported from Washington State, and he indicated he would like to purchase more eggs at comparable quality and cost. He explained that the expense associated with quarantining imported eggs effectively discouraged expanding imports from Washington State. In February 1995, we provided relevant portions of this report to the director of DFO’s Aquaculture and Habitat Science Branch, and she provided some technical clarifications that we incorporated where appropriate. In addition, on March 6, 1995, we discussed the contents of this report with the National Aquaculture Coordinator of the Department of the Interior’s Fish and Wildlife Service. He agreed with the contents of our report and offered a few clarifying comments, which we have incorporated where appropriate. We are sending copies of this report to the Secretaries of Agriculture, Commerce, State, and the Interior and to the U.S. Trade Representative. We will also make copies available to other interested parties upon request. Major contributors to this report were Elizabeth Sirois, Assistant Director; Juan Gobel, Project Manager; and Larry Thomas, Evaluator-In-Charge. Please call me at (202) 512-4823 if you have any questions concerning this report.

Allan I. Mendelowitz, Managing Director
International Trade, Finance, and Competitiveness
Pursuant to a congressional request, GAO provided information on Canada's policy regarding the import of fertilized salmon eggs and smolts, focusing on: (1) the reasons Canada implemented its policy; (2) whether concerned parties view Canada's policy as reasonable; and (3) whether opportunities exist for U.S. producers to increase their salmon egg and smolt exports. GAO found that: (1) since 1985, Canada's Department of Fisheries and Oceans (DFO) has maintained a policy that requires a quarantine of imports of fertilized Atlantic salmon eggs and bans imports of Atlantic salmon smolts into British Columbia; (2) hatchery facilities must be certified by DFO as free of certain diseases, on the basis of inspections conducted over a period of 18 months, before they may export fertilized salmon eggs; (3) according to DFO officials, the salmon policy was developed to protect British Columbia's valuable fishery resources from pathogens that could be inadvertently introduced; (4) Atlantic salmon is the principal salmon species used in worldwide aquaculture production and accounts for about 60 percent of farmed salmon production in British Columbia; (5) salmon hatchery producers in the United States and Canada have questioned whether the Canadian policy is necessary, since in 9 years of testing, none of the hatchlings from imported eggs have been found to carry pathogens; (6) Washington state and federal officials have questioned the need for a ban on U.S. hatchery smolts, since both British Columbian and Washington salmon share the same watershed and have equal chances of acquiring pathogens; and (7) if Canada's import restrictions were relaxed, there would be great market potential for U.S. hatchery salmon eggs and smolts.
Over the last 2 decades, the number of school-aged children with limited English proficiency in the nation has grown dramatically, increasing from less than 1 million in 1980 to more than 3.5 million in 1998. Despite small rates of growth in the total enrollment of all K-12 children, the enrollment of school-aged children with limited English proficiency across the United States grew rapidly between school years 1989-90 and 1997-98 (see fig. 1). While California, Florida, New York, and Texas continue to have the largest number of children with limited English proficiency (see fig. 2), other states that previously had small populations of such children have experienced large increases in recent years. For example, in Alabama, Idaho, Nebraska, Nevada, North Carolina, and Tennessee, the number of children with limited English proficiency more than doubled between school years 1992-93 and 1997-98 (see fig. 3).

In 1968, the Congress passed the Bilingual Education Act (BEA). The purpose of the BEA is to educate students with limited English proficiency so that they can reach the academic standards expected of all students. The 1994 reauthorization of the BEA created the four bilingual education grant programs—Program Development and Implementation Grants (PDI), Program Enhancement Projects (Enhancement), Comprehensive School Grants (Comprehensive), and Systemwide Improvement Grants (Systemwide)—to distribute funds directly to school districts serving children with limited English proficiency. These are the only federal programs that specifically target instructional services to children with limited English proficiency. In addition to the four federally funded bilingual education programs authorized by the BEA, other federal programs also address the special needs of these children, though they do not exclusively target this population. 
For example, Title I of the Elementary and Secondary Education Act, which gave $8.7 billion in fiscal year 2000 to assist school districts educating disadvantaged students, is the largest federal program that includes support for children with limited English proficiency. However, most services for children with limited English proficiency are funded with local and state—not federal—dollars.

Education’s Office of Bilingual Education and Minority Languages Affairs (OBEMLA) administers the four competitive bilingual education grant programs. The cost of administering these programs is funded through Education’s program administration account, while funding for the program grants is included in OBEMLA’s program budget. The bilingual education programs do not receive separate appropriations from the Congress; rather, OBEMLA receives a single budget appropriation to fund programs authorized by the BEA. (See app. I for a listing of all programs funded by the single budget appropriation.) During the grant competition cycle (approximately 4 to 6 months long), application forms are reviewed and scored based on applicants’ responses to the selection criteria (see app. II). The applications for the four programs are very similar and are organized into two main sections. The first section requests such information as a proposed summary budget, a detailed itemization of proposed annual expenses, and student data, including the language groups and number of both limited English and English-proficient students to be served. The second section, the bulk of the application, is a narrative in which applicants describe the proposed project by demonstrating how it meets the selection criteria established by Education. Although the application forms and the selection criteria for all four programs are very similar, school districts and schools use the application to describe projects tailored to their specific local needs. 
School districts may submit applications to receive funding from more than one of the programs. At the end of the grant competition cycle, Education ranks the applications and awards funding to grantees. OBEMLA’s management plan contains safeguards to prevent individual schools from receiving funding from more than one bilingual education program. In fiscal year 2000, Education funded approximately 28 percent of the 665 applications it received. According to OBEMLA staff, the following number of grants were awarded in fiscal year 2000 to school districts to serve children with limited English proficiency: 18 Systemwide grants averaging $551,000 each; 75 Comprehensive grants averaging $245,300 each; and 92 PDI grants averaging $156,200 each. No Enhancement grants were awarded in fiscal year 2000. In coming years, Education plans to award a greater proportion of the grants to schools in the early stages of developing and implementing new programs. Congressional interest in the BEA has centered on the appropriate federal role in meeting the special needs of children with limited English proficiency. The 107th Congress is considering several bills as it deliberates BEA reauthorization in fiscal year 2001. One bill recommends the elimination of the four grant programs and another seeks to significantly increase funding for bilingual education programs and consolidate the four programs into a single grant program. The President’s budget proposes to implement changes in bilingual and immigrant education that would consolidate all currently funded bilingual and immigrant programs, as well as the Foreign Language Assistance program, into a single flexible performance-based state grant program. All four federal bilingual education programs share the same performance goals and measures, possess similar eligibility criteria, and allow similar uses of program funds (see table 1). 
The four programs target students with limited English proficiency in kindergarten through 12th grade. The overall objectives of these four programs are to provide bilingual or special alternative education programs to children with limited English proficiency and to help such children reach high academic standards. Under each program, students’ achievement is measured every 2 years to determine if they have demonstrated continuous progress in oral and written English, as well as in language arts, reading, and math. Local educational agencies (LEAs) are eligible to apply for funding under the four bilingual education programs; however, only LEAs with high concentrations of such students are eligible to apply for grants from the Comprehensive and Systemwide programs. LEAs may collaborate on their grant applications with institutions of higher education, community-based organizations, and state education agencies. All four programs also permit the use of funds to provide instructional services and materials, professional staff development for teachers and teacher’s aides, and family education programs. The PDI and Enhancement programs require specific uses of funds; the Comprehensive and Systemwide programs permit funds to be used on services from any of the above broad categories. Only the Systemwide program specifically authorizes services at the school district level, such as those associated with grade promotion and graduation requirements. All school districts and schools receiving funds must coordinate with other relevant programs and services to meet the full range of needs of participating students.

The legislative purpose and grant length of the four bilingual education programs also vary. For example, PDI grants are to be used to develop and implement new bilingual education programs. According to Education officials, school districts typically submit applications to the PDI program if the population they intend to serve is new to a community and the students are relatively close in age. 
The purpose of the Enhancement program, according to the legislation, is to expand existing bilingual education programs. In practice, however, differences between the PDI and Enhancement programs have not been apparent to grantees. Education officials said that the types of programs described in the applications submitted by some school districts are the same for both the PDI and Enhancement programs. School districts typically submit applications to the Comprehensive program if the students they intend to serve are concentrated in one school but are dispersed throughout several grades. School districts typically submit applications to the Systemwide program if students with limited English proficiency of all ages attend schools throughout the district. Both the PDI and Enhancement programs make what are considered short-term grants because they provide funding for 2 to 3 years. Both the Comprehensive and the Systemwide program grants provide funding for 5 years.

OBEMLA officials awarded grants to school districts with similar characteristics that provided similar services; however, individual schools typically did not receive funding from more than one bilingual education program. Our review of grantee files confirmed Education officials’ estimate that 80 percent of grants funded projects in elementary schools, and approximately 70 percent of the children served by the programs spoke Spanish as their primary language. A majority of grants funded in fiscal year 2000 went to school districts in states with historically high concentrations of students with limited English proficiency (see fig. 4). However, according to agency officials, Education has begun to award an increasing number of grants to school districts in states that until recently had small numbers of such students. According to Education officials, grantees receiving funding under each of the four programs provided similar services to their students with limited English proficiency. 
The services provided with program funds fell within three broad categories: instructional activities and materials, professional staff development for teachers and teacher’s aides, and family education programs. However, the precise nature of the services varied by district and school. For instance, some school districts chose an English-based instructional approach to teaching students with limited English proficiency, while others made more extensive use of the students’ native language (bilingual approach). Although schools receiving funds were similar in many respects, according to our file review, there is little evidence to indicate that individual schools received funding from more than one bilingual education program (see table 2). Even in instances where school districts received multiple grants, they were distributed so that individual schools typically did not receive funding from more than one program. On the basis of our file review and discussions with grantees and Education officials, we learned that while large school districts located in New York City and Los Angeles County were among the proportion (18 percent) of school districts receiving funding from more than one bilingual education program, individual grants were targeted to different schools within these large districts. The effectiveness of the four bilingual education programs on a national level is unknown because locally collected data are not comparable. The BEA requires local assessments of student outcomes, and leaves the choice of assessment tests to the local program. Although the legislation does not address how these evaluations are to be funded, grantees are required to submit evaluations every 2 years and can—according to Education officials—use grant funds for that purpose. Grantees use these evaluations to improve the local program, further define local program goals and objectives, and measure student outcomes such as academic achievement. 
To measure student academic achievement, the legislation specifies that local projects provide data on whether students with limited English proficiency are achieving state performance standards. For example, grantees must provide data comparing the academic achievement and school retention rates of students with limited English proficiency with those of English-proficient students. The legislation also requires data on program implementation and the relationship between activities funded by these programs and those funded by other sources. Because school districts use different assessment tests and define terms differently, student outcome data are not comparable among grantees, or nationally. While the BEA does not require grantees to use specific assessment tests, individual states or school districts may have such requirements. Grantees measure student academic achievement against different performance standards depending on, for instance, whether the standards were set at the state level or by a school district. Furthermore, many grantees have their own definitions and measures of key terms such as school retention. Education’s guidance states that because of the variation in how school retention is defined and measured, it is important that each local program follow its own school, district, or state definition and measure. One study prepared for Education found that it was difficult to aggregate data to provide a national picture of program effectiveness for these reasons, and also because of the variability in the quality and amount of data reported by school districts. However, Education may be able to garner some information about how well local bilingual education programs are meeting program goals by comparing local data with Education’s performance standards. Even if Education were able to obtain uniform data across local programs, it would still be difficult to isolate the effects of BEA funding. 
As mentioned earlier, funding from other federal programs—the largest of which is Title I—also supports these children. Moreover, state and local funds support most of the services provided to students with limited English proficiency. Because services provided to students with limited English proficiency are funded through multiple federal, state, and local sources, it would be difficult to isolate the effects of the four bilingual education program funds from other funding effects. Because all four bilingual education programs share the same goals, target the same types of children, and provide similar services, these programs lend themselves to consolidation. Though federal cost savings would likely be small, program consolidation would allow Education to redirect some of the resources it uses to manage four separate grant competitions to accomplish other activities, such as conducting site visits, reviewing and evaluating specific aspects of a grantee’s activities, and providing technical assistance. Program consolidation may also reduce applicant burden associated with multiple federal programs designed to achieve the same overall objectives. Education officials believe that consolidating these programs has merit and have already taken some steps to reduce overlap among the four programs. For example, because of similarities between the PDI and Enhancement programs cited by grantees and OBEMLA staff, Education holds grant competitions for these programs on alternating years (except in fiscal year 1999) (see table 3). Although reducing the number of programs for students with limited English proficiency requires congressional action, Education already decides which of the four programs to fund in a particular fiscal year and at what level to fund them. Given the inefficiencies associated with program overlap, the Congress may want to consider consolidating the four bilingual education programs into one program. 
While opportunities exist for consolidating the four bilingual education programs, federal cost savings, if any, from this action would likely be small for two reasons. First, the way programs are funded may limit any savings. As part of its annual budget request, Education proposes a funding level (as a single line item) for the four bilingual education programs. Because congressional appropriations are made as a single line item for the four programs, Education has the discretion to decide how to distribute the appropriated funds to the individual programs. Therefore, eliminating one or more of the programs would not necessarily change the funding level, which is proposed by Education’s budget request and determined by the Congress. Second, staff reductions are unlikely, thus limiting cost savings. Because the same 28 staff members administer all of OBEMLA’s programs (the four bilingual education programs we examined as well as 10 others), staff reductions could affect the management of all OBEMLA programs. Consolidating the four bilingual education programs may provide benefits other than cost savings to Education. According to OBEMLA officials, a reduction in the number of applications received—and possibly the number of grant competitions held—would allow staff to reallocate some of their time to other important program-related activities. Currently, OBEMLA holds a grant competition lasting approximately 4 to 6 months for each of the bilingual education programs awarded in a given year. According to OBEMLA staff, approximately 10 grant competitions are held for the bilingual education and other OBEMLA programs each year. This process consumes significant staff resources. OBEMLA officials also mentioned that some school districts submit grant applications to more than one bilingual education program in an effort to increase their chances of receiving funding from at least one, but OBEMLA does not maintain data on how widespread this practice is. 
According to Education officials, reducing the number of programs would likely decrease the number of grant applications received because school districts would be less likely to submit multiple grant applications. As a result, OBEMLA staff would spend less time reviewing applications and, possibly, less time conducting grant competitions. OBEMLA staff stated that, by spending less time reviewing applications and conducting grant competitions, they would have more time to effectively conduct other important activities such as visiting every grantee at least once during the course of its funding cycle, reviewing and evaluating specific aspects of a grantee’s activities, and providing technical assistance. Furthermore, as part of its efforts to provide technical assistance, Education officials might have more time to identify and disseminate information on effective practices gathered from grantees that have been successful in meeting program goals. Education officials also believe that time saved as a result of consolidation may allow for a greater emphasis on building collaborations between grantees and the other programs providing support to children with limited English proficiency. Consolidation may also directly benefit grantees applying to more than one of the bilingual education programs by reducing the burden associated with applying to multiple federal programs designed to achieve the same overall objectives. Several grantees we interviewed said that the application process was time consuming. According to the Office of Management and Budget, each application takes from 80 hours (PDI and Enhancement applications) to 120 hours (Comprehensive and Systemwide applications) to complete. Grantees we spoke with estimated that they spent anywhere from 6 days to 6 weeks completing applications. 
Furthermore, according to Education officials, grantee applications submitted to the PDI and Enhancement programs often proposed using the grants to fund the same types of activities. Given that applications for funding from the four bilingual education programs we reviewed require extensive time and effort to prepare, reducing the number of programs may decrease the administrative burden experienced by school districts applying for multiple program grants. OBEMLA staff believes that the four bilingual education programs meet two funding priorities for students with limited English proficiency. The first priority is to help school districts and schools that have experience serving students with limited English proficiency, and the second is to help those with little experience serving such students. At present, the Comprehensive and Systemwide programs focus on the first priority by meeting the needs of grantees that are upgrading existing programs, and the PDI and Enhancement programs meet the second priority by awarding grants to educate new populations of limited English-proficient students. Education officials recognize that four bilingual education programs are not necessary to meet the needs of school districts serving students with limited English proficiency. Education has taken steps to reduce redundancy by not awarding new grants under all four programs every year. During the 6-year period between 1995 (when the programs were first funded) and 2000, Education held grant competitions for all four bilingual education programs in only 1 year. Staff members acknowledged that given enough flexibility to meet a variety of funding priorities, they may be able to serve all grantees with one program. The four federal bilingual education programs included in this review overlap in many significant ways, and our current and past work has shown that overlap can create an environment in which programs do not serve participants as efficiently as possible. 
Education officials and some grantees recognize that fewer than four programs could meet the needs of schools educating students with limited English proficiency. We believe it would be possible for a single federal program to address the agency’s funding priorities if the program has adequate flexibility. To decrease the overlap caused by four bilingual education programs that were designed to achieve the same overall objectives, the Congress may want to consider program consolidation. The Congress could authorize a single federal program that consolidates all four bilingual education programs into one but provides Education with the flexibility to meet the varied needs of school districts serving students with limited English proficiency. Such a program would focus on grantees with experience educating students with limited English proficiency as well as those grantees with little experience in this area. We provided a draft of this report to the Department of Education for comment and we received written comments, which are included in appendix III. Since the discussions we had with program staff during our review, Education has decided that it supports consolidating the four programs into one, which is consistent with the President’s budget proposal. Thus, we have revised the report to reflect Education’s position, which also supports the consolidation of the four programs suggested in our Matter for Congressional Consideration. However, our review did not address whether the federal government or states should administer the program, and Education officials did not discuss this topic with us during our review. In addition, we received technical comments from Education and incorporated these comments where appropriate. We are sending copies of this report to the Honorable Roderick R. Paige, Secretary of Education; relevant congressional committees; and other interested parties. We will also make copies available to others on request. 
Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix IV. During grant competitions, a group of peer reviewers rates applications for each of the four bilingual education programs using a common set of selection criteria. These criteria help reviewers assess the strength of individual applications. Reviewers assign numerical scores and rank the applications to determine those that merit grant awards. The selection criteria are similar across all four programs: meeting the purpose of the statute; extent of need for the project; quality of project design; quality of project services; proficiency in English and another language; language skills of personnel; project activities; quality of project personnel; adequacy of resources; quality of the management plan; integration of project funds; quality of the project evaluation plan; and commitment and capacity building. In addition to those named above, the following individuals made important contributions to this report: Sherri Doughty, Ellen Habenicht, Corinna Nicolaou, James Rebbe, Jay Smale, and Jim Wright.
|
In fiscal year 2000, the federal government funded four bilingual education programs--Program Development and Implementation Grants, Program Enhancement Projects, Comprehensive School Grants, and Systemwide Improvement Grants--that award grants to school districts to serve children with limited English proficiency. This report reviews (1) how similar the performance goals and measures, eligibility criteria, and allowable services are among the four bilingual education programs; (2) to what extent the different kinds of grants were made to the same types of schools or school districts and were used to provide the same services; (3) what is known about these programs' effectiveness; and (4) whether opportunities exist for program consolidation and cost savings. GAO found that all four federal bilingual education programs share the same performance goals and measures, use similar eligibility criteria, and allow for similar uses of program funds. In fiscal year 2000, the four bilingual programs made grants to school districts that shared some characteristics and provided similar services; however, individual schools typically did not receive funding from more than one program. The services provided with program funds are similar, but are tailored by school districts and schools to meet local needs. Currently, the effectiveness of the four bilingual programs on a national level is not known. The authorizing legislation requires the use of local evaluations to assess students' progress in meeting state standards. The variation in local assessment tests complicates the task of providing a national picture of program effectiveness. Even if the Department of Education were able to obtain uniform information on local projects, it faces challenges in trying to isolate the funding effects of the four bilingual programs from funding effects of other programs that support students with limited English proficiency.
Finally, these four bilingual programs lend themselves to consolidation. Although cost savings from consolidation would likely be small, there may be advantages to consolidation, such as freeing up staff for other important activities and reducing the administrative burden associated with redundant federal programs.
|
Generally, Medicare covers SNF stays for patients needing skilled nursing and therapy for conditions related to a hospital stay of at least 3 consecutive calendar days, if the hospital discharge occurred no more than 30 days prior to admission to the SNF. For qualified beneficiaries, Medicare will pay for medically necessary services, including room and board, nursing care, and ancillary services such as drugs, laboratory tests, and physical therapy, for up to 100 days per spell of illness. For more than a decade beginning in 1986, Medicare SNF spending rose dramatically—averaging 30 percent annually. During this period, Medicare payments to each SNF were based on the costs incurred by the SNF in serving its Medicare patients. There was minimal program oversight, providing few checks on spending growth. Although Medicare imposed payment limits for routine services, such as room and board, it did not limit payments for capital and ancillary services, such as therapy. Cost increases for ancillary services averaged 19 percent per year from 1992 through 1995, compared to a 6 percent average increase for routine service costs. To curb the rise in Medicare SNF spending, BBA required a change in Medicare’s payment method. HCFA began phasing in the SNF PPS on July 1, 1998. Under PPS, SNFs are paid a prospectively determined rate intended to cover most services provided to a patient during each day of a Medicare-covered SNF stay. The SNF payment rate is based on the 1995 national average cost per day, updated for inflation. Because the costs of treating patients vary with their clinical conditions and treatments, daily payments for each patient are adjusted for the patient’s expected care needs depending on the patient’s assignment into one of 44 different payment groups, also called resource utilization groups (RUG). A RUG describes patients with similar therapy, nursing, and special care needs and has a corresponding payment rate. 
The RUG classification system is hierarchical. The first distinction made is whether the patient has received (or is expected to receive) at least 45 minutes a week of therapy (see fig. 1). For these rehabilitation patients, further divisions—into ultra high, very high, high, medium, and low therapy categories—are made based on the total minutes and type of physical, occupational, and speech therapy provided over 7 days. Each of these categories is defined by a range of therapy minutes and the type of therapy provided. For example, patients in the very high category receive between 500 and 719 minutes of therapy over 7 days. Each category is further subdivided into RUGs, based on a patient’s dependency in performing ADLs, such as eating, transferring from a bed to a chair, or using the toilet. There are 14 rehabilitation RUGs, which account for three-fourths of Medicare-covered stays. Among patients who have not received (or are not expected to receive) 45 minutes a week of therapy, the system distinguishes between patients requiring extensive or special care or who are clinically complex (12 RUGs) and those receiving custodial care (18 RUGs). The classification system uses specific medical conditions (such as having multiple sclerosis or being comatose) and special care needs (such as requiring tracheostomy care or ventilator support) within the past 14 days to group patients into extensive services, special care, and clinically complex categories. Patient characteristics such as the ability to perform ADLs, signs of depression, and conditions requiring more technical clinical knowledge and skills are used to assign patients into RUGs within these categories.
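The therapy-minute hierarchy described above lends itself to a short illustration. The sketch below uses only thresholds stated in this report (45 minutes a week to qualify as rehabilitation; 325-499 minutes for high; 500-719 for very high; 720 or more for ultra high). The boundary between the medium and low categories is not given here, so those patients receive a combined label, and the further ADL-based subdivision into individual RUGs is omitted; this is an illustration, not CMS's actual grouper logic.

```python
def rehab_category(weekly_therapy_minutes):
    """Classify a SNF patient's rehabilitation category by weekly therapy
    minutes (sketch only).

    Thresholds come from the report, except the medium/low boundary,
    which the report does not state; patients below the high category
    therefore get a combined "medium or low" label. Actual RUG assignment
    also depends on ADL scores, which this sketch omits.
    """
    if weekly_therapy_minutes < 45:
        # Non-rehabilitation branch: extensive services, special care,
        # clinically complex, or custodial categories.
        return "non-rehabilitation"
    if weekly_therapy_minutes >= 720:
        return "ultra high"
    if weekly_therapy_minutes >= 500:
        return "very high"
    if weekly_therapy_minutes >= 325:
        return "high"
    return "medium or low"
```

Note that 719 and 720 minutes fall on opposite sides of the very high/ultra high boundary, a point the report returns to when discussing payment incentives.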
Since 1991, SNFs have been required to periodically assess and plan for residents’ care using the MDS, which documents 17 aspects of a patient’s clinical condition, including the amount of therapy provided or planned, diagnoses, certain care needs, and the ability to perform ADLs at the patient’s most dependent state. In addition to determining Medicare payments, these data are used to measure patient needs, develop a plan of care, and monitor the quality of care. To gather the MDS, an in-house interdisciplinary team assesses each patient’s clinical condition at established intervals throughout the patient’s stay. The Medicare assessment schedule requires that the initial assessment be performed during days 1 through 5 of a patient’s stay, but allows it to be performed as late as days 6 through 8, termed “grace days,” which give staff additional flexibility in conducting the assessments. The initial assessment is used to assign patients to a RUG that establishes payments for the first 14 days of care. For patients staying longer than 14 days, a second assessment must be conducted during days 11 through 14 that determines the RUG assignment and payment rate for days 15 through 30 of the patient’s stay. An additional assessment is performed prior to the 30th day of care and every 30 days thereafter; each of these assessments establishes the payment for the next 30 days up to the 100th day. SNFs can classify patients primarily needing therapy into the high, medium, or low rehabilitation payment group categories for the initial assessment using either actual minutes of therapy provided or an estimate of the amount that will be provided over the 2 weeks covered by the initial assessment. If a patient is classified into one of these rehabilitation categories using an estimate, but actually receives less than the amount of therapy to qualify into that category, payments to the SNF for the initial assessment period are not reduced.
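The assessment schedule described above amounts to a mapping from each covered day of a stay to the assessment that sets its payment rate. The sketch below encodes the day ranges from the report (the initial assessment covers days 1 through 14, the second covers days 15 through 30, and later assessments each cover the next 30 days up to the 100-day limit); the labels for assessments after the second are shorthand of my own, not official terminology.

```python
def assessment_window(day_of_stay):
    """Return which MDS assessment sets the payment rate for a given
    Medicare-covered SNF day, per the schedule described in the report."""
    if not 1 <= day_of_stay <= 100:
        raise ValueError("Medicare covers at most 100 SNF days per spell of illness")
    if day_of_stay <= 14:
        return "initial"        # performed days 1-5, or grace days 6-8
    if day_of_stay <= 30:
        return "second"         # performed days 11-14
    # Subsequent assessments each cover the next 30 days of the stay.
    return f"assessment {(day_of_stay - 31) // 30 + 3}"
```

For example, day 45 of a stay is paid under the third assessment, and day 100, the last covered day, under the fifth.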
To classify patients into the very high or ultra high payment group categories on the initial assessment, SNFs must have already provided the minimum amount of therapy that defines these categories when the assessment is done. The accuracy and completeness of the patient assessment information are critical to ensure appropriate categorization of patients into payment groups. For example, to distinguish between different levels of assistance required in performing ADLs, a SNF needs to document how often and how much assistance was provided to a patient during the past 7 days. For a patient receiving 720 minutes or more of therapy a week (the ultra high rehabilitation category), the difference between assessing a patient as needing “extensive” versus “limited” assistance in performing one ADL, such as eating, may result in an additional payment of up to $48 per day to the SNF. (See app. II for a comparison of ADLs and payment rates for each RUG.) Thus, a SNF might respond to the PPS by increasing the resources devoted to completing the MDS. This possible SNF response to the new payment system may be similar to how hospitals responded to the inpatient hospital PPS. Under the inpatient PPS, hospitals are paid a prospectively determined rate per patient stay, which is adjusted for expected resource needs based on factors such as patient diagnoses and treatment. After the implementation of the inpatient PPS in 1983, hospitals expanded the number of diagnoses they reported to describe patients. These changes in documentation resulted in some patients being classified into higher payment categories, which increased hospital payments. A SNF also has an incentive to change the amount of care provided to minimize its costs and maximize its payments. Because the amount of therapy provided is key to classifying the majority of patients into RUGs, a SNF benefits when it provides an amount of therapy on the low end of the range of therapy minutes associated with that RUG.
For example, furnishing 1 additional minute of therapy a week could move a patient from the very high to the ultra high category. The SNF would receive an additional $63 or $99 more per day, depending on the patient’s ADL needs, but there may not have been a proportionate increase in costs. To ensure that its patients are grouped into the highest possible payment groups, a SNF may adjust the timing of its initial patient assessments. Grace days are intended to give SNFs the flexibility to delay care until patients are ready to receive therapy, while ensuring that payments reflect the treatment levels that are provided to the patient. SNFs may opt to use grace days when conducting the initial assessment of patients who may be grouped into the payment group categories that require actual minutes of therapy (ultra and very high rehabilitation). Otherwise, if initial assessments are done before the grace days, patients may not have received enough therapy to reach the weekly threshold for placement into one of these categories. Since the implementation of the SNF PPS, some nursing home chains have claimed that payments are inadequate and that this has caused their financial condition to erode. We have reported that total SNF PPS payments are likely to be adequate and may be excessive given that the payment rates include the costs of inefficient delivery, unnecessary care, and improper billings. But the Medicare Payment Advisory Commission and we have raised concerns that the payment rates for certain types of patients may be inadequate because the patient classification system may not appropriately reflect the differing needs of patients who require multiple kinds of health care services, such as extensive or special care, rehabilitative therapy, and ancillary services. We have also expressed concern that the use of therapy minutes provided to patients as a way to classify patients might encourage the provision of unnecessary services. 
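The incentive in that example is easy to quantify. The $63 and $99 daily increments are the report's figures for the move from very high to ultra high; extending them over a 7-day week is an illustrative assumption.

```python
# One additional minute of therapy (719 -> 720 minutes a week) shifts a
# patient from the very high to the ultra high category. The $63-$99
# daily rate increases are from the report; the weekly framing is an
# illustrative assumption.
DAYS_PER_WEEK = 7
daily_increase_low, daily_increase_high = 63, 99

extra_weekly_low = DAYS_PER_WEEK * daily_increase_low      # $441 more per week
extra_weekly_high = DAYS_PER_WEEK * daily_increase_high    # $693 more per week

print(extra_weekly_low, extra_weekly_high)
```

In other words, a single extra minute of therapy can yield several hundred dollars of additional payment over one week of a stay, with little corresponding increase in cost.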
In response to concerns about the overall adequacy of Medicare payments and their distribution across different types of patients, the Congress has raised payments twice since the PPS implementation. These actions increased payments across-the-board for all RUGs and, in addition, for certain RUGs. The Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 (BBRA) temporarily increased Medicare’s payments for all RUGs by 4 percent, beginning in fiscal year 2001 through the end of fiscal year 2002. In addition, BBRA increased payments for 15 RUGs (3 rehabilitation RUGs and all extensive services, special care, and clinically complex RUGs) by 20 percent beginning in April 2000. The Congress intended this increase to be temporary—until refinements to the RUGs patient classification system were implemented. However, refinements have not been implemented and the Congress again revised the payment rates. The Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) temporarily increased the portion of the payment related to nursing costs by 16.66 percent for all payment groups, which raised the overall payment rates from 4 to 12 percent, depending on the RUG, beginning April 1, 2001, through September 30, 2002. In addition, BIPA replaced the 20 percent BBRA increase that applied to 3 out of the 14 rehabilitation RUGs with a 6.7 percent increase for all rehabilitation RUGs. CMS has also responded to concerns about PPS. In July 2001, CMS awarded a contract to determine the feasibility of refinements to PPS, including alternatives to the RUGs patient classification system. To date, this contract has not resulted in proposed refinements to the RUGs system and the contractor’s preliminary report is not due until fall 2004. CMS has also supported work to assess and verify the MDS data that underlie PPS. 
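Because the BIPA increase described above applied only to the nursing component of each rate, its overall effect on a RUG's payment scales with the share of that rate attributable to nursing costs, which is why the report cites a range of 4 to 12 percent. The sketch below uses the report's 16.66 percent figure; the nursing-cost shares are hypothetical values chosen only to reproduce the endpoints of that range.

```python
NURSING_INCREASE = 0.1666   # BIPA increase to the nursing component (from the report)

def overall_increase(nursing_share):
    """Overall fractional increase in a RUG's daily rate when only the
    nursing component rises. nursing_share is a hypothetical input."""
    return nursing_share * NURSING_INCREASE

# A RUG whose rate is about 24% nursing costs rises roughly 4% overall;
# one that is about 72% nursing costs rises roughly 12%, matching the
# report's 4-12% range. (The shares themselves are assumptions.)
print(round(overall_increase(0.24) * 100, 1))   # 4.0
print(round(overall_increase(0.72) * 100, 1))   # 12.0
```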
However, we recently reported that CMS’s proposed on-site and off-site review of MDS assessments may not be sufficient to ensure the accuracy of MDS assessments in most nursing homes or to systematically evaluate the performance of state efforts to do so. In September 2001, CMS awarded a contract to determine if there are differences between the documentation of patient care needs and actual patient care needs and to detect irregularities in MDS assessments. The contractor began these data monitoring activities in the spring of 2002, which include checking that the RUGs reported on the Medicare claims match those on the MDS assessments and examining the distribution of patients across the payment groups. Among patients primarily receiving rehabilitation care, more were classified at their initial assessment into moderate rehabilitation payment group categories and fewer into the intensive and low rehabilitation categories since the implementation of PPS. Providers reported that the payments for the moderate rehabilitation payment groups were more favorable, relative to their costs, than other payment groups. Further, the share of patients initially classified into the rehabilitation RUGs whose payments were increased by BBRA provisions grew, while the share of patients initially classified into most of the other payment groups declined or stayed the same. Across patients initially assigned to the extensive, special care, or clinically complex categories, more were classified as requiring extensive services—the highest paying category—and fewer into the special care or clinically complex categories. SNFs changed two patient assessment practices that could have contributed to these shifts in patients’ initial payment group assignments. First, SNFs increased their use of estimated—rather than actual—therapy minutes to assign patients to rehabilitation categories. 
Second, SNFs assessed patients later in their stays, making it more likely that they received more therapy and therefore would be classified into categories with higher payments. Although the proportion of SNF Medicare patients initially classified into rehabilitation payment group categories remained the same overall, the distribution of patients within these categories changed considerably from first quarter 1999 to first quarter 2001 (see table 1). By 2001, more Medicare patients receiving therapy were initially classified into the two moderate rehabilitation categories—medium (16 percent more) and high (17 percent more), which made up about two-thirds of Medicare SNF admissions. The share of patients initially classified into ultra high—the most intensive rehabilitation category—decreased to comprise just 3 percent of all Medicare SNF patients at their initial assessment in 2001. This shift is consistent with the industry’s assertions that the high and medium categories have more favorable payments, relative to their costs, than other categories. We do not know if this shift reflects a change in the care needs of patients from 1999 to 2001. Some of the shifts in the distribution across individual rehabilitation RUGs paralleled changes in payment rates made by the Congress. Within the high and medium rehabilitation payment group categories, the shares of patients initially classified into RUGs that received congressionally mandated payment increases in 2000 grew substantially more than the shares of patients classified into rehabilitation RUGs that did not (see table 2). For 8 of the 11 rehabilitation RUGs without this special increase, the shares of patients at their initial assessment declined and only one experienced an increase. 
Among the patients initially classified into the extensive and special care or clinically complex categories (all of which were increased 20 percent by BBRA), the share of patients initially assessed as requiring the most intensive care—those in the extensive services category—increased to become about two-thirds of patients in these categories, while the share of patients in the special care and clinically complex categories decreased. Since the introduction of PPS, changes in SNF patient assessment practices have made it easier to classify patients into some categories with higher payments. When performing their initial patient assessments, SNFs have increasingly opted to use estimates of the amount of therapy they expect to provide (rather than actual therapy given during the first week of care) to categorize patients into the high, medium, and low therapy categories for the first 14 days of care. Because payments are based on these estimates, payments for some patients were higher than they would have been if the payments were based on actual therapy provision. Comparing the first quarters of 1999 and 2001, the practice of using estimated therapy minutes, rather than actual therapy provided, to classify patients into therapy categories increased more than 35 percent, becoming the mechanism for classifying nearly two-thirds of all patients in high, medium, and low rehabilitation categories. Of the patients who could be evaluated, one quarter of the patients classified using estimated minutes of therapy did not receive the amount of therapy they were assessed as needing, while three-quarters eventually did. SNFs increasingly performed initial patient assessments later in patient stays, during the grace days, for patients in the highest paying therapy categories—ultra high and very high. 
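The counting rule behind the estimated-minutes figures can be sketched with a few hypothetical records. The rule itself is the report's: an initial assessment used estimated minutes if the actual therapy reported falls below the minimum for the assigned category, and the estimate went unmet if the second assessment still shows less than that minimum. The records below are invented for illustration.

```python
# Each record: (actual therapy minutes at the initial assessment,
#               minimum minutes for the reported rehab category,
#               minutes actually received by the second assessment).
# The records are hypothetical; the counting rule is the report's.
records = [
    (300, 325, 340),   # classified by estimate; estimate eventually met
    (200, 325, 250),   # classified by estimate; estimate never met
    (360, 325, 380),   # classified on actual minutes provided
]

estimated = [r for r in records if r[0] < r[1]]   # estimate used at initial assessment
unmet = [r for r in estimated if r[2] < r[1]]     # never received the required amount

print(len(estimated), len(unmet))
```

Applied to real assessment data, the same two filters yield the shares reported above: nearly two-thirds of rehabilitation patients classified by estimate, of whom about one quarter did not receive the estimated therapy.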
Because classification into these categories is based on the actual amount of care provided, conducting the patient assessments during the grace days allows additional time for more therapy services to be provided, making it likelier that patients would be classified into the ultra high and very high categories. To classify patients into these categories, the use of grace days increased more than 40 percent from the first quarter of 1999 to the first quarter of 2001. In the 2 years following the implementation of PPS, SNFs provided less therapy to almost two-thirds of all Medicare SNF patients—those in the medium and high rehabilitation payment group categories. The typical patient in these categories received 22 percent less therapy, at least 30 fewer minutes, per week during the initial assessment period between the first quarters of 1999 and 2001. Indeed, in 2001 half of the patients initially categorized in these two groups did not actually receive the amount of therapy required to be classified into those groups, due in part to the use of estimated therapy minutes for classification (see table 3). Further, during their initial assessment period, fewer patients received therapy near the higher end of the range that defines each category. For example, to be assigned to the high rehabilitation category, patients are assessed as needing between 325 and 499 minutes of therapy a week. In 1999, 20 percent of patients in the high rehabilitation payment group category received 390 minutes or more of therapy per week during their initial assessment period. Two years later, less than 13 percent received this much therapy. In 1999, 5 percent of patients initially assessed in the high rehabilitation payment group category received 480 minutes or more of therapy per week. Two years later, only 2 percent of patients received this level of therapy. 
Across all therapy patients, the median amount of therapy provided during the initial assessment period also declined from 1999 through 2001. The declines in therapy service use and resultant reductions in costs were not uniform across the rehabilitation payment group categories. Consequently, payments for some categories of RUGs are likely to be higher than their service costs, compared to other categories of RUGs. For patients in the more intensive rehabilitation payment group categories, where estimated minutes cannot be used to classify patients, median therapy minutes did not decline. Our work indicates that SNFs have responded to PPS in two ways that may have affected how payments compare to SNF costs. SNFs have (1) changed their patient assessment practices and (2) reduced the amount of therapy services provided to Medicare beneficiaries. The first change can increase Medicare’s payments and the second can reduce a SNF’s costs. CMS’s ongoing efforts to refine the payment system are particularly important in light of these provider responses to the PPS. In its written comments on a draft of the report, CMS agreed that ongoing evaluations of PPS are important. CMS stated that our findings are generally consistent with its analyses and with its expectations regarding provider responses to the incentives of the PPS. CMS noted that it intends to examine whether therapy provided is consistent with payment levels and ADL coding accuracy through its program safeguard contractor project. CMS stated that reporting the percentage change of relatively small shares of patients across payment categories may overemphasize the changes and is somewhat misleading. However, the percentage changes reported in table 1 demonstrate that the shifts in shares of patients across payment categories are consistent with the industry’s assertions that high and medium categories have the most favorable payments, relative to costs. 
In addition, the percentage changes reported in table 2 demonstrate that the shifts among RUGs parallel the congressionally mandated payment increases. CMS also provided technical comments, which we incorporated as appropriate. CMS’s comments are in appendix III. We are sending copies of this report to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-7114. Laura Sutton Elsberg, Leslie Gordon, and Walter Ochinko prepared this report under the direction of Carol Carter.

We used data from the 1998 Medicare cost reports to identify SNFs that began participating in PPS on or before January 1, 1999. Facility ownership and other characteristics were taken from HCFA’s end-of-year Provider of Services file for 1999. We included in our analysis only those SNFs that had transitioned to PPS before or during January 1999, were active in 1999, and submitted Medicare MDS assessments in the three periods used in this study. This cohort comprised approximately 80 percent of all SNFs that filed a 1998 cost report and was representative of the universe of SNFs in terms of bed size, location (rural and urban), and ownership characteristics. For the SNFs in our sample, we analyzed data from the nursing home MDS national repository to compare differences in patient classification and therapy services across three points in time—early in PPS (January-March 1999), 1 year later (January-March 2000), and 2 years later (January-March 2001). Data to examine the distribution of Medicare patients after the implementation of BIPA-mandated changes (applied to services on or after April 1, 2001) were not available in time for this analysis. Our sample included over 350,000 MDS assessments for Medicare beneficiaries for each time period.
To examine the differences in patient classification, we grouped patient assessments into 11 major categories—the 5 major rehabilitation categories (ultra high, very high, high, medium, and low), 3 categories for patients requiring extensive or special care or who are clinically complex, and 3 categories for patients requiring custodial care, based on the RUG reported on the initial assessment. To examine the differences in the provision of therapy services, we aggregated the reported physical, occupational, and speech therapy minutes for each assessment. We calculated the number of initial assessments that had used estimated minutes to qualify patients into a rehabilitation category by counting the number of first assessments that reported actual therapy minutes below the minimum number of minutes required in the three rehabilitation categories (high, medium, and low). To determine the extent to which patients received the estimated therapies, we calculated, for the patients who had a second assessment, the percent who had received less than the minimum number of therapy minutes required for the RUG reported on the initial assessment. We also interviewed CMS staff responsible for SNF policy and we reviewed regulations, literature, and other documents relating to SNF PPS and MDS.

Appendix II: Therapy Minutes, Activities of Daily Living, and Medicare Payment Rates to SNFs

Patients are classified into the custodial categories according to their need for nursing services and assistance with ADLs. These patients typically do not meet the criteria for Medicare coverage because they generally do not require skilled nursing care.
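The screening step described above (counting initial assessments whose reported actual minutes fall below the minimum required for the rehabilitation category on the assessment) can be sketched as follows. This is a minimal illustration only: the minute thresholds and the data layout are assumptions made for the example, not the official RUG classification criteria.

```python
# Illustrative sketch of the estimated-minutes screening: flag initial
# assessments whose reported actual therapy minutes (physical, occupational,
# and speech combined) fall below the category minimum. The thresholds
# below are assumptions for illustration, not the official RUG criteria.
MIN_MINUTES = {"high": 325, "medium": 150, "low": 45}

def count_estimated_minute_assessments(assessments):
    """Count initial assessments that likely relied on estimated minutes.

    Each assessment is a dict with 'category' (high/medium/low) and
    'actual_minutes' (total reported therapy minutes).
    """
    count = 0
    for a in assessments:
        minimum = MIN_MINUTES.get(a["category"])
        if minimum is not None and a["actual_minutes"] < minimum:
            count += 1  # actual minutes below the category minimum
    return count

sample = [
    {"category": "high", "actual_minutes": 200},    # below 325: flagged
    {"category": "medium", "actual_minutes": 180},  # meets 150: not flagged
    {"category": "low", "actual_minutes": 30},      # below 45: flagged
]
print(count_estimated_minute_assessments(sample))  # -> 2
```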
|
In 1998, the Health Care Financing Administration implemented a prospective payment system (PPS) for skilled nursing facility (SNF) services provided to Medicare beneficiaries. PPS is intended to control the growth in Medicare spending for skilled nursing and rehabilitative services that SNFs provide. Two years after the implementation of PPS, the mix of patients across the categories of payment groups has shifted, as determined by the patients' initial minimum data set assessments. Although the overall share of patients classified into rehabilitation payment group categories based on their initial assessments remained about the same, more patients were classified into the high and medium rehabilitation payment group categories, and fewer were initially classified into the most intensive (highest paying) and least intensive (lowest paying) rehabilitation payment group categories. Two years after PPS was implemented, the majority of patients in rehabilitation payment groups received less therapy than was provided in 1999. This was true even for patients within the same rehabilitation payment group categories. Across all rehabilitation payment group categories, fewer patients received the highest amounts of therapy associated with each payment group.
|
VA pays monthly disability compensation benefits to veterans with service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty) according to the severity of the disability. VA also pays compensation to some spouses, children, and parents of deceased veterans and servicemembers. VA’s pension program pays monthly benefits based on financial need to certain wartime veterans or their survivors. When a veteran submits a claim to any of the Veterans Benefits Administration’s (VBA) 57 regional offices, a veterans service representative is responsible for obtaining the relevant evidence to evaluate the claim. Such evidence includes veterans’ military service records, medical examinations, and treatment records from VA medical facilities and private medical service providers. Once a claim has all the necessary evidence, a rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If the veteran is eligible for disability compensation, the rating specialist assigns a percentage rating based on degree of disability. A veteran who disagrees with the regional office’s decision can appeal to VA’s Board of Veterans’ Appeals, and then to U.S. federal courts. If the Board finds that a case needs additional work, such as obtaining additional evidence, or contains procedural errors, it is sent back to the Veterans Benefits Administration, which is responsible for initial decisions on disability claims. In November 2003, the Congress established the Veterans’ Disability Benefits Commission to study the appropriateness of VA disability benefits, including disability criteria and benefit levels. The commission is scheduled to report the results of its study to the Congress in October 2007. Several factors are continuing to create challenges for VA’s claims processing, despite its steps to improve performance. 
While VA made progress in fiscal years 2002 and 2003 reducing the size and age of its pending claims inventory, it has lost ground since then. This is due in part to increased filing of claims, including those filed by veterans of the Iraq and Afghanistan conflicts. Other factors include increases in claims complexity, the effects of recent laws and court decisions, and challenges in acquiring needed evidence in a timely manner. VA’s steps to improve performance include requesting funding for additional staff and undertaking initiatives to reduce appeal remands. VA’s inventory of pending claims and their average time pending have increased significantly in the last 3 years, in part because of an increase in the number of claims. The number of pending claims increased by almost one-half from the end of fiscal year 2003 to the end of fiscal year 2006, from about 254,000 to about 378,000. During the same period, the number of claims pending longer than 6 months increased by more than three-fourths, from about 47,000 to about 83,000 (see fig. 1). Similarly, as shown in figure 2, VA reduced the average age of its pending claims from 182 days at the end of fiscal year 2001 to 111 days at the end of fiscal year 2003. However, by the end of fiscal year 2006, average days pending had increased to 127 days. Meanwhile, the time required to resolve appeals remains too long. The average time to resolve an appeal rose from 529 days in fiscal year 2004 to 657 days in fiscal year 2006. The increase in VA’s inventory of pending claims and their average time pending is due in part to an increase in claims receipts. Rating-related claims, including those filed by veterans of the Iraq and Afghanistan conflicts, increased steadily from about 579,000 in fiscal year 2000 to about 806,000 in fiscal year 2006, an increase of about 39 percent. 
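As a rough check, the growth rates cited above follow from simple percent-change arithmetic on the approximate counts reported in the text:

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

# Approximate counts reported in the text.
pending = pct_change(254_000, 378_000)   # pending claims, end of FY2003 -> FY2006
aged = pct_change(47_000, 83_000)        # claims pending longer than 6 months
receipts = pct_change(579_000, 806_000)  # rating-related claims, FY2000 -> FY2006

# ~49% ("almost one-half"), ~77% ("more than three-fourths"), ~39%
print(round(pending), round(aged), round(receipts))  # -> 49 77 39
```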
While VA projects relatively flat claim receipts in fiscal years 2007 and 2008, it cautions that ongoing hostilities in Iraq and Afghanistan, and the Global War on Terrorism in general, may increase the workload beyond current levels. VA also attributes increased claims to its efforts to increase outreach to veterans and servicemembers. For example, VA reports that in fiscal year 2006, it provided benefits briefings to about 393,000 separating servicemembers, up from about 210,000 in fiscal year 2003, leading to the filing of more original compensation claims. VA has also noted that claims have increased in part because older veterans are filing disability claims for the first time. Moreover, according to VA, the complexity of claims is also increasing. For example, some veterans are citing more disabilities in their claims than in the past. Because each disability needs to be evaluated separately, these claims can take longer to complete. Additionally, VA notes that it is receiving claims for new and complex disabilities related to combat and deployments overseas, including those based on environmental and infectious disease risks and traumatic brain injuries. Further, VA is receiving increasing numbers of claims for compensation for post-traumatic stress disorder, which are generally harder to evaluate, in part because of the evidentiary requirements to substantiate the event causing the stress disorder. Since 1999, several court decisions and laws related to VA’s responsibilities to assist veterans in developing their benefit claims have significantly affected VA’s ability to process claims in a timely manner. VA attributes some of the increase in the number of claims pending and the average days pending to a September 2003 court decision that required over 62,000 claims to be deferred, many for 90 days or longer. Also, VA notes that legislation and VA regulations have expanded benefit entitlement and added to the volume of claims. 
For example, in recent years, laws and regulations have created new presumptions of service-connected disabilities for many Vietnam veterans and former prisoners of war. Also, VA expects additional claims receipts based on the enactment of legislation allowing certain military retirees to receive both military retirement pay and VA disability compensation. Additionally, claims-processing timeliness can be hampered if VA cannot obtain the evidence it needs in a timely manner. For example, to obtain information needed to fully develop some post-traumatic stress disorder claims, VBA must obtain records from the U.S. Army and Joint Services Records Research Center (JSRRC), whose average response time to VBA regional office requests is about 1 year. This can significantly increase the time it takes to decide a claim. In December 2006, we recommended that VBA assess whether it could systematically utilize an electronic library of historical military records rather than submitting all research requests to JSRRC. VBA agreed to determine the feasibility of regional offices using an alternative resource prior to sending some requests to JSRRC. VA has recently taken several steps to improve claims processing. In its fiscal year 2008 budget justification, VA identified an increase in claims-processing staff as essential to reducing the pending claims inventory and improving timeliness. According to VA, with a workforce that is sufficiently large and correctly balanced, it can successfully meet the veterans’ needs while ensuring good stewardship of taxpayer funds. The fiscal year 2008 request would fund 8,320 full-time equivalent employees working on compensation and pension, which would represent an increase of about 6 percent over fiscal year 2006. In addition, the budget justification cites near-term initiatives to increase the number of claims completed, such as using retired VA employees to provide training and the increased use of overtime. 
Even as staffing levels increase, however, VA acknowledges that it still must take other actions to improve productivity. VA’s budget justification provides information on actual and planned productivity, in terms of claims decided per full-time equivalent employee. While VA expects a temporary decline in productivity as new staff are trained and become more experienced, it expects productivity to increase in the longer term. Also, VA has identified additional initiatives to help improve productivity. For example, VA plans to pilot paperless Benefits Delivery at Discharge, where servicemembers’ disability claim applications, service medical records, and other evidence would be captured electronically prior to discharge. VA expects that this new process will reduce the time needed to obtain the evidence needed to decide claims. To resolve appeals faster, VA has been working to reduce the number of appeals sent back by the Board of Veterans’ Appeals for further work such as obtaining additional evidence and correcting procedural errors. To do so, VA has established joint training and information sharing between field staff and the Board. VA reports that it has reduced the percentage of decisions remanded from about 57 percent in fiscal year 2004 to about 32 percent in fiscal year 2006, and expects its efforts to lead to further reductions. Also, VA reports that it has improved the productivity of the Board’s judges from an average of 604 appeals decided in fiscal year 2003 to 698 in fiscal year 2006. The Board attributes this improvement to training and mentoring programs and expects productivity to improve to 752 decisions in fiscal year 2008. While VA is taking actions to address its claims-processing challenges, there are opportunities for more fundamental reform that could dramatically improve decision making and processing. These include reexamining program design, as well as the structure and division of labor among field offices. 
After more than a decade of research, we have determined that federal disability programs are in urgent need of attention and transformation, and we placed modernizing federal disability programs on our high-risk list in January 2003. Specifically, our research showed that the disability programs administered by VA and the Social Security Administration (SSA) lagged behind the scientific advances and economic and social changes that have redefined the relationship between impairments and work. For example, advances in medicine and technology have reduced the severity of some medical conditions and have allowed individuals to live with greater independence and function in work settings. Moreover, the nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment. Yet VA’s and SSA’s disability programs remain mired in concepts from the past, particularly the concept that impairment equates to an inability to work. Because of this, and because of continuing program administration problems, such as lengthy claims-processing times, we found that these programs are poorly positioned to provide meaningful and timely support for Americans with disabilities. In August 2002, we recommended that VA use its annual performance plan to delineate strategies for and progress in periodically updating labor market data used in its disability determination process. We also recommended that VA study and report to the Congress on the effects that a comprehensive consideration of medical treatment and assistive technologies would have on its disability programs’ eligibility criteria and benefits package. This study would include estimates of the effects on the size, cost, and management of VA’s disability programs and other relevant VA programs and would identify any legislative actions needed to initiate and fund such changes. 
In addition to program design, VA’s regional office claims processing structure may be disadvantageous to efficient operations. VBA and others who have studied claims processing have suggested that consolidating claims processing into fewer regional offices could help improve claims-processing efficiency and save overhead costs. We noted in December 2005 that VA had made piecemeal changes to its claims-processing field structure. VA consolidated decisionmaking on Benefits Delivery at Discharge claims, which are generally original claims for disability compensation, at the Salt Lake City and Winston-Salem regional offices. VA also consolidated in-service dependency and indemnity compensation claims at the Philadelphia regional office. These claims are filed by survivors of servicemembers who die while in military service. VA consolidated these claims as part of its efforts to provide expedited service to these survivors, including servicemembers who died in Operations Iraqi Freedom and Enduring Freedom. However, VA has not changed its basic field structure for processing compensation and pension claims at 57 regional offices, which experience large performance variations. Unless more comprehensive and strategic changes are made to its field structure, VBA is likely to miss opportunities to substantially improve productivity, especially in the face of future workload increases. We have recommended that VA undertake a comprehensive review of its field structure for processing disability compensation and pension claims. While reexamining claims-processing challenges may be daunting, there are mechanisms for undertaking such an effort, including the congressionally chartered commission currently studying veterans’ disability benefits. In November 2003, the Congress established the Veterans’ Disability Benefits Commission to study the appropriateness of VA disability benefits, including disability criteria and benefit levels. 
The commission is to examine and provide recommendations on (1) the appropriateness of the benefits, (2) the appropriateness of the benefit amounts, and (3) the appropriate standard or standards for determining whether a disability or death of a veteran should be compensated. The commission held its first public hearing in May 2005, and in October 2005, the commission established 31 research questions for study. These questions address such issues as how well disability benefits meet the congressional intent of replacing average impairment in earnings capacity, and how VA’s claims-processing operation compares to other disability programs, including the location and number of processing centers. These issues and others have been raised by previous studies of VBA’s disability claims process. The commission is scheduled to report to the Congress by October 1, 2007. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information, please contact Daniel Bertoni at (202) 512-7215. Also contributing to this statement were Shelia Drake, Martin Scire, Greg Whitney, and Charles Willson. Veterans’ Disability Benefits: Long-Standing Claims Processing Problems Persist. GAO-07-512T. Washington, D.C.: March 7, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 31, 2007. Veterans’ Disability Benefits: VA Can Improve Its Procedures for Obtaining Military Service Records. GAO-07-98. Washington, D.C.: December 12, 2006. Veterans’ Benefits: Further Changes in VBA’s Field Office Structure Could Help Improve Disability Claims Processing. GAO-06-149. Washington, D.C.: December 9, 2005. Veterans’ Disability Benefits: Claims Processing Challenges and Opportunities for Improvements. GAO-06-283T. Washington, D.C.: December 7, 2005. Veterans’ Disability Benefits: Improved Transparency Needed to Facilitate Oversight of VBA’s Compensation and Pension Staffing Levels. 
GAO-06-225T. Washington, D.C.: November 3, 2005. VA Benefits: Other Programs May Provide Lessons for Improving Individual Unemployability Assessments. GAO-06-207T. Washington, D.C.: October 27, 2005. Veterans’ Disability Benefits: Claims Processing Problems Persist and Major Performance Improvements May Be Difficult. GAO-05-749T. Washington, D.C.: May 26, 2005. VA Disability Benefits: Board of Veterans’ Appeals Has Made Improvements in Quality Assurance, but Challenges Remain for VA in Assuring Consistency. GAO-05-655T. Washington, D.C.: May 5, 2005. Veterans Benefits: VA Needs Plan for Assessing Consistency of Decisions. GAO-05-99. Washington, D.C.: November 19, 2004. Veterans’ Benefits: More Transparency Needed to Improve Oversight of VBA’s Compensation and Pension Staffing Levels. GAO-05-47. Washington, D.C.: November 15, 2004. Veterans’ Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003. Department of Veterans Affairs: Key Management Challenges in Health and Disability Programs. GAO-03-756T. Washington, D.C.: May 8, 2003. Veterans Benefits Administration: Better Collection and Analysis of Attrition Data Needed to Enhance Workforce Planning. GAO-03-491. Washington, D.C.: April 28, 2003. Veterans’ Benefits: Claims Processing Timeliness Performance Measures Could Be Improved. GAO-03-282. Washington, D.C.: December 19, 2002. Veterans’ Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002. Veterans’ Benefits: VBA’s Efforts to Implement the Veterans Claims Assistance Act Need Further Monitoring. GAO-02-412. Washington, D.C.: July 1, 2002. Veterans’ Benefits: Despite Recent Improvements, Meeting Claims Processing Goals Will Be Challenging. GAO-02-645T. Washington, D.C.: April 26, 2002. 
Veterans Benefits Administration: Problems and Challenges Facing Disability Claims Processing. GAO/T-HEHS/AIMD-00-146. Washington, D.C.: May 18, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The Subcommittee on Disability Assistance and Memorial Affairs, House Veterans' Affairs Committee, asked GAO to discuss its recent work related to the Department of Veterans Affairs' (VA) disability claims and appeals processing. GAO has reported and testified on this subject on numerous occasions. GAO's work has addressed VA's efforts to improve the timeliness of decisions on claims and appeals and VA's efforts to reduce backlogs. VA continues to face challenges in improving service delivery to veterans, specifically speeding up the process of adjudication and appeal, and reducing the existing backlog of claims. For example, as of the end of fiscal year 2006, rating-related compensation claims were pending an average of 127 days, 16 days more than at the end of fiscal year 2003. During the same period, the inventory of rating-related claims grew by almost half, in part because of increased filing of claims, including those filed by veterans of the Iraq and Afghanistan conflicts. Meanwhile, appeals resolution remains a lengthy process, taking an average of 657 days in fiscal year 2006. However, several factors may limit VA's ability to make and sustain significant improvements in its claims-processing performance, including the potential impacts of laws and court decisions, continued increases in the number and complexity of claims being filed, and difficulties in obtaining the evidence needed to decide claims in a timely manner, such as military service records. VA is taking steps to address these problems. For example, the President's fiscal year 2008 budget requests an increase of over 450 full-time equivalent employees to process compensation claims. VA is also working to improve appeals timeliness by reducing appeals remanded for further work. While VA is taking actions to address its claims-processing challenges, opportunities for significant performance improvement may lie in more fundamental reform of VA's disability compensation program. 
This could include reexamining program design such as updating the disability criteria to reflect the current state of science, medicine, technology, and labor market conditions. It could also include examining the structure and division of labor among field offices.
|
In general, the Coast Guard’s COP can be described as an information display that provides the position and additional information on vessel and aircraft contacts (called tracks) to the Coast Guard and other decision makers. The Coast Guard’s concept for the COP includes a complex interplay of data, assets, technology, and multiple organizations at multiple security levels helping to populate and share information within the COP. As shown in figure 1, entities outside of the Coast Guard are also integral parts of the COP. According to the Coast Guard, the COP comprises four elements: (1) track data feeds, (2) information data sources, (3) command and control systems, and (4) COP management procedures. Track data feeds. The primary information included in the Coast Guard’s COP is vessel and aircraft position information—or tracks— and descriptive information about the vessels, their cargo, and crew. Track information may be obtained from a variety of sources depending on the type of track. For example, the COP includes automatic identification system (AIS) tracks, as well as fishing vessel tracks from the National Oceanic and Atmospheric Administration’s Vessel Monitoring System. The COP also includes track information or position reports of Coast Guard and port partner vessels. The Coast Guard receives aircraft location information from Customs and Border Protection’s Air and Marine Operations Center. In addition to vessel-related information, the COP also includes information and data that can be geographically referenced, such as the boundary lines of Coast Guard units’ areas of responsibility or U.S. territorial waters, and weather information, among other things. See figure 2 for an example of vessel tracks on a COP display. Information data sources. The information data sources provide supplementary information on the vessel tracks to help COP users and operational commanders determine why a track might be important. 
The COP includes data from multiple information sources that originate from the Coast Guard as well as from other government agencies and civilian sources. Internal sources include intelligence inputs and Coast Guard databases such as the Marine Information for Safety and Law Enforcement (MISLE) and the Ship Arrival Notification System, among others. External information sources include the Department of Defense, Joint Interagency Task Force South, and the National Oceanic and Atmospheric Administration. All of these information sources are fused with, or overlaid on, the track information to provide more complete information to COP users about the nature of the identified tracks. Command and control systems. These are the systems used to collect, fuse, disseminate, and store information for the COP. Since the COP became operational in 2003, the Coast Guard has provided COP users with various systems that have allowed them to view, manipulate, and enhance their use of the COP. Among these systems have been the Global Command and Control System (GCCS), Command and Control Personal Computer (C2PC), and Hawkeye. See appendix I for additional information on the various systems and applications that COP users identified as providing access to COP information. In addition to the technology needed to view the COP, the Coast Guard has also developed technology to further enhance the information within the COP and its use to improve mission effectiveness. This has occurred in part through its former Deepwater Program C4ISR system improvements. This technology acquisition was intended to create an interoperable network of sensors, computer systems, and hardware to improve MDA. 
Specifically, C4ISR was designed to allow the Coast Guard’s new vessels and aircraft, acquired under the Deepwater program, to both add information to the COP using their own sensors as well as view information contained within the COP, thereby allowing these assets to become both producers and consumers of COP information. In July 2011, we reported that the Coast Guard was developing C4ISR infrastructure that it expected to collect, correlate, and present information into a single COP to facilitate mission execution. Similarly, as we reported in February 2012, the WatchKeeper software that was developed as part of the DHS Interagency Operations Center program was intended to increase the information available to the COP by having port partners add information from their databases while increasing the port partners’ access to Coast Guard information. Coast Guard Sectors were expected to give their port partners access to the software, which was to act as a two-way conduit for information sharing. At that time we reported that the Coast Guard had been installing the software in all 35 of its Sector locations. COP management procedures. These procedures address the development and the use of the COP. This would include, for example, the Concept of Operations document, which identifies the basic components, use, and exchange of information that are included in the COP. It would also include the requirements document, which identifies the essential capabilities and associated requirements needed to make the COP function. It also includes other documents such as standard operating procedures on how the Coast Guard uses the COP, agreements with others using the Coast Guard COP on how information is to be shared or exchanged, and the rules for how data are correlated and also how vessels are flagged as threats or friends. 
The Coast Guard relies on GIS, which is an integrated collection of computer software and data used to view and manage information about geographic places, analyze spatial relationships, and model spatial processes, in order to share information related to the people, vessels, and facilities in a mapped display. GIS allows Coast Guard personnel to easily see the different aspects of ongoing events on a map, and, if necessary, deploy Coast Guard assets to address the issue. As a result, Coast Guard-wide GIS is an important capability for enabling Coast Guard personnel to view and analyze COP information. The Coast Guard’s first agency-wide GIS vehicle was to add a GIS viewer to the existing MISLE database. This later evolved into what the Coast Guard refers to as Enterprise GIS, or EGIS. These GIS-based applications with their incorporated viewers are not limited to viewing the COP, but can also be used to receive, correlate, and analyze a variety of information from multiple sources to provide situational awareness. For example, CG1V is being developed to allow Coast Guard personnel to have a single viewer to interface with COP information and other Coast Guard GIS databases, as seen in figure 3. In 2004, the Coast Guard implemented the SDLC process for non-major IT acquisitions—those with less than $300 million in life cycle costs—to help ensure IT projects are managed effectively and meet user needs. The Coast Guard’s SDLC process is documented in the U.S. Coast Guard SDLC Practice Manual. According to the Practice Manual, the SDLC process provides a consistent framework for IT project management and risk evaluation to help ensure systems are developed and maintained on time and within budget, and that they deliver the capabilities necessary to meet user requirements. 
In addition, the SDLC process provides guidance describing the actions necessary, including event sequences, to ensure compliance with Coast Guard-wide policies, the Office of Management and Budget Circular No. A-130, Management of Federal Information Resources, and DHS Acquisition Directive 102-01 (AD 102-01). The SDLC has seven major phases, beginning with the Conceptual Planning phase and ending with the Disposition phase, as seen in figure 4. Figure 4 summarizes the activities that must be completed within each phase. According to the SDLC manual, to proceed from one SDLC phase to the subsequent phase, activities and products from each phase must be completed, reviewed, and approved by the designated authority. Each project is managed by an Integrated Project Team that includes representatives from the CIO’s office and representatives of the Coast Guard headquarters directorate responsible for the mission. The project team works with customers, users, and stakeholders to deliver successful and supportable IT systems. The CIO is responsible for designating projects into the SDLC and the Asset Manager within the CIO’s office is tasked with guiding, overseeing, and monitoring the execution of SDLC for the assigned system to ensure alignment and compliance with the SDLC process. Another role under the SDLC is the sponsor, who defines and validates functional requirements and accepts capability needed to support Coast Guard mission or business performance. We have previously reported on challenges the Coast Guard has experienced in meeting goals of COP-related systems, such as C4ISR and WatchKeeper. Some of the shortcomings with these technology systems have included the inability to share information as intended. In July 2011, we reported that the Coast Guard had not met its goal of building a single C4ISR system—intended to enable the sharing of COP and other data among its offshore vessels and aircraft. 
Specifically, we noted that the Coast Guard repeatedly changed its strategy for achieving the goal of its $2.5 billion C4ISR project, which was to build a single, fully interoperable command, control, communications, computers, intelligence, surveillance, and reconnaissance system across the Coast Guard's Deepwater vessels and aircraft. We found that not all aircraft and vessels were operating the same C4ISR system, or even at the same classification level, and hence could not directly exchange data with each other. For example, sharing information gathered by an aircraft operating with a classified system was difficult during the Deepwater Horizon oil spill incident. In addition, we reported that the Coast Guard may shift away from a full data-sharing capability, and instead, use a system where shore-based command centers could be a conduit between assets while also entering data from assets into the COP. This could increase the time it takes for COP information gathered by a vessel operating with a classified system to be shared with an aircraft operating with an unclassified system. Because aircraft and vessels are important contributors to and users of COP information, a limited capability to quickly and fully share COP data may affect their mission effectiveness. We concluded that given these uncertainties, the Coast Guard did not have a clear vision of the C4ISR required to meet its missions. We also reported in July 2011 that the Coast Guard was managing the C4ISR program without key acquisition documents. At that time, the Coast Guard lacked the following key documents: an acquisition program baseline that reflected the planned program, a credible life-cycle cost estimate, and an operational requirements document for the entire C4ISR acquisition project. According to Coast Guard information technology officials, the abundance of software baselines could increase the overall instability of the C4ISR system and the complexity of data sharing between assets.
We recommended, and the Coast Guard concurred, that it should determine whether the system-of-systems concept for C4ISR is still the planned vision for the program, and if not, ensure that the new vision is comprehensively detailed in the project documentation. In response to our recommendation, the Coast Guard reported in 2012 that it was still supporting the system-of-systems approach and was developing needed documentation. The agency also reported that it planned to install a communication system on air and vessel assets to provide for interoperability and direct communication.

One mechanism expected to increase access to COP information was the DHS Interagency Operations Center (IOC) program, which was delegated to the Coast Guard for development. This program began providing COP information to Coast Guard agency partners in 2010. Using WatchKeeper software, IOCs were originally designed to gather data from sensors and port partner sources to provide situational awareness to Coast Guard sector personnel and to Coast Guard partners in state and local law enforcement and port operations, among others. WatchKeeper was designed to provide Coast Guard personnel and port partners with access to the same unclassified GIS data, thereby improving collaboration between them. Making this information available to port partners has also allowed the Coast Guard to leverage the capabilities of its partners in responding to cases. For example, in responding to a distress call, if both the Coast Guard unit and its local port partners know the location of all possible response vessels, they can allocate resources and develop search patterns that make the best use of each responding vessel. In February 2012, we reported that the Coast Guard had increased access to its WatchKeeper software by allowing access to the system for Coast Guard port partners; however, the Coast Guard had limited success in improving information sharing between the Coast Guard and local port partners.
We found that the Coast Guard did not follow established guidance during the development of WatchKeeper—a major component of the $74 million Interagency Operations Center acquisition project—by, in part, failing to determine the needs of its users, define acquisition requirements, or determine cost and schedule information. Prior to the initial deployment of WatchKeeper, the Coast Guard made only limited efforts to determine port partner needs for the system. We found that Coast Guard officials had only some high-level discussions, primarily with other DHS partners. Port partner involvement in the development of WatchKeeper requirements was primarily limited to Customs and Border Protection because WatchKeeper grew out of a system designed for screening commercial vessel arrivals—a Customs and Border Protection mission. However, according to the Interagency Operations Process Report: Mapping Process to Requirements for Interagency Operations Centers, the Coast Guard identified many port partners as critical to IOCs, including other federal agencies (e.g., the Federal Bureau of Investigation) and state and local agencies. We also determined that because few port partners' needs were met with WatchKeeper, use of the system by port partners was limited. Specifically, of the 233 port partners who had access to WatchKeeper for any part of September 2011 (the most recent month for which data were available at the time of our report), about 18 percent had ever logged onto the system and about 3 percent had logged on more than five times. Additionally, we reported that without implementing a documented process to obtain and incorporate port partner feedback into the development of future WatchKeeper requirements, the Coast Guard was at risk of deploying a system that lacked needed capabilities, and that would continue to limit the ability of port partners to share information and coordinate in the maritime environment.
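Usage shares of the kind reported above can be derived from a simple per-user login tally. The sketch below is purely illustrative: the login counts are invented so that the made-up sample of 233 users reproduces the reported shares.

```python
# Illustrative sketch: given one login count per authorized user for the
# month, compute the share who ever logged on and the share who logged on
# more than five times. All counts below are invented for illustration.
logins = [0] * 191 + [1, 2, 3, 3, 4] * 7 + [6, 9, 12, 20, 7, 8, 11]
# 191 + 35 + 7 = 233 authorized users in this made-up sample

def usage_shares(login_counts, threshold=5):
    """Return (share who ever logged on, share above the login threshold)."""
    n = len(login_counts)
    ever = sum(1 for c in login_counts if c > 0)
    frequent = sum(1 for c in login_counts if c > threshold)
    return ever / n, frequent / n

ever, frequent = usage_shares(logins)
print(f"ever logged on: {ever:.0%}; more than 5 logins: {frequent:.0%}")
# prints "ever logged on: 18%; more than 5 logins: 3%"
```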
We concluded, in part, that the weak management of the $74 million IOC acquisition project increased the program's exposure to risk. In particular, fundamental requirements-development and management practices had not been employed; costs were unclear; and the project's schedule, which was to guide program execution and promote accountability, had not been reliably derived. Moreover, we reported that with stronger program management, the Coast Guard could reduce the risk that it would have a system that did not meet Coast Guard and port-partner user needs and expectations. As a result, we recommended, and the Coast Guard concurred, that it should collect data to determine the extent to which (1) sectors are providing port partners with WatchKeeper access and (2) port partners are using WatchKeeper; then develop, document, and implement a process to obtain and incorporate port-partner input into the development of future WatchKeeper requirements, and define, document, and prioritize WatchKeeper requirements. As of April 2013, we have not received any reports of progress on these recommendations from the Coast Guard.

The Coast Guard has made some progress in increasing the amount and type of information included in the COP, and has increased the number of users with access to that information. However, it has faced challenges in implementing some COP-related systems.

Since the COP became operational in 2003, the Coast Guard has made progress in adding useful data sources and in increasing the number of users with access to the COP. In general, the COP has added multiple sources and types of vessel-tracking information that enhance COP users' knowledge of the maritime domain. While vessel tracking information had been available previously to Coast Guard field units located in ports with a Vessel Tracking Service, adding it to the COP provided a broader base of situational awareness for Coast Guard operational commanders.
For example, before AIS vessel-tracking information was added to the COP, only Coast Guard units specifically responsible for vessel tracking were able to easily track large commercial vessels' positions, speeds, courses, and destinations. According to Coast Guard personnel, after AIS data were added to the COP in 2003, any Coast Guard unit could access such information to improve strategic and tactical decision making. In 2006, the ability to track the location of Coast Guard assets, including small boats and cutters, was also added to the COP. This capability—also known as blue force tracking—allows COP users to locate Coast Guard vessels in real time and establish which vessels are in the best position to respond to mission needs. Similarly, blue force tracking allows the Coast Guard to differentiate its own vessels from commercial or unfriendly vessels. Figure 5 shows examples of data sources added to the COP since 2003.

Another enhancement to the information available in the COP was provided through the updating of certain equipment on Coast Guard assets to enable them to collect and transmit data. Specifically, the Coast Guard made some data collection and sharing improvements, including the installation of commercial satellite communications equipment and AIS receivers onboard its older cutters. This added capability made the COP information more robust by allowing Coast Guard vessels at sea to receive, through AIS receivers, position reports from large commercial vessels and then transmit this information to land units, where it would be entered into the COP. This equipment upgrade on older Coast Guard cutters added information into the COP that is generally not available through other means.
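Two of the ideas described above, tagging each track's affiliation so blue force assets stand out from other traffic, and identifying which asset is best positioned to respond, can be sketched in a few lines. This is an illustrative assumption, not Coast Guard code: every identifier and position below is invented, and real AIS reports carry far richer fields (MMSI, speed, course, destination, and so on).

```python
import math

# Invented track feeds: (lat, lon) in decimal degrees.
ais_feed = [
    {"id": "366999001", "lat": 41.20, "lon": -70.30},  # commercial traffic
    {"id": "366999002", "lat": 41.00, "lon": -70.90},
]
blue_force_feed = [
    {"id": "CG-25123", "lat": 41.52, "lon": -70.67},   # Coast Guard assets
    {"id": "CG-47210", "lat": 41.30, "lon": -70.10},
]

def build_tracks(ais, blue):
    """Merge the feeds into one list, tagging each track's affiliation."""
    return ([dict(t, affiliation="other") for t in ais]
            + [dict(t, affiliation="blue_force") for t in blue])

def distance_nm(a, b):
    """Great-circle distance in nautical miles between two tracks."""
    la1, lo1, la2, lo2 = map(math.radians,
                             (a["lat"], a["lon"], b["lat"], b["lon"]))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(h))  # Earth radius in nm

def best_responder(tracks, incident):
    """Return the blue force track closest to an incident position."""
    blue = [t for t in tracks if t["affiliation"] == "blue_force"]
    return min(blue, key=lambda t: distance_nm(t, incident))

tracks = build_tracks(ais_feed, blue_force_feed)
print(best_responder(tracks, {"lat": 41.40, "lon": -70.50})["id"])
# prints "CG-25123", the nearest blue force asset in this invented scenario
```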
According to Coast Guard officials, in addition to adding information to the COP, the Coast Guard has also made the information contained in the COP available on more computers and on more systems, which, in turn, has increased the number of users with access to the COP both within and outside the agency. One of the key steps toward increasing the number of users with COP access occurred in 2004 with the implementation of C2PC, which made both the classified and unclassified COP available to additional Coast Guard personnel. According to Coast Guard officials, the advent of C2PC allowed access to the COP from any Coast Guard computer connected to the Coast Guard data network. Prior to C2PC, Coast Guard personnel had access to the COP through Coast Guard GCCS workstations.

The Coast Guard has experienced multiple challenges in meeting its goals for multiple COP-related systems. Some of these challenges were identified by users and some were identified by Coast Guard IT management. The challenges related to such things as poor usability, degradation of computer performance, and the inability to share information as intended, and they have affected the Coast Guard's deployment of recent technology acquisitions.

Coast Guard personnel we interviewed who use EGIS stated they experienced numerous challenges with EGIS—an important component, with its associated viewer, for accessing COP information—after it was implemented in 2009. Our site visits to area, district, and sector command centers in six Coast Guard field locations and discussions with headquarters personnel identified numerous examples of user concerns about EGIS. Specifically, the Coast Guard EGIS users we interviewed stated that EGIS was slow, did not always display accurate and timely information, or degraded the performance of their computer workstations—making EGIS's performance generally unsatisfactory for them.
For example, personnel from one district we visited reported losing critical time when attempting to determine a boater's position on a map display because of EGIS's slow performance. Similarly, personnel at three of the five districts we visited described how EGIS sometimes displayed inaccurate or delayed vessel location information, including, for example, displaying a vessel track indicating a 25-foot Coast Guard boat was located off the coast of Greenland—a location where no such vessel had ever been. Personnel we met with in two districts did not use EGIS at all to display COP information because doing so caused other applications to crash. The problems that we witnessed firsthand or that were described to us by Coast Guard personnel were validated by data from the Coast Guard's EGIS-related help desk tickets, which summarize problems with EGIS, among other things, for Coast Guard IT staff. For example, our examination of the fiscal year 2011 help desk tickets indicated that users reported several types of problems with EGIS, including problems related to performance, loss of capabilities, and data error notification, among other issues. In one district, limitations users encountered with EGIS caused that district to request permission from Coast Guard headquarters to use an alternative system because EGIS's poor performance affected the ability of district personnel to monitor blue force tracking. However, personnel responsible for managing EGIS development at Coast Guard headquarters told us that EGIS was never intended to be able to display blue force tracking—which is likely why users were experiencing difficulty using it for this purpose. They also recognized that the lack of user training on EGIS's capabilities likely contributed to this misunderstanding.
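The kind of help desk ticket summary described above amounts to a simple tally by problem category. In the sketch below, the ticket records are invented; only the category names follow those reported.

```python
from collections import Counter

# Invented help desk tickets; category labels mirror those in the report.
tickets = [
    {"id": 1, "category": "performance", "text": "EGIS viewer very slow"},
    {"id": 2, "category": "loss of capabilities", "text": "overlay missing"},
    {"id": 3, "category": "performance", "text": "workstation freezes"},
    {"id": 4, "category": "data error notification", "text": "bad track data"},
]

def summarize(tickets):
    """Count tickets per reported problem category."""
    return Counter(t["category"] for t in tickets)

for category, count in summarize(tickets).most_common():
    print(f"{category}: {count}")
```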
Coast Guard IT officials told us they experienced challenges largely related to insufficient computational power on some Coast Guard workstations, a lack of training for users and system installers, and inadequate testing of EGIS software before installation. According to Coast Guard IT officials, Coast Guard computers are replaced on a regular schedule, but not all at once, and EGIS's viewer places a high demand on the graphics capabilities of computers. They added that this demand was beyond the capability of the older Coast Guard computers used in some locations. Moreover, Coast Guard IT management made EGIS available to all potential users without performing the tests needed to determine if capability challenges would ensue. When EGIS was installed on these older computers, performance suffered. In regard to training, Coast Guard officials told us that they had developed online internal training for EGIS, and classroom training was also available from the software supplier. Coast Guard IT officials said, however, that they did not inform users that this training was available. This left users to learn how to use EGIS on the job. Similarly, the installers of EGIS software were not trained properly, and many cases of incomplete installation were later discovered. These incomplete installations significantly degraded the capabilities of EGIS. Finally, the Coast Guard did not test the demands of EGIS on Coast Guard systems in real-world conditions, according to Coast Guard officials. Only later, after users commented on their problems using EGIS, did the Coast Guard perform the tests that demonstrated the limitations of the Coast Guard network in handling EGIS. According to Coast Guard officials, some of these challenges might have been avoided if they had followed the SDLC process for IT development.
Specifically, they said that if they had completed three required planning documents (an implementation plan, a training plan, and a Test and Evaluation Master Plan) and conducted the associated activities outlined by these planning documents, the agency could have avoided the management challenges it experienced after EGIS's deployment. If these problems had been averted, users might have been more satisfied and the system might have been better used for Coast Guard mission needs.

Poor communication by, and among, Coast Guard IT officials led to additional management challenges during efforts to implement a simplified EGIS technology called EGIS Silverlight. According to Coast Guard officials, the Coast Guard implemented EGIS Silverlight to give users access to EGIS data without the analysis tools that had been tied to technical challenges with the existing EGIS software. Coast Guard CIO office personnel stated that EGIS Silverlight was available to users in 2010; however, none of the Coast Guard personnel we spoke with at the field units we visited mentioned awareness of or use of this alternative EGIS option when asked what systems they used to access the COP. According to Coast Guard CIO office personnel, it was the responsibility of the sponsor's office to notify users about the availability of EGIS Silverlight. However, personnel from the sponsor's office stated that they were unaware that EGIS Silverlight had been deployed and thus had not taken steps to notify field personnel of this new application that could have helped to address EGIS performance problems. These Coast Guard officials were unable to explain how this communication breakdown had occurred.

Although the SDLC process has been in place since 2004, the Coast Guard has not adhered to this guidance for the development of more recent COP-related technology—Coast Guard One View, or CG1V.
The Coast Guard reported that it began development of a new GIS viewer—CG1V—in April 2010 to provide users with a single interface for viewing GIS information, including the COP, and to align the Coast Guard's viewer with DHS's new GIS viewer. However, the Coast Guard diverged from the SDLC process at the outset when in April 2012 the Coast Guard CIO placed CG1V into the second, rather than first, phase of the SDLC through a designation letter—the action that places an IT acquisition into the Coast Guard's technology development process. The designation letter states that CG1V shall enter the SDLC in its second phase, planning and requirements, rather than its first phase, conceptual planning—without any explanation as to why the system was being placed into the second rather than first phase. As a result, the Coast Guard began developing requirements for CG1V before it had defined how it planned to manage the development of CG1V or had defined the deliverables for each phase of the project. (Under the SDLC, "legacy systems" are systems that have already been developed or reached a stage of maturity but have not completed the necessary SDLC products, or that were developed prior to implementation of the SDLC.) CG1V's designation into the planning and requirements phase did not, from the outset, follow the process outlined in the SDLC guidance. Although officials stated in October 2012 that efforts were underway to complete phase one documents for CG1V, as of February 2013, almost a year after CG1V's April 2012 designation into the second phase of the SDLC, the Coast Guard has not completed two of the five key documents needed to exit the first phase. For example, the business case—an SDLC-required document that presents the problem to be solved, the solution being proposed, and the expected value of the project—has not been completed by the Coast Guard.
This document is also used by management to determine if staff or other resources are to be devoted to defining and evaluating alternative ways to respond to the identified need or opportunity. In addition to not having the business case completed, the acquisition strategy had also not been completed as of February 2013. The acquisition strategy lays out the funding source for the project and the anticipated costs of completing the planning and requirements phase. It also includes a review of the business case to ensure the efficient utilization of Coast Guard resources. As we have previously reported, when managing the C4ISR program, the Coast Guard had inadequate or incomplete acquisition documentation. Specifically, we reported in July 2011 that the Coast Guard's C4ISR project lacked the technical planning documents necessary both to articulate the vision of a common C4ISR baseline—a key goal of the C4ISR project—and to guide the development of the C4ISR system in such a way that the system on each asset remains true to the vision. With respect to our ongoing review of CG1V, by not completing the business case and acquisition strategy, the Coast Guard may have prematurely selected CG1V as a solution without reviewing other viable alternatives to meet its vision, and may also have dedicated resources to CG1V without knowing the project costs. In addition to not completing two key phase one documents, the Coast Guard has also developed other SDLC documents for CG1V out of sequence. Specifically, the SDLC manual states that the tailoring plan is to be developed in conceptual planning before other documents, such as the Functional Requirements Document, are created. The tailoring plan is to provide a clear and concise listing of SDLC process requirements throughout the entire system lifecycle, and facilitates the documentation of calculated deviations from standard SDLC activities, products, roles, and responsibilities from the outset of the project.
Though the SDLC manual clearly states that the tailoring plan is a key first step in the SDLC, CG1V's tailoring plan was not approved until February 2013, almost a year after CG1V was designated into the SDLC. Coast Guard officials stated that they were completing some documents retroactively because projects cannot exit any phase of the SDLC process without completing the documents required in each of the preceding phases. For example, to exit the conceptual planning phase and enter into the planning and requirements phase, projects must complete all of the exit criteria—which include several key documents—from the conceptual planning phase. Similarly, the Functional Requirements Document—a phase two document—was also drafted out of sequence in March 2012, but in this case it was drafted early—a full month before CG1V was even designated into the SDLC. By completing these documents out of sequence, the Coast Guard did not follow the disciplined activities and product outputs of the SDLC to ensure that appropriate information is gathered and monitored to support investment decisions.

The Enterprise Architecture Board, which provides guidance through reviews of Coast Guard information technology investments, is required to review all Coast Guard C4&IT acquisitions to determine whether a project is aligned with the Coast Guard's enterprise architecture and whether an equivalent capability already exists in the enterprise architecture. Although its review was conducted later in the process than might be expected under the SDLC, according to Coast Guard officials, the Enterprise Architecture Board confirmed that CG1V was in alignment with the Coast Guard's enterprise architecture. However, the Enterprise Architecture Board also placed conditions on CG1V's development—including a requirement that CG1V program officials continue working to complete SDLC requirements for this program.
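The phase-gate rule described above, under which a project cannot exit an SDLC phase until that phase's required documents are complete, can be sketched as a simple check. The phase and document names below mirror the report, but the required-document sets are an illustrative subset assumed for this sketch, not the SDLC manual's actual exit criteria.

```python
# Illustrative subset of documents assumed to gate each SDLC phase.
REQUIRED_DOCS = {
    "conceptual planning": {"tailoring plan", "business case",
                            "acquisition strategy"},
    "planning and requirements": {"functional requirements document"},
}

def may_exit(phase, approved_docs):
    """True only if every document required by `phase` has been approved."""
    return REQUIRED_DOCS[phase] <= set(approved_docs)

# CG1V as of February 2013: tailoring plan approved, but the business case
# and acquisition strategy still outstanding, so the gate is not met.
print(may_exit("conceptual planning", {"tailoring plan"}))  # prints False
```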
As figure 6 shows, most SDLC documents for phase one have either been completed out of sequence or have not been completed at all. If the Coast Guard does not adhere to the SDLC as prescribed, the agency runs the risk of missing early opportunities to identify and address problems if CG1V's development falls behind schedule, runs over its budget, or does not meet user needs when it is deployed. Officials from the CIO's office emphasized the importance of the phase one documentation to the integrity of the SDLC and stated that they intended to have the required documents from phase one completed and approved. They also stated that by following the SDLC for CG1V they would encourage other Coast Guard program offices to follow it for their projects. Moreover, a key official from the CIO's office stated in January 2013 that while the SDLC has been around for almost 9 years, the process was slow to ramp up in the Coast Guard and there still remains a lack of awareness around the process. For example, he said that sometimes SDLC-required documents get drafted but the project sponsors do not see them as important and thus do not review them in a timely manner. He also noted that sometimes officials do not follow the process because they are working to meet a deadline. He added, however, that following the SDLC is important because it can help the Coast Guard achieve more successful implementation of new systems.

The Coast Guard has increased the amount of information included in the COP and the number of users with access to COP information. However, the Coast Guard has encountered and continues to encounter many challenges in implementing its COP goals. These challenges are exemplified by the difficulty the Coast Guard has had with implementing C4ISR systems and COP tools such as WatchKeeper. We have documented these challenges individually in several prior reports but recognize here the broader impact of the challenges with these systems on the Coast Guard's COP.
For example, in 2011, we reported that the Coast Guard's C4ISR project had not met its intended goals, and we see now in 2013 that its benefit to the COP has been more limited than originally planned. In 2012, we reported that the Coast Guard's effort to implement WatchKeeper as a COP tool had made some progress, but the lack of port partners utilizing WatchKeeper and its inability to add information from local sensors jeopardized its purpose of improving information sharing and enhancing MDA across federal, state, and local port partners. These limitations have also affected the robustness and utility of the information contained in the COP. In this review we again found IT implementation challenges—in this case related to the Coast Guard's implementation of EGIS. These challenges, which Coast Guard officials acknowledged, resulted in numerous technical shortcomings and unsatisfied users. Coast Guard officials stated that some of EGIS's implementation challenges could have been avoided if they had followed the SDLC process when developing EGIS. The Coast Guard is now developing a new COP-related technology, CG1V, and early efforts—such as not completing certain documentation and completing other documentation out of sequence—demonstrate that the Coast Guard was again not adhering to its own guidance. Although the Coast Guard has subsequently completed one of the documents required in the conceptual planning phase and IT officials expressed the intent to adhere to the SDLC process during the course of our review, there still appears to be some lack of awareness surrounding the SDLC process. Given the current budget environment and the resource-intensive nature of developing IT systems, the Coast Guard must be especially prudent in expending resources that help it accomplish its missions.
Clarifying the applicability of the SDLC process would mitigate the risks of implementation challenges and maximize the potential contribution of future technology development for the COP.

To better ensure that the Coast Guard follows the SDLC as required, we recommend that the Commandant of the Coast Guard direct the Coast Guard Chief Information Officer to issue guidance clarifying the application of the SDLC for the development of future projects.

We provided a draft of this report to the Department of Homeland Security for comment. In its written comments, reprinted in appendix II, DHS concurred with our recommendation. In addition, DHS provided technical comments, which we incorporated as appropriate. With regard to our recommendation that the Coast Guard issue guidance clarifying the application of the SDLC for the development of future projects, DHS stated that the Coast Guard will review, clarify, and issue guidance related to the applicability of the SDLC process to mitigate risks of implementation challenges and maximize the potential contribution of future technology development for the COP.

As arranged with your office, unless you publicly announce its contents earlier, we plan on no further distribution of this report until 30 days after its issue date. At that time we will send copies of this report to the Secretary of Homeland Security, the Commandant of the Coast Guard, and interested congressional committees as appropriate. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III.
System Descriptions

A system that provides commanders a single, integrated, scalable command and control system that fuses, correlates, filters, maintains, and displays location and attribute information on friendly, hostile, and neutral forces. It integrates this data with available intelligence and environmental information in support of command decision making.

An application that displays base maps and charts, Coast Guard-specific information on facilities and waterways, as well as dynamic data relating to Coast Guard cases and activities.

A Microsoft Windows-based system that displays the COP from a GCCS-based server and allows users to view near real-time situational awareness. C2PC enables users to view and edit the COP, apply overlays, display imagery, as well as send and receive tactical messages.

A software system the Coast Guard uses for maritime search and rescue planning. Among other things, it can handle multiple rescue scenarios; model pre-distress motion and hazards; account for the effects of previous searches; make requests and receive real-time data from an environmental data server; manually input wind and current information via a sketch tool using objective analysis techniques; and use the latest drift algorithms to project the drift of the survivors and craft.

A system that monitors and tracks commercial vessels on the coast and in port areas using radar, cameras, and Automatic Identification System (AIS) sensors.

WebCOP is an Internet browser-based viewer of the COP that features vessel profiling and access to unclassified databases including MISLE and the Ship Arrival Notification System. It also includes access to real-time video feeds, voice communications, and collaborative tools (chat).

The Coast Guard's geographic information system used to view and manage information about geographic places, analyze spatial relationships, and model spatial processes. EGIS can display this information on multiple viewers.
As part of the Interagency Operations Center acquisition project, WatchKeeper is a Coast Guard system originally designed to gather data from sensors and port partner sources to provide situational awareness to Coast Guard personnel and port partners. Through WatchKeeper, Coast Guard personnel and port partners have access to the same data.

The Coast Guard is currently in the process of disposing of the Hawkeye system.

Stephen L. Caldwell, (202) 512-9610 or [email protected].

In addition to the contact named above, Dawn Hoff, Assistant Director; Jonathan Bachman; Bintou Njie; and Julian King made significant contributions to this report. In addition, William Carrigg and Karl Seifert provided technical assistance with information-technology issues; Michele Fejfar assisted with design and methodology; Tracey King provided legal support; Jessica Orr and Anthony Pordes provided assistance in report preparation; and Eric Hauswirth developed the report's graphics.
To facilitate its mission effectiveness and share maritime situational awareness, the Coast Guard developed its COP—a map-based information system shared among its commands. The COP displays vessels, information about those vessels, and the environment surrounding them on interactive digital maps. COP information is shared via computer networks throughout the Coast Guard to assist with operational decisions. GAO was requested to evaluate the Coast Guard's development of COP-related systems. GAO assessed the extent to which the Coast Guard (1) has made progress in making information included in the COP available to users and any challenges it has encountered in implementing COP-related systems, and (2) followed its approved information technology development guidance when developing new technology. GAO conducted site visits to six Coast Guard sector commands and five district command centers, selected based on geography to engage a broad range of COP users; analyzed Coast Guard policies and documents; and interviewed Coast Guard headquarters officials managing the COP's development and implementation.

The U.S. Coast Guard, a component of the Department of Homeland Security, has made progress in developing its Common Operational Picture (COP) by increasing the information in the COP and increasing user access to this information, but the Coast Guard has also faced challenges in developing COP-related systems. The Coast Guard has made progress by adding internal and external data sources that allow for better maritime domain awareness—the effective understanding of anything associated with the global maritime domain that could affect the United States. In addition, the COP has made information from these sources available to more COP users and decision makers throughout the Coast Guard. However, the Coast Guard has also experienced challenges in meeting the COP's goals and implementing systems to display and share COP information.
For example, it experienced challenges when it deployed its Enterprise Geographic Information System (EGIS), a tool that did not meet user needs. The challenges Coast Guard personnel experienced with EGIS included system slowness and displays of inaccurate information. Our prior work found similar challenges with other Coast Guard COP-related systems not meeting intended objectives. For example, in February 2012, GAO reported that the intended information-sharing capabilities of the Coast Guard's WatchKeeper software, a major part of the $74 million Interagency Operations Center project, did not meet port partners' needs, in part, because the agency failed to determine these needs. The Coast Guard has not followed its own information technology development guidance when developing new technology. A recent example occurred in 2012 when the agency did not follow its System Development Life Cycle (SDLC) guidance during its initial development of Coast Guard One View (CG1V), its new planned COP viewer. The SDLC requires documents to be completed during specific phases of product development. The Coast Guard, however, did not follow this process during the early development of CG1V. Specifically, we found in February 2013, 9 months after CG1V had entered the SDLC, that the Coast Guard either had not created certain required documents or had created them outside of the sequence prescribed by the SDLC. For example, the SDLC-required tailoring plan is intended to provide a clear and concise listing of SDLC process requirements throughout the entire system lifecycle and to facilitate documentation of calculated deviations from standard SDLC activities, products, roles, and responsibilities from the outset of the project. Though the SDLC clearly states that the tailoring plan is a key first step in the SDLC, for CG1V it was not written until after documents required in the second phase were completed.
Coast Guard officials stated that the tailoring plan was completed late because the Coast Guard's Chief Information Officer had allowed the project to start in the second phase of the SDLC, believing it to be a proven concept. Without key phase one documents, the Coast Guard may have dedicated resources without knowing project costs. In October 2012, Coast Guard officials acknowledged the importance of following the SDLC process and stated their intent to complete the SDLC-required documents. Clarifying the application of the SDLC to new technology development would better position the Coast Guard to maximize the usefulness of the COP. GAO recommends that the Coast Guard clarify the application of the SDLC for the development of future technology projects. DHS concurred with our recommendation.
|
Economic growth—which is central to many of our major concerns as a society—requires investment, which, over the longer term, depends on saving. The nation’s saving consists of the private saving of households and businesses and the saving or dissaving of all levels of government. In general, government budget deficits represent dissaving—they subtract from national saving by absorbing funds that otherwise could be used for investment. Conversely, government surpluses add to saving. Since the 1970s, private saving has declined while federal budget deficits have consumed a large share of these increasingly scarce savings. The result has been to decrease the amount of national saving potentially available for investment. Since we last reported on this issue in 1995, private saving has remained low. However, federal budget deficits have declined significantly from the levels of the 1980s and early 1990s, freeing up some additional funds for investment. (See figure 1.) Nevertheless, total national saving and investment remain significantly below the levels experienced in the 1960s and 1970s. Economists have noted that these low levels of saving and investment raise concerns for the nation’s future productive capacity and future generations’ standard of living. As we have said in our earlier reports, the surest way to increase the resources available for investment is to increase national saving, and the most direct way for the federal government to increase national saving is to achieve and maintain a balanced federal budget. Running budget surpluses would further increase saving and allow the government to reduce the level of federal debt held by the public. Our earlier work concluded that without further policy action, commitments in federal retirement and health programs would together become progressively unaffordable for the nation over time, and the economic consequences would force belated and painful policy choices. 
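The saving arithmetic described above, in which government deficits represent dissaving that subtracts from national saving, can be illustrated with a minimal sketch. All dollar amounts below are hypothetical figures chosen for illustration, not numbers from this report:

```python
# Hedged illustration of the national saving identity described above.
# All dollar amounts are hypothetical, not figures from the report.
private_saving = 700.0      # households and businesses, $ billions (assumed)
state_local_balance = 20.0  # state and local government surplus (assumed)
federal_balance = -100.0    # a federal deficit represents dissaving (assumed)

# A deficit subtracts from national saving; a surplus would add to it.
national_saving = private_saving + state_local_balance + federal_balance
print(national_saving)  # 620.0
```

In this stylized accounting, eliminating the $100 billion deficit would raise national saving to $720 billion, freeing those funds for investment.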
Growing deficits and the resulting lower saving would lead to dwindling investment, slower growth, and finally a decline in real GDP. Living standards, in turn, would at first stagnate and then fall. These findings supported our conclusion that action on the deficit might be postponed, but it could not be avoided. The results of our past work have been very similar to the conclusions reached by other government entities and private analysts. Most notably, CBO published analyses based on its long-term model work in 1996 and 1997 that corresponded with our main findings. Also, in 1994-95, the Bipartisan Commission on Entitlement and Tax Reform reached similar conclusions in its study of future fiscal trends. Since our 1995 report, robust economic growth and policy action have combined to sharply reduce the deficit and are projected by CBO to result in budget surpluses in the near term. This report addresses the outlook for the budget over the longer term. We will explore how recent progress affects this outlook and the fiscal and economic impacts associated with alternative long-term fiscal policy strategies. In recent years, the federal deficit has declined substantially from $290 billion in fiscal year 1992—4.7 percent of GDP—to a CBO projected level of $23 billion in fiscal year 1997—0.3 percent of GDP, which would be the lowest level since 1974. This improvement is due, in part, to deficit reduction initiatives enacted in 1990 and 1993 as well as to subsequent spending restraint. The Balanced Budget Act of 1997, coupled with the strong recent performance of the economy, is expected to extend this recent progress by achieving a balanced budget in 2002 followed by several years of budget surpluses on a unified budget basis. The decline in the deficit has significantly slowed growth in the federal debt held by the public. 
As a share of GDP, this commonly used measure of federal debt is projected by CBO to decline from about 50 percent in fiscal year 1993 to 30 percent in 2007. The improving fiscal outlook over the near term carries longer term benefits as well, as illustrated by comparing our current “no action” simulation with our 1992 and 1995 modeling results. (See figure 2.) Our initial modeling work in 1992 indicated that even in the short term, prospective deficits would fuel a rapidly rising debt burden. Intervening economic and policy developments led to some improvement by the time we issued our 1995 report, as shown by a modest shift outward of the “no action” deficit path. Nonetheless, both our 1992 and 1995 “no action” simulations indicated that deficits would have reached 20 percent of GDP in the 2020s. In contrast, the 1997 “no action” path—which follows CBO’s 10-year forecast—indicates small and shrinking deficits over the next few years, followed by a decade of surpluses. Following the enactment of the BBA in 1997, our simulation indicates that deficits would not reach the 20-percent level until nearly 2050. For purposes of comparison, the highest deficit level reached since World War II was 6.1 percent of GDP in 1983. Figure 3 illustrates the improvement in the long-term outlook for the federal debt as a share of GDP stemming from recent policy actions and economic developments. These recent fiscal improvements represent substantial progress in the near term toward a more sustainable fiscal policy. However, longer term problems remain. As in our earlier work, a “no action” policy remains unsustainable over the long term. (See figure 2.) While the federal budget would be in surplus in the first decade of the 21st century, deficits would reemerge in 2012, soon after the baby boom generation begins to retire. These deficits would then escalate, exceeding 6 percent of GDP before 2030 and exceeding 20 percent of GDP by 2050. 
A comparison of federal debt to the size of the economy tells a similar story—near-term improvement followed by potentially unsustainable growth as the baby boomers retire. (See figure 3.) In the early years of the simulation period, budget surpluses produce a substantial reduction in the absolute size of the debt as well as in the relationship of debt to GDP, from today’s level of around 50 percent to about 20 percent in 2015. However, at that point, the debt to GDP ratio begins to rise rapidly, returning to today’s levels in the late 2020s and growing to more than 200 percent by 2050. Such levels of deficits and debt imply a substantial reduction in national saving, private investment, and the capital stock. Given our labor force and productivity growth assumptions, GDP would inevitably begin to decline. These negative effects of rapidly increasing deficits and debt on the economy would force action at some point before the end of the simulation period. Policymakers would likely act before facing probable consequences such as rising inflation, higher interest rates, and the unwillingness of foreign investors to invest in a weakening American economy. Therefore, as we have noted in our past work, the “no action” simulation is not a prediction of what will happen in the future. Rather, it underscores the need for additional action in the future to address the nation’s long-term fiscal challenges. The primary causes of the large deficits in the “no action” simulation are (1) the aging of the U.S. population, which corresponds to slower growth in the labor force and faster growth in entitlement program spending, and (2) the rising costs of providing federal health care benefits. In 2008, the first baby boomers will be eligible for early retirement benefits. As this relatively large generation retires, labor force growth is expected to slow considerably and, eventually, stop altogether. These demographic changes mean fewer workers to support each retiree. 
Between 1997 and 2025, the number of workers per Social Security beneficiary is projected to drop by 33 percent. Without a major increase in productivity, low labor force growth will inevitably lead to slower growth in the economy and in federal revenue. As slow growth in the labor force constrains revenue growth, the large retired population will place major expenditure demands on Social Security, Medicare, and Medicaid. The Social Security trustees estimate that in just 15 years the program’s tax revenue will be insufficient to cover current benefits. While the recent Balanced Budget Act included some actions to restrain growth in Medicare spending and increase income from beneficiary premiums, the program is still expected to grow faster than the economy over the next several years. According to CBO estimates, the Hospital Insurance Trust Fund portion of Medicare will be depleted in 2007, even before retiring baby boomers begin to swell the ranks of Medicare beneficiaries. Medicaid spending will also be under increasing pressure as the population ages because a large share of program spending goes to cover nursing home care. In the “no action” simulation, Social Security spending as a share of GDP increases by nearly 50 percent between now and 2030. By 2050, it approaches twice today’s level. Health care spending, fueled by both an increased number of beneficiaries and (in the early years of the simulation period) rising per beneficiary costs, would grow even more rapidly—doubling as a share of GDP by 2030 and tripling by 2050. As Social Security and health spending rise, their share of federal spending grows tremendously. (See figure 4.) By the mid-2040s, spending for these programs alone would consume more than 100 percent of federal revenues. After initially declining, interest spending also increases significantly in the “no action” simulation.
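The decline in workers per Social Security beneficiary noted above is simple arithmetic. Only the 33 percent drop comes from the report; the starting ratio of 3.3 workers per beneficiary is an assumed value for illustration:

```python
# Hedged arithmetic behind the projected decline in workers per Social
# Security beneficiary. Only the 33 percent drop comes from the report;
# the 3.3 starting ratio is an assumed value for illustration.
workers_per_beneficiary_1997 = 3.3   # assumed starting ratio
projected_decline = 0.33             # 33 percent drop, per the report
workers_per_beneficiary_2025 = workers_per_beneficiary_1997 * (1 - projected_decline)
```

Under these assumed figures, roughly 2.2 workers would support each beneficiary in 2025, so payroll tax revenue per beneficiary falls by a third unless wages or tax rates rise.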
In the early years of the simulation period, budget surpluses reduce the burden of interest spending on the economy. However, when the surpluses give way to deficits, this decline is reversed. Growing deficits add substantially to the national debt. Rising debt, in turn, raises spending on interest, which compounds the deficit problem, resulting in a vicious circle. The effects of compound interest are clearly visible in figure 5, as interest spending rises from about 3 percent of GDP in 1997 to over 12 percent in 2050. Alternatives to a “no action” policy illustrate the fiscal and economic benefits associated with maintaining a sustainable course. According to one definition, under a sustainable fiscal policy, existing government programs can be maintained without a continual rise in the debt as a share of GDP. Under an unsustainable policy, such as “no action,” the debt continually rises as a share of GDP. As illustrated in our past reports and CBO’s work, a number of different policy paths could be sustained over the long term. In our current work, we tested three different long-term fiscal strategies, one that would allow for modest deficits, one that would maintain a balanced budget, and one that would include an extended period of surpluses. (See figure 6.) The “constant debt burden” simulation follows the “no action” path through 2015. From this point on, the debt is held constant as a share of GDP, rather than increasing as in the “no action” simulation. To prevent the debt burden from rising from its 2015 level of about 20 percent of GDP, the federal government would have to hold annual deficits to roughly 1 percent of GDP. While not insignificant, this deficit level is relatively small compared to the federal deficits of recent years or to deficits in other industrial nations. For example, the European Union has established a deficit target of 3 percent of GDP for countries participating in the common currency arrangement. 
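The interest-compounding dynamic noted above, in which rising debt raises interest spending, which in turn widens the deficit, can be sketched as a simple recursion on the debt-to-GDP ratio. The starting values and rates below are illustrative assumptions, not the report's actual projections:

```python
# Hedged sketch of the deficit-debt-interest feedback ("vicious circle")
# described above. Parameter values are illustrative assumptions, not the
# report's actual projections.

def debt_path(years, debt_gdp=0.5, primary_deficit_gdp=0.02,
              interest_rate=0.051, nominal_gdp_growth=0.04):
    """Debt/GDP evolves as d' = d * (1 + r) / (1 + g) + primary deficit."""
    path = [debt_gdp]
    for _ in range(years):
        debt_gdp = (debt_gdp * (1 + interest_rate) / (1 + nominal_gdp_growth)
                    + primary_deficit_gdp)
        path.append(debt_gdp)
    return path

# With the interest rate above the growth rate and a persistent primary
# deficit, the debt ratio rises without bound, and interest spending
# (interest_rate * debt) rises with it, widening the total deficit.
ratios = debt_path(30)
interest_spending_share = [0.051 * d for d in ratios]
```

The sketch shows why the "no action" path is self-reinforcing: once the interest rate on the debt exceeds the economy's growth rate, even a constant primary deficit produces an ever-rising debt ratio and interest burden.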
The “maintain balance” simulation also follows the “no action” path for the early part of the simulation period. In 2012—the year that deficits reemerge under “no action”—a balanced unified budget would be achieved. Balance would then be sustained through the remainder of the simulation period. Going beyond balance by running larger budget surpluses for a longer period of time than in the other simulations would yield additional economic benefits by further raising saving and investment levels. For our “surplus” simulation, we chose as a goal ensuring that annual Social Security surpluses (including interest that is credited to the fund) add to national saving. To achieve this goal, the federal government would run unified budget surpluses equal in size to the annual Social Security surpluses—which the Social Security Trustees estimate will peak at $140 billion in 2009. Such a policy means that the rest of the federal government’s budget would be in balance. Social Security’s surpluses (including interest income) are projected to end in 2018. Beginning in 2019, our simulation follows a unified budget balance identical to the path in our balance simulation. Figure 7 shows the debt-to-GDP paths associated with the various simulations. Under the “constant debt burden” simulation, the debt-GDP ratio remains around 20 percent, which is the lowest point reached in “no action.” Under both the “balance” and “surplus” simulations, the debt-GDP measure would decline to less than 10 percent of GDP—levels that the United States has not experienced since before World War I. Each of the alternative simulations would require some combination of policy or program changes that reduce spending and/or increase revenues. We make no assumptions about the mix of those changes in our analysis. We recognize that such actions would not be taken without difficulty. They would require difficult choices resulting in a greater share of national income devoted to saving. 
While consumption would be reduced in the short term, it would be increased over the long term. Early action would permit changes to be phased in and so give those affected by changes in, for example, Social Security or health care benefits, time to adjust. For both the federal government and the economy, any of the three alternative simulations indicates a vast improvement over the “no action” path. Sharply reduced interest costs provide the most striking budgetary benefit from following a sustainable policy. Currently, interest spending represents about 15 percent of federal spending, a relatively large share that is a consequence of the deficits of the 1980s and early 1990s. As noted above, after shrinking in the early years of the “no action” simulation, interest costs increase sharply over the long term. In contrast, under the alternative simulations, the interest burden shrinks dramatically. (See figure 8.) By 2050, under either a balance or surplus policy, interest payments would represent 1 percent or less of total spending. Even under the less austere “constant debt burden” simulation, interest would account for only about 5 percent of spending. The economic benefits of a sustainable budget policy include increased saving and investment levels and faster economic growth, which results in higher living standards. For example, under any of our alternative simulations, per capita GDP would nearly double between 1996 and 2050. In contrast, under “no action,” growth in living standards would slow considerably and living standards themselves would actually begin to decline around 2040. By 2050, they would be nearly 40 percent lower than under the balance simulation. This difference results from a wide gap in private investment. Under “no action,” large deficits eventually drive private investment spending down to zero while, for example, a balanced budget policy could produce a doubling of investment, as shown in table 1. 
In the “no action” simulation, capital depreciation would outweigh investment, resulting in a diminishing capital stock and, eventually, contributing to a falling GDP. Figure 9 compares the path of per capita GDP under “no action” to a balanced budget policy. The comparison graphically shows the emerging gap in long-term living standards that results from differing fiscal policy paths. Although the “maintain balance” path would lead to higher living standards, the rate of growth would be significantly lower than that experienced over the past 50 years. Attaining the historical rate would be extremely difficult given the slowdown in productivity growth that has occurred in recent decades. Long-term economic simulations are a useful tool for examining the balance between the government’s future obligations and expected resources. This longer term perspective is necessary to understand the fiscal and spending implications of key government programs and commitments extending over a longer time horizon. The future implications of current policy decisions reflected in our simulations and in other financial reports are generally not captured in the budget process. The budget is generally a short-term, cash-based spending plan focusing on the short- to medium-term cash implications of government obligations and fiscal decisions. Accordingly, it does not provide all of the information on the longer term cost implications stemming from the government’s commitments when they are made. While the sustainability of the government’s fiscal policy is driven primarily by future spending for Social Security and health care commitments, the federal government’s commitments and responsibilities extend far beyond these programs. These commitments may themselves result in large costs that can encumber future fiscal resources and unknowingly constrain the government’s future financial flexibility to meet all its commitments as well as any unanticipated or emerging needs.
Information about the cost of some of these commitments will be increasingly available as agencies produce audited financial statements. We anticipate that they will provide additional information on long-term commitments, including such items as environmental cleanup and insurance. For example, in its 1996 financial statements, the Department of Energy reported a cost of $229 billion to clean up its existing contaminated sites. The Department of Defense will also be developing and reporting cleanup costs in financial statements. The Office of Management and Budget has estimated that the government is likely to have to pay $31 billion in future claims resulting from the federal government’s insurance commitments. The first audited governmentwide financial statements will be issued for fiscal year 1997. This represents a key step in the government’s efforts to improve financial management and provide greater transparency and accountability for the costs of government commitments and programs. The key challenge facing budget decisionmakers is to integrate this information into the budget process. A range of options can be considered. A logical first step would be to include understandable supplemental financial information on the government’s long-term commitments and responsibilities in the budget. For example, in a recent report we concluded that supplemental reporting of accrual-based costs of insurance programs would improve recognition of the government’s commitments. Other options to refine the budget process or budget reporting to improve the focus on these commitments and prompt early action to address potential problems can be explored. For example, long-term simulations of current or proposed budget policies could be prepared periodically to help the Congress and the public assess the future consequences of current decisions.
Another option, which would supplement the current practice of tracking budget authority and outlays, would be to provide information to permit tracking the estimated cost of all long-term commitments created each year in the budget. In this report, the analysis of alternative fiscal policy paths relies in substantial part on an economic growth model that we adapted from a model developed by economists at the Federal Reserve Bank of New York (FRBNY). The model reflects the interrelationships between the budget and the economy over the long term and does not capture their interaction during short-term business cycles. The main influence of budget policy on long-term economic performance is through the effect of the federal deficit on national saving. Conversely, the rate of economic growth helps determine the overall federal deficit or surplus through its effect on revenues and spending. Federal deficits reduce national saving while federal surpluses increase national saving. The level of saving affects investment and, in turn, GDP growth. Budget assumptions in the model rely, to the extent practicable, upon the baseline projections in CBO’s September 1997 report, The Economic and Budget Outlook: An Update, through 2006, the last year for which CBO projections are available in a format usable by our model. These estimates are used in conjunction with our model’s simulated levels of GDP. For Medicare, we assumed growth consistent with CBO’s projections and the Health Care Financing Administration’s long-term intermediate projections from the Medicare Trustees’ April 1997 report. For Medicaid through 2006, we similarly assumed growth consistent with CBO’s budget projections. For 2007 and thereafter, we used estimates of Medicaid growth from CBO’s March 1997 report, Long-Term Budgetary Pressures and Policy Options. For Social Security, we used the April 1997 intermediate projections from the Social Security Trustees throughout the simulation period.
Other mandatory spending is held constant as a percentage of GDP after 2006. Discretionary spending and revenues are held constant as a share of GDP after 2006. Our interest rate assumptions are based on CBO through 2006 and then move to a fixed rate. (See appendix I for a more detailed description of the model and the assumptions we used.) We conducted our work from September to October 1997 in accordance with generally accepted government auditing standards. We received comments from experts in fiscal and economic policy on a draft of this report and have incorporated them as appropriate. We are sending copies of this report to the Ranking Minority Members of your Committees, interested congressional committees, the Director of the Congressional Budget Office, and the Director of the Office of Management and Budget. We will make copies available to others upon request. The major contributors to this report are listed in appendix II. If you have any questions concerning this report, please call me at (202) 512-9573. This update of GAO’s work on the long-term economic and budget outlook relies in large part on a model of economic growth developed by economists at the Federal Reserve Bank of New York (FRBNY). The major determinants of economic growth in the model include changes in the labor force, capital formation, and the growth in total factor productivity. To analyze the long-term effects of fiscal policy, we modified the FRBNY’s model to include a set of relationships that describe the federal budget and its links to the economy. The simulations generated using the model provide qualitative illustrations, not quantitative forecasts, of the budget or economic outcomes associated with alternative policy paths. The model depicts the links between the budget and the economy over the long term, and does not reflect their interrelationships during short-term business cycles. 
The main influence of budget policy on long-term economic performance in the model is through the effect of the federal deficit or surplus on national saving. Higher federal deficits reduce national saving while lower deficits or surpluses increase national saving. The level of saving affects investment and, hence, gross domestic product (GDP) growth. GDP is determined by the labor force, capital stock, and total factor productivity. GDP in turn influences nonfederal saving, which consists of the saving of the private sector and state and local government surpluses or deficits. Through its effects on federal revenues and spending, GDP also helps determine the federal budget deficit or surplus. Nonfederal and federal saving together constitute national saving, which influences private investment and the next period’s capital stock. Capital combines with labor and total factor productivity to determine GDP in the next period and the process continues. There are also important links between national saving and investment and the international sector. In an open economy such as the United States, a decrease in saving due to, for example, an increase in the federal budget deficit, does not require an equivalent decrease in investment. Instead, part of the saving shortfall may be filled by foreign capital inflows. A portion of the net income that results from such investments flows abroad. In this update, we retained the assumption in our prior work that net foreign capital inflows rise by one-third of any decrease in the national saving rate. Table I.1 lists the key assumptions incorporated in the model. The assumptions used tend to provide conservative estimates of the benefit of reducing deficits or running surpluses and of the harm of increasing deficits. The interest rate on the national debt is held constant, for example, even when deficits climb and the national saving rate plummets. 
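The feedback loop just described (national saving funds investment, investment builds the capital stock, and capital combines with labor and total factor productivity to determine the next period's GDP) can be sketched as a simple iteration. The Cobb-Douglas production form and all parameter values below are assumptions chosen for illustration, not the FRBNY/GAO model's actual specification; only the 1 percent productivity growth and one-third foreign offset come from the surrounding text:

```python
# Stylized sketch of the feedback loop described above: national saving
# funds investment, investment builds the capital stock, and capital
# combines with labor and total factor productivity to determine the next
# period's GDP. The Cobb-Douglas form and parameter values are assumptions
# for illustration, not the FRBNY/GAO model's specification.

ALPHA = 0.3           # capital's share of output (assumed)
DEPRECIATION = 0.05   # annual depreciation rate of capital (assumed)
TFP_GROWTH = 0.01     # productivity advances 1 percent per year (per the report)
FOREIGN_OFFSET = 1/3  # inflows replace 1/3 of a saving shortfall (per the report)

def step(capital, labor, tfp, nonfederal_saving_rate, federal_balance_gdp):
    gdp = tfp * capital**ALPHA * labor**(1 - ALPHA)
    # National saving = nonfederal saving plus the federal surplus or deficit.
    national_saving = (nonfederal_saving_rate + federal_balance_gdp) * gdp
    # A federal deficit lowers saving; foreign capital fills part of the gap.
    shortfall = max(0.0, -federal_balance_gdp) * gdp
    investment = national_saving + FOREIGN_OFFSET * shortfall
    next_capital = capital * (1 - DEPRECIATION) + investment
    return gdp, next_capital

capital, labor, tfp = 100.0, 50.0, 1.0  # illustrative starting values
for year in range(30):
    gdp, capital = step(capital, labor, tfp,
                        nonfederal_saving_rate=0.17,
                        federal_balance_gdp=-0.02)  # a 2%-of-GDP deficit
    tfp *= 1 + TFP_GROWTH
```

Raising `federal_balance_gdp` toward balance or surplus in this sketch increases investment each period and therefore the capital stock and GDP in later periods, which is the qualitative mechanism the simulations rely on.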
Under such conditions, the more likely result would be a rise in the rate of interest and a more rapid increase in federal interest payments than our results display. Another conservative assumption is that the rate of total factor productivity growth is unaffected by the amount of investment. Productivity is assumed to advance 1 percent each year even if investment collapses. Such assumptions suggest that changes in deficits or surpluses could have greater effects than our results suggest.

Table I.1: Key assumptions in the model
- Interest rate (average on the national debt): average effective rate implied by CBO’s interest payment projections through 2006; 5.1% thereafter (CBO’s 2006 implied rate)
- Budget surplus/deficit: follows CBO’s budget surplus/deficit as a percentage of GAO’s GDP through 2006; GAO simulations thereafter
- Other mandatory spending: CBO through 2006; increases at the rate of economic growth thereafter
- Medicare: CBO through 2006; increases at HCFA’s projected rate thereafter
- Social Security (OASDI): follows the Social Security Trustees’ Alternative II projections
- Discretionary spending: CBO’s assumed levels through 2006; increases at the rate of economic growth thereafter
- Receipts: CBO’s assumed levels through 2006; in subsequent years, receipts equal 19.9% of GDP (2006 ratio)

We have made several modifications to the model, but the model’s essential structure remains the same as in our previous work. We have incorporated the change in the definition of government saving in the National Income and Product Accounts (NIPA) adopted in late 1995 by adding a set of relationships determining government investment, capital stock, and the consumption of fixed capital. The more recent data prompted several parameter changes. For example, the long-term inflation rate is now assumed to be 2.7 percent, down from 3.4 percent in our 1995 report and 4.0 percent in our 1992 report. In this update, the average federal borrowing rate steadily declines to 5.1 percent, compared to our assumption of 7.2 percent in 1995 and 7.8 percent in 1992.
Our work also incorporates the marked improvement in the budget outlook stemming from the Balanced Budget Act of 1997 reflected in the 10-year budget projections that CBO published in September 1997. The distinction between the mandatory and discretionary components of the budget remains important. We adopted CBO’s assumption from their most recent 10-year forecast that discretionary spending equals the statutory caps from fiscal years 1998 through 2002 and increases at the rate of inflation from fiscal years 2003 through 2007. We assumed it would keep pace with GDP growth thereafter. Mandatory spending includes Health (Medicare and Medicaid), Old Age Survivors’ and Disability Insurance (OASDI, or Social Security), and a residual category covering other mandatory spending. Medicare reflects CBO’s assumptions through 2006 and increases at HCFA’s projected rate in subsequent years. Medicaid is based on CBO’s September 1997 assumptions; thereafter, it increases at the rates embodied in CBO’s March 1997 report on the long-term budget outlook. OASDI reflects the April 1997 Social Security Trustees’ Alternative II projections. Other mandatory spending is a residual category consisting of all nonhealth, non-Social Security mandatory spending. It equals CBO’s NIPA projection for Transfers, Grants, and Subsidies less Health, OASDI, and other discretionary spending. Through 2006, CBO assumptions are the main determinant of other mandatory spending, after which its growth is linked to that of GDP. The interest rates for 1997 through 2006 are consistent with the average effective rate implied by CBO’s interest payment projections. We assume that the average rate remains at the 2006 rate of 5.1 percent for the rest of the simulation period. Receipts follow CBO’s dollar projections through 2006. Thereafter, they continue at 19.9 percent of GAO’s simulated GDP, which is the rate projected for 2006. 
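The residual definition of other mandatory spending given above is a simple subtraction. All dollar figures below are hypothetical placeholders, not CBO's actual NIPA projections:

```python
# Hedged sketch of the residual calculation described above. All dollar
# figures are hypothetical placeholders, not CBO's actual projections.
transfers_grants_subsidies = 1000.0  # CBO NIPA projection, $ billions (assumed)
health = 350.0                       # Medicare plus Medicaid (assumed)
oasdi = 380.0                        # Social Security (assumed)
other_discretionary = 120.0          # (assumed)

# Other mandatory spending is whatever remains after subtracting the
# explicitly modeled categories from total transfers, grants, and subsidies.
other_mandatory = transfers_grants_subsidies - health - oasdi - other_discretionary
print(other_mandatory)  # 150.0
```

Defining the category as a residual keeps the spending components consistent with CBO's aggregate projection while letting Health and OASDI follow their own, separately modeled growth paths.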
As these assumptions differ somewhat from those used in our earlier reports, only general comparisons of the results can be made. Budget Issues: Budgeting for Federal Insurance Programs (GAO/AIMD-97-16, September 30, 1997). Retirement Income: Implications of Demographic Trends for Social Security and Pension Reform (GAO/HEHS-97-81, July 11, 1997). Addressing the Deficit: Budgetary Implications of Selected GAO Work for Fiscal Year 1998 (GAO/OCG-97-2, March 14, 1997). Federal Debt: Answers to Frequently Asked Questions (GAO/AIMD-97-12, November 27, 1996). Budget Process: Evolution and Challenges (GAO/T-AIMD-96-129, July 11, 1996). Deficit Reduction: Opportunities to Address Long-standing Government Performance Issues (GAO/T-OCG-95-6, September 13, 1995). The Deficit and the Economy: An Update of Long-Term Simulations (GAO/AIMD/OCE-95-119, April 26, 1995). Deficit Reduction: Experiences of Other Nations (GAO/AIMD-95-30, December 13, 1994). Budget Issues: Incorporating an Investment Component in the Federal Budget (GAO/AIMD-94-40, November 9, 1993). Budget Policy: Prompt Action Necessary to Avert Long-Term Damage to the Economy (GAO/OCG-92-2, June 5, 1992). The Budget Deficit: Outlook, Implications, and Choices (GAO/OCG-90-5, September 12, 1990). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. 
|
Pursuant to a congressional request, GAO updated its previous simulations of the long-term economic impact of federal budget policy following passage of the Balanced Budget Act of 1997. GAO noted that: (1) the balanced budget or surpluses that are projected in the Balanced Budget Act of 1997 would represent an enormous improvement in the federal government's fiscal position through the next 10 years; (2) the improvements in national saving and reduced debt and interest costs can be expected to produce tangible gains in economic growth and budgetary flexibility over the longer term as well; (3) as a result, the emergence of unsustainable deficits is substantially delayed under recently enacted fiscal policy; (4) if no further action were taken, GAO's simulations indicate that federal spending would grow faster than revenues soon after the baby boom generation begins to retire in 2008; (5) these higher spending levels would be driven by escalating health and Social Security costs; (6) rising interest costs would compound the deficit problem and take up an increasing share of the federal budget; (7) growing deficits, if unchecked, would eventually result in declining investment and capital stock and, inevitably, falling living standards; (8) over the long term, the "no action" scenario is unsustainable and timely policy action can avoid these economic consequences; (9) while a "no action" simulation is not a forecast of what will happen, it illustrates the nature of future fiscal challenges; (10) the alternative simulations illustrate the potential fiscal and economic benefits of achieving a sustainable budget policy; (11) a fiscal policy of balance through 2050 or extended periods of surplus, for example, could shrink the burden of federal interest costs considerably and also result in a larger economy over the long term; (12) all of these alternative policies would increase per capita GDP in 2050 by more than 35 percent over a "no action" policy, but
they would require additional fiscal policy changes; (13) some changes would be difficult to achieve, but over the long term they would strengthen the nation's economy and overall living standards; (14) early action would permit those affected by changes in, for example, Social Security or health care benefits time to adjust; (15) in considering what fiscal adjustments to make, policymakers need to be presented with more complete information on the costs of the government's existing long-term commitments; (16) the budget's current structure and reporting mechanisms have not focused attention on such commitments, nor has the budget process facilitated their explicit consideration; and (17) options to change budget reporting and process to improve recognition of these commitments and prompt early action warrant further exploration.
|
Most VA providers—including full- and part-time providers—are eligible for performance pay, but they may choose not to participate. Performance pay is given annually in a lump sum and may not exceed the lesser amount of $15,000 or 7.5 percent of a provider’s combined base and market pay, under the statute that defines the elements of provider pay. However, according to VHA officials, VA headquarters and each network and medical center have discretion to set a lower annual cap for their providers, and the amounts awarded may also depend on VA’s budget. Under the statute, performance pay is given on the basis of a provider’s achievement of specific goals and performance objectives, referred to in this report as goals. According to policy, performance pay goals can be established by medical center and network officials. As a result, the goals may be the same for all providers in the network, at a particular medical center, or within a particular specialty, or they may vary by individual provider. The amount of performance pay depends on the extent to which it is determined that a provider met his or her performance pay goals. Performance pay goals may include, for example, achieving a specific patient panel size, and are required by VA policy to be established within 90 days of the beginning of each fiscal year. VA’s performance pay policy requires medical centers to use form 10-0432 for each provider to document the goals and the provider’s achievement of them. When completed, the form is given to the medical centers’ respective human resources offices, which process the performance pay. Most providers are eligible for performance awards, although these awards are not required. The awards are lump-sum payments that are made annually and are based on providers’ annual performance reviews. VA policy requires that all providers—including full- and part-time providers—who have worked at VA medical centers receive annual performance reviews. 
For nonsupervisory providers, the reviews consist of a standard set of measures, including clinical, educational, and administrative competence; research and development; and personal qualities, such as dependability. A provider can receive an overall rating of unsatisfactory, low satisfactory, satisfactory, high satisfactory, or outstanding on an annual performance review. Service chiefs and chiefs of staff at VA medical centers, and providers at headquarters and networks who are in supervisory positions, also receive annual performance reviews based on measures outlined in VHA’s managerial performance plan. For nonsupervisory providers, the policy specifies that performance award amounts should not exceed $7,500. However, as with performance pay, VA headquarters, networks, and medical centers have discretion to determine the level of performance that would merit an award and the award amount within the $7,500 limit. For example, one medical center may grant performance awards to all providers who receive an overall performance rating of satisfactory or higher, while another medical center may only grant performance awards for an overall rating of outstanding, or not give them at all. VA policy states that VA medical centers may take actions, such as major adverse and disciplinary actions, against providers to address and correct deficiencies related to providers’ clinical performance. These actions range in severity from admonishment to termination of employment. In addition, VHA has the option of taking privileging action—that is, reducing, revoking, denying, or failing to renew a provider’s clinical privileges—against medical center providers to address performance deficiencies. Medical centers have discretion in determining the type of action appropriate for each provider’s performance issue. Performance-related personnel actions may serve as an indication that the provider has not delivered high-quality, safe health care.
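The statutory ceiling on performance pay described earlier (the lesser of $15,000 or 7.5 percent of a provider's combined base and market pay, possibly lowered further by a network or medical center cap) reduces to a small calculation. A minimal sketch, with hypothetical salary figures:

```python
# Sketch of the statutory performance pay ceiling: the lesser of $15,000
# or 7.5 percent of combined base and market pay. The local_cap parameter
# reflects the discretion, noted above, of headquarters, networks, and
# medical centers to set a lower cap. Salary figures below are hypothetical.

STATUTORY_CEILING = 15_000
STATUTORY_RATE = 0.075

def performance_pay_cap(base_pay, market_pay, local_cap=None):
    """Return the maximum annual performance pay for a provider."""
    cap = min(STATUTORY_CEILING, STATUTORY_RATE * (base_pay + market_pay))
    if local_cap is not None:
        cap = min(cap, local_cap)
    return cap

# 7.5 percent of a $180,000 combined salary is $13,500, below the ceiling.
print(performance_pay_cap(120_000, 60_000))
```

For higher-paid providers the 7.5 percent figure exceeds $15,000, so the flat ceiling binds instead; any local cap simply lowers the result further.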
VA’s performance pay policy gives VA’s 21 networks and 152 medical centers discretion in the establishment of performance pay goals for providers. The policy, issued in 2005 and revised in 2008, states that providers who meet established goals should receive performance pay. However, the policy does not provide an overarching purpose that the goals should support. VA’s Under Secretary for Health at the time the performance pay law was being considered stated in a congressional hearing that this pay would recognize providers’ achievements in quality, productivity, and support of the overall goals of the department. The Senate committee report and statements by members of Congress at the time the bill was passed provided that performance pay would recognize outstanding contributions to the medical center, to the care of veterans, or to the practice of medicine or dentistry, and it would motivate providers and ensure quality of care through the achievement of specific goals or objectives set for providers in advance. In addition, VA officials responsible for writing the performance pay policy also told us that the purpose of performance pay is to improve health care outcomes and quality; however, these goals are not documented in the policy. Officials at the four medical centers we visited differed in their views of what constituted appropriate goals for performance pay. The following are examples of the factors these officials thought should be considered when establishing performance pay goals. According to these officials, the goals should be objective and measurable, measure only clinical achievements, recognize performance that is above and beyond expectations, or be measured at the individual provider level to ensure that the provider has direct control over the achievement of the goals.
As a result of these differing views, our review of the goals established for a mental health provider at each of four medical centers we visited found similarities as well as differences in the fiscal year 2011 goals. For example, one medical center used clinical goals exclusively, while others used a combination of goal types, such as clinical, patient satisfaction, and research. Table 1 includes examples of the types of goals established for a mental health provider at each of the four medical centers we visited. VHA officials told us that they have not formally reviewed the various goals that have been established by individual medical centers and networks to determine the purpose or purposes these goals support. In 2009, the Principal Deputy Under Secretary for Health asked that a group be convened to solicit and compile performance pay goals in order to review the types of goals being developed for each physician specialty and dental service across VA’s health care system. VHA officials told us at the time of our review that they had not yet done this, but planned to begin compiling and discussing a list of useful goals sometime in 2013. Because VHA has not reviewed the goals that have been set across medical centers and networks, it cannot have reasonable assurance that the goals established make a clear link between performance pay and providers’ performance. This condition is inconsistent with federal standards for internal control activities, which includes management of human capital. Of the eight providers who were the subject of performance-related personnel actions in fiscal year 2010 or 2011 at the four medical centers visited, three providers were not eligible for performance pay during the same fiscal year. 
Two of the three providers were not eligible because they were terminated or resigned before the end of the fiscal year, and the third was not eligible because the provider was placed on indefinite suspension without pay and was not practicing as of the end of the fiscal year. One of the five eligible providers received $7,663 in performance pay, 67 percent of the amount for which the provider was eligible. Another provider was reprimanded for refusing to see assigned patients waiting in the emergency department because the provider believed that patients had been triaged inappropriately. As a result, wait times increased. Documentation provided by the medical center indicated that, of the 98 patients who were triaged to the emergency department that day, 15 patients waited for over 6 hours to be seen and 9 patients left without being seen. That same fiscal year, this provider received $7,500 out of a maximum of $15,000 in performance pay. Specifically, this provider had 13 performance pay goals, which included becoming a member of a committee, attending staff meetings, and ensuring that all provider notes were signed in accordance with medical center policy. The provider met 1 of the 13 goals, which was assigned a weight of 50 percent. This goal was not specific to the individual provider, but instead was based on the achievements of all emergency department providers; it included meeting performance measures, such as maintaining productivity despite reduced resources, and adhering to the medical center policy for length of stay for patients in the emergency room. Since the medical center determined that the emergency department providers, which included this provider, met this goal, the provider received 50 percent of the maximum amount of performance pay. The service chief told us that his preference would have been to deny performance pay to this provider altogether, but he was told that the provider was entitled to the pay.
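The payout in this case follows a weighted-goal calculation: the one goal the provider met carried a 50 percent weight, so the provider received 50 percent of the $15,000 maximum. A minimal sketch of that arithmetic (the even split of the remaining weight across the other 12 goals is assumed for illustration, since the text does not give those weights):

```python
# Weighted-goal payout sketch based on the example above: performance pay
# equals the maximum amount times the summed weights of the goals met.
# How the remaining 50 percent of weight was split across the other 12
# goals is not stated in the text, so an even split is assumed here.

def performance_pay(max_pay, goals):
    """goals: list of (weight, met) pairs whose weights sum to 1.0."""
    earned_share = sum(weight for weight, met in goals if met)
    return earned_share * max_pay

# One group goal weighted 0.50 was met; the other 12 goals were not.
goals = [(0.50, True)] + [(0.50 / 12, False)] * 12
print(performance_pay(15_000, goals))   # 7500.0
```

The sketch makes the service chief's objection concrete: a single heavily weighted group goal can drive half the maximum payout regardless of the individual provider's conduct.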
In contrast to the performance pay policy, VA’s performance award policy clearly states the purpose of these awards—specifically, that they are to recognize sustained performance of providers beyond normal job requirements as reflected in the provider’s most recent performance rating. VA policy also lists the measures supervisors are to use to determine the performance rating for providers in nonsupervisory positions, such as clinical competence. VA’s performance pay policy is unclear about how to document compliance with two requirements—the goal-setting discussion between the supervisor and provider and the approval of the performance pay amount. For the documentation of goal-setting discussions, the policy states that supervisors are to discuss established goals with individual providers within 90 days of the beginning of the fiscal year, but it does not specify how medical centers are to demonstrate compliance with this requirement. VA’s policy specifies that form 10-0432 is to be used for documenting performance pay. The form has space at the top for listing the goals that a provider must meet to receive performance pay. The form includes a signature and date box for the supervisor and for the provider, respectively. (See fig. 1, section A.) The VA officials who wrote the policy, and VHA officials who are responsible for helping medical centers implement it, told us they expect that provider and supervisor signatures on the top of the form would indicate that goals have been discussed and the date would indicate when this discussion took place. The officials told us that this date is to verify that the 90-day requirement was met, but they have not documented or provided this guidance to the medical centers. 
Because the policy does not specify how compliance with the 90-day requirement should be documented, one of the four medical centers we visited did not interpret the policy the way VA and VHA officials did, and therefore, did not document compliance with the 90-day requirement when administering performance pay. Officials responsible for processing the performance pay form at this medical center told us that VA does not have a requirement for documenting compliance with the 90-day requirement and they do not believe form 10-0432 should be used for this purpose. At this medical center, we found that none of the forms we reviewed were signed by the supervisor and provider within 90 days of the beginning of the fiscal year, and instead all the forms were signed after the end of the fiscal year. Officials from the other three medical centers said that the form should be signed by the provider and supervisor within 90 days, or as soon as possible after the beginning of the fiscal year to indicate that the goals have been communicated. However, our review of the documentation from these three medical centers indicates that their forms were not always signed within 90 days. For the second documentation requirement, the policy states that the performance pay amount must be approved and that performance payments for the fiscal year must be disbursed no later than March 31 of the following fiscal year, but does not state that the approving official must sign form 10-0432 by that date. VA’s policy also states that the supervisor should forward form 10-0432 to the designated approving official for action. The bottom of form 10-0432 includes a signature and date box for an approving official, who is to sign and date the form. (See fig. 1, section B.) VA and VHA officials told us they expect that medical centers will not disburse performance pay, which they are required to do by March 31 or earlier, unless the approving official’s signature and date are on the form. 
However, VA’s policy does not state that the approving official’s signature must be dated before March 31. Because the policy does not specify when the approving official should sign the form 10-0432, officials at the four medical centers had not all interpreted and implemented the policy the way VA and VHA officials did, and the medical centers differed in how they documented approval when administering performance pay. For example, one medical center official who became responsible for processing these forms in 2011 told us that he does not look for the approving official’s signature to be dated by March 31. At this medical center, all of the fiscal year 2011 forms 10-0432 we reviewed were signed by the approving official after March 31, which indicates that the payments were made after the required disbursement date or that payments were made before they were approved. For fiscal year 2010, when a different official at this medical center was responsible for processing the performance pay forms, we found that nearly all of the forms were signed by the approving official by March 31. At the other three medical centers, we found that some forms were signed by the approving officials after March 31 or not at all, even though officials at these medical centers told us they strive to meet this date. Further, VA’s policy lacks a requirement for documenting whether performance-related personnel actions had an impact on providers’ achievement of performance pay goals, and as a result, how these actions affected performance pay decisions, such as reducing or denying this pay. Some VHA headquarters officials we interviewed said that situations involving providers who had a performance-related personnel action would need to be reviewed case by case to determine if the action had an impact on whether the provider met established goals, since not all performance-related personnel actions would merit reducing or denying a provider’s performance pay. 
These officials told us they expected medical center officials to document their review of these actions when determining whether to give performance pay to these providers. However, these expectations are not explicit in VA’s performance pay policy. The documents provided by the four medical centers for five providers who had performance-related personnel actions did not include documentation that the actions were considered. Some medical center officials told us that they did not consider the performance-related personnel actions when making performance pay determinations, while others told us that they did but that they did not document it. Without a performance pay policy that clearly specifies how to document decisions or compliance with requirements, VHA does not have reasonable assurance that documentation includes all necessary information, a condition that is inconsistent with federal standards for internal control activities. In addition, medical centers will likely continue to vary in their interpretation of the policy’s requirements for documenting goal-setting within 90 days of the beginning of the fiscal year and subsequent approval of performance pay, and in the extent to which they document compliance. As a result, VHA does not have reasonable assurance that medical centers are complying with requirements. Further, without documentation of whether performance-related personnel actions affected performance pay decisions, VHA lacks information about how these decisions were made and whether these decisions appropriately reflected providers’ performance. VHA does not provide adequate oversight to ensure that its medical centers are in compliance and remain in compliance with performance pay and award requirements.
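The two deadlines at issue in the preceding discussion (goal-setting within 90 days of the fiscal year's start on October 1, and approval and disbursement by March 31 of the following fiscal year) lend themselves to a mechanical check of the signature dates on form 10-0432. A minimal sketch, with hypothetical dates:

```python
# Sketch of a date-compliance check for form 10-0432, per the two
# requirements discussed above: goals set within 90 days of the fiscal
# year's October 1 start, and approval no later than March 31 of the
# following fiscal year. The signature dates below are hypothetical.

from datetime import date, timedelta

def check_form(fiscal_year, goal_signed, approval_signed):
    fy_start = date(fiscal_year - 1, 10, 1)           # FY 2011 began Oct 1, 2010
    goal_deadline = fy_start + timedelta(days=90)
    approval_deadline = date(fiscal_year + 1, 3, 31)  # Mar 31 of the next fiscal year
    return {
        "goals_on_time": goal_signed <= goal_deadline,
        "approval_on_time": approval_signed <= approval_deadline,
    }

# Goals signed Nov 15, 2010 (on time); approval signed Apr 10, 2012
# (after the Mar 31, 2012 deadline) for fiscal year 2011.
print(check_form(2011, date(2010, 11, 15), date(2012, 4, 10)))
```

A check of this kind presupposes exactly what the report finds missing: a policy that says the dates on the form are the evidence of compliance.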
VHA’s Workforce Management and Consulting Office conducts annual Consult, Assist, Review, Develop, and Sustain (CARDS) reviews, which are consultative reviews that were initiated in 2011 to assist medical centers in complying with human resources requirements, including performance award requirements. According to VHA officials, the results of these reviews are provided to the medical centers. However, these reviews have limitations and, as a result, do not always ensure that medical centers comply and remain in compliance with human resources requirements. According to the lead CARDS reviewer, the reviews do not use a standard list of elements for reviewing performance pay requirements at medical centers, although CARDS reviews use a standard list of elements to review other human resources requirements, including performance awards. (The five CARDS reviewers take turns in the lead CARDS reviewer role on a yearly basis; this individual was the lead CARDS reviewer during our review, and as such can speak for all of the reviewers.) In addition, the CARDS reviewers do not have the authority to require medical centers to resolve compliance problems they identify, and VHA has not formally assigned responsibility to an organizational component with the knowledge and expertise of human resources issues to do this, a condition that is inconsistent with internal control standards for control environment. According to the lead CARDS reviewer, review procedures currently indicate that network human resources managers, who typically accompany CARDS reviewers on reviews, are to follow up with medical centers’ human resources offices to ensure identified problems are resolved. However, this reviewer said CARDS reviewers do not have the authority to require that this follow-up be done. Further, network human resources managers lack the authority to require medical center human resources managers to correct identified problems because medical center directors, not network human resources managers, typically have oversight authority over medical center human resources managers, according to VHA officials.
Figure 2 shows the organization of VHA offices that are involved in performance pay and awards for providers at medical centers. Each network director reports directly to the Deputy Under Secretary for Health for Operations and Management, and, generally, the applicable network human resources official attends the CARDS review. As a result of the limitations with CARDS reviews—the lack of a standard list of performance pay elements, as well as the lack of an organizational component assigned to follow up on noncompliance and ensure it is corrected—VHA is unable to ensure that medical centers correct the problems found by the reviews and that problems do not recur, a condition that is inconsistent with internal control standards for monitoring. We found that two of the four VA medical centers we visited did not always correct problems identified through CARDS reviews. For example, a May 2011 CARDS review of one of these two medical centers found that the medical center did not conduct a formal evaluation of its awards program, as required. A CARDS review of this same medical center about a year later found the identical problem. An October 2011 CARDS review of the second medical center found that the facility was not using the required form 4659 for performance awards, and we found that the same medical center did not use form 4659 for performance awards in fiscal years 2010 and 2011. We also found that another medical center was not using form 4659 for performance awards in fiscal years 2010 and 2011. Additionally, we found other instances of noncompliance at two of the four medical centers. For example, we found that one of the medical centers we visited used form 4659 for performance awards, but was unable to provide the form for two of the five providers who received awards in fiscal year 2011.
Further, we found that another medical center was unable to provide the required form 10-0432 for performance pay for six of the providers we reviewed in fiscal year 2010 or 2011. Also, for the providers’ forms that were available in fiscal year 2010, two forms did not indicate whether the goals were met to justify the performance pay amounts, as required by VA policy. Part of VHA’s responsibility for administering performance pay and awards is ensuring that providers understand the link between this compensation and their performance, according to federal internal control standards. However, VA’s performance pay policy does not state a purpose for this pay, and VHA, which administers this pay, has not reviewed the performance pay goals that have been established across VA medical centers and networks. Without stating a purpose for the pay and reviewing the goals, VHA cannot determine the purposes these goals support, and the relationship between performance pay and providers’ performance is unclear. All of the providers we reviewed who were eligible for performance pay in fiscal year 2010 or 2011 received this pay, including providers who had performance-related personnel actions taken against them. Because VA’s policy is silent on documenting whether performance-related personnel actions affected performance pay, none of the medical centers provided documentation that these actions were considered in making performance pay decisions. As a result, VHA lacks information about how these decisions were made and whether these decisions reflect providers’ performance. Moreover, because VA’s policy does not specify how compliance should be documented for certain performance pay requirements, such as discussion of goals and approval of amounts, VHA cannot ensure consistent compliance across its medical centers.
In addition, oversight of medical centers’ management of performance pay and awards is not adequate for VHA to have reasonable assurance that medical centers fully comply with requirements. VHA has not assigned responsibility to an organizational component to follow up on identified problems at medical centers, including problems identified during CARDS reviews, to ensure that they are corrected and remain corrected. Oversight that does not ensure that identified problems are resolved and remain so is inconsistent with federal standards for internal control, and may allow compliance problems to persist or worsen. To clarify VA’s performance pay policy, we recommend that the Secretary of Veterans Affairs direct the Assistant Secretary for Human Resources and Administration to take the following four actions to specify in policy: the overarching purpose of performance pay; how medical centers should document that supervisors have discussed performance pay goals with providers within the first 90 days of the fiscal year; that medical centers should document approval of performance pay amounts and that the approval occurred before the required March 31 disbursement date; and how medical center officials should document whether performance-related personnel actions had an impact on providers’ achievement of performance pay goals, and as a result, affected performance pay decisions. To ensure that performance pay goals are consistent with the overarching purpose that VA specifies for this pay, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to review existing performance pay goals across VA’s health care system.
To strengthen oversight of medical centers’ compliance with VA policy requirements for performance pay and awards, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions: (1) ensure medical centers are in compliance with the requirements in the performance pay and award policies and (2) assign responsibility to a VHA organizational component with the knowledge and expertise to ensure correction of medical centers’ noncompliance with VA’s performance pay and award policy requirements, including problems identified during CARDS reviews, and ensure that medical centers maintain compliance with these requirements. VA provided written comments on a draft of this report, which we have reprinted in appendix II. In its comments, VA generally agreed with our conclusions and recommendations. In response to our recommendations to clarify the performance pay policy, VA stated that it concurred with three of them and concurred in principle with the fourth. Regarding the overarching purpose of performance pay, VA stated that it will coordinate with VHA to develop a policy change that will clearly articulate the purpose of performance pay, which is to ensure and improve the quality of care through the achievement of specific goals and objectives. In response to our recommendations that VA specify how medical centers should document the goal-setting discussion between the supervisor and provider, and the approval of the performance pay amount, VA stated that it would revise the performance pay policy to include more detailed instructions for documenting compliance with these two requirements. In addition, VA said it would revise form 10-0432 to include sections for documenting compliance with these two requirements.
VA stated that it agreed in principle with our recommendation that medical centers should document whether performance-related personnel actions had an impact on providers’ achievement of performance pay goals, and affected performance pay decisions. VA agreed that medical center officials should consider whether a performance-related personnel action had an impact on the provider’s achievement of the goals and objectives associated with performance pay, but stated that it is inappropriate to require the documentation of the decision on form 10-0432. We appreciate VA’s commitment to clarify in policy that medical center officials should consider performance-related personnel actions when making performance pay decisions. We support VA’s flexibility as to where to document these considerations, which is why we did not specify in our recommendation that form 10-0432 should be used for this purpose. However, we continue to believe that if such considerations are not documented, VHA lacks information about how these decisions were made and whether these decisions appropriately reflected providers’ performance, which is inconsistent with federal internal control standards for documentation. To address our recommendation that the Under Secretary for Health review the existing performance pay goals across VA’s health care system to ensure that performance pay goals are consistent with the purpose specified in policy, VA stated that the Under Secretary for Health directed a committee on February 11, 2013, to conduct a review of policies and controls associated with the administration of performance pay, including evaluating challenges associated with establishing performance pay goals, inconsistent application of performance pay, and the overall perceived value of performance pay. 
In June 2013, the Under Secretary for Health directed a new task force to build upon the work of that committee and make recommendations for ensuring a consistent and system-wide process for setting and evaluating performance pay goals and granting performance pay, including making recommendations for reviewing and setting uniform performance pay goals across the system that are aligned to help VHA achieve its goals. To address our recommendations to strengthen oversight of medical centers’ compliance with VA policy requirements for performance pay and awards, VA stated that the Under Secretary for Health established in May 2013 a task force to develop and provide guidance and methodology for performance pay. In addition, VA stated that VHA’s Office of the Deputy Under Secretary for Health for Operations and Management will assign responsibility to the network directors, in coordination with the network human resources managers, network chief medical officers, medical center directors, medical center chiefs of staff, and medical center human resources managers, for monitoring and enforcing VA’s performance pay and award policies, and communicate that responsibility in a memorandum. Additionally, the Office of the Deputy Under Secretary for Health for Operations and Management will communicate in a memorandum to the network directors and human resources managers that they should monitor and track CARDS reviews and coordinate with the medical center directors and human resources managers to ensure proper corrective actions are taken for compliance. VA also provided a general comment that it considered our definition of performance-related personnel actions, defined in footnote 5 of the report, to be too broad. 
Specifically, VA stated that performance actions are taken when an employee lacks the skill and ability to perform assigned duties, and that in such situations providers are given assistance and an opportunity period to perform at an acceptable level of competence. VA also stated that if the provider fails to perform at an acceptable level during this period, the resulting performance action will be either a reduction of privileges or termination. However, VA’s policy on employee/management relations states that disciplinary actions—which include admonishments and reprimands—and major adverse actions—which include suspension, transfer, reduction in grade, and reduction in basic pay, in addition to termination—can be taken to address performance or conduct. In addition, VA stated that four of the five scenarios listed in appendix I of the report were conduct actions, not performance actions. As stated in footnote 5 of the report, we created the term performance-related personnel actions to include any action taken to address clinical performance—that is, an action that medical centers’ documentation indicated was related to patient safety or quality. Documentation provided by the medical centers for each of the five cases listed in appendix I clearly stated that the actions that were taken against the providers were related to patient safety or quality. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Veterans Affairs, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Performance-Related Personnel Actions and Performance Pay Amounts

A reprimand is an official letter of censure to a provider for deficiencies in competence. Reprimands can also be issued for acts of misconduct. A reprimand is a more severe disciplinary action than an admonishment, which is an official letter of censure to a provider for minor deficiencies in competence or conduct. VA officials did not provide documentation of the letter of alternative discipline but indicated that it was probably a reprimand. We included this case in our sample based on documentation provided by the medical center, which indicated that the action was closed in fiscal year 2010.

In addition to the contact named above, Mary Ann Curran, Assistant Director; Elizabeth Conklin; Kaitlin McConnell; Elizabeth T. Morrison; Lisa Motley; and Christina Castillo Serna made key contributions to this report.
|
VHA administers VA's health care system and strives to provide high-quality, safe care to veterans. Concerns continue about the quality of care VHA delivers, but many physicians and dentists, referred to as providers, receive performance-based pay and awards. In fiscal year 2011, about 80 percent of VHA's nearly 22,500 providers received approximately $150 million in performance pay, and about 20 percent received more than $10 million in performance awards. GAO was asked to review VHA's performance pay and award systems. This report examines (1) whether VA's performance pay and award policies ensure appropriate administration of this compensation and (2) VHA's oversight of medical centers' compliance with policy requirements. GAO reviewed documents and interviewed VA and VHA officials about the administration of performance pay and awards and VHA's oversight of the related policy requirements; analyzed data from a random sample of about 25 providers selected primarily from primary care, surgery, psychiatry, and dentistry at each of four medical centers GAO visited that had at least one provider who was the subject of an action related to clinical performance. The Department of Veterans Affairs' (VA) performance pay policy has gaps in information needed to appropriately administer this type of pay. The performance pay policy gives VA's 152 medical centers and 21 networks discretion in setting the goals providers must achieve to receive this pay, but does not specify an overarching purpose the goals are to support. VA officials responsible for writing the policy told us that the purpose of performance pay is to improve health care outcomes and quality, but this is not specified in the policy. Moreover, the Veterans Health Administration (VHA) has not reviewed the goals set by medical centers and networks and therefore does not have reasonable assurance that the goals make a clear link between performance pay and providers' performance. 
Among the four medical centers GAO visited, performance pay goals covered a range of areas, including clinical, research, teaching, patient satisfaction, and administration. At these medical centers, all providers GAO reviewed who were eligible for performance pay received it, including all five providers who had an action taken against them related to clinical performance in the same year the pay was given. The related provider performance issues included failing to read mammograms and other complex images competently, practicing without a current license, and leaving residents unsupervised during surgery. Moreover, VA's policy is unclear about how to document certain decisions related to performance pay. For example, the policy does not provide clear guidance on what to document regarding whether a provider's performance-related action should result in the reduction or denial of the provider's performance pay. In contrast to the performance pay policy, VA's performance award policy clearly states the purpose of these awards--specifically, that they are to recognize sustained performance of providers beyond normal job requirements as reflected in the provider's most recent performance rating. VA policy also lists the measures, such as clinical competence, that providers' supervisors are to use to determine these providers' performance rating. VHA's oversight is inadequate to ensure that medical centers comply with performance pay and award requirements. VHA's annual consultative reviews, initiated in 2011, help medical centers comply with human resources requirements, including performance award requirements. Recently, these reviews began to also include performance pay requirements, but do not yet include a standard list of performance pay elements to review, which would be needed to ensure consistency of reviews across medical centers. 
Further, reviewers do not have the authority to require medical centers to resolve compliance problems they identify, and VHA has not formally assigned specific organizational responsibility to ensure medical centers resolve identified problems. As a result, VHA is unable to ensure that reviews consistently identify problems, and that these problems are corrected and do not recur. GAO found that two of the four medical centers visited did not always correct problems identified through these reviews. For example, a May 2011 review of one of these two medical centers found that the medical center did not conduct a formal evaluation of its performance award program, as required. A review of the same medical center about a year later found the identical problem. GAO recommended that VA clarify the performance pay policy by specifying the purpose and documentation requirements, and that VHA review performance pay goals for consistency with the purpose and improve oversight to ensure compliance. VA generally agreed with GAO's conclusions and recommendations.
|
The national park system has 376 units. These park units have over 16,000 permanent structures, 8,000 miles of roads, 1,500 bridges and tunnels, 5,000 housing units, about 1,500 water and waste systems, 200 radio systems, over 400 dams, and more than 200 solid waste operations. According to the Park Service, these facilities are valued at over $35 billion. Needless to say, the proper care and maintenance of the national parks and their supporting infrastructure is essential to the continued use and enjoyment of our great national treasures by this and future generations. However, for years Park Service officials have highlighted the agency’s inability to keep up with its maintenance needs. In this connection, Park Service officials and others have often cited a continuing buildup of unmet maintenance needs as evidence of deteriorating conditions throughout the national park system. The accumulation of these unmet needs has become commonly referred to by the Park Service as its “maintenance backlog.” The reported maintenance backlog has increased significantly over the past 10 years—from $1.9 billion in 1987 to about $6.1 billion in 1997. Recently, concerns about the maintenance backlog situation within the National Park Service, as well as other federal land management agencies, have led the Congress to provide significant new sources of funding. These additional sources of funding were, in part, aimed at helping the agencies address their maintenance problems. It is anticipated that new revenues from the 3-year demonstration fee program will provide the Park Service over $100 million annually. In some cases, the new revenues will as much as double the amount of money available for operating individual park units. In addition, funds from a special one-time appropriation from the Land and Water Conservation Fund may also be available for use by the Park Service in addressing the maintenance backlog. 
These new revenue sources are in addition to the $300 million in annual operating appropriations which are used for maintenance activities within the agency. In 1997, in support of its fiscal year 1998 budget request, the Park Service estimated that its maintenance backlog was about $6.1 billion. Maintenance is generally considered to be work done to keep assets—property, plant, and equipment—in acceptable condition. It includes normal repairs and the replacement of parts and structural components needed to preserve assets. However, the composition of the maintenance backlog estimate provided by the Park Service includes activities that go beyond what could be considered maintenance. Specifically, the Park Service’s estimate of its maintenance backlog includes not only repair and rehabilitation projects to maintain existing facilities, but also projects for the construction of new facilities. Of the estimated $6.1 billion maintenance backlog, most of it—about $5.6 billion, or about 92 percent—consists of construction projects. These projects, such as building roads and utility systems, are relatively large and normally exceed $500,000 and involve multiyear planning and construction activities. According to the Park Service, the projects are intended to meet the following objectives: (1) repair and rehabilitation; (2) resource protection issues, such as constructing or rehabilitating historic structures and trails and erosion protection activities; (3) health and safety issues, such as upgrading water and sewer systems; (4) new facilities in older existing parks; and (5) new facilities in new and developing parks. Appendix I of this testimony shows the dollar amounts and percentage of funds pertaining to each of the project objectives. The Park Service’s list of projects in the construction portion of the maintenance backlog reveals that over 21 percent, or $1.2 billion, of the $5.6 billion is for new facilities. 
We visited four parks to review the projects listed in the Park Service’s maintenance backlog estimates for those parks and found that the estimates included new construction projects as part of the backlog estimate. For example: Acadia National Park’s estimate included $16.6 million to replace a visitor center and construct a park entrance. Colonial National Historical Park included $24 million to build a Colonial Parkway bicycle and walking trail. Delaware Water Gap National Recreation Area included $19.2 million to build a visitor center and rehabilitate facilities. Rocky Mountain National Park included $2.4 million to upgrade entrance facilities. While we do not question the need for any of these facilities, they are directed at either further development of a park or modifications of and improvements to existing facilities in parks to meet the visions that park managers wish to achieve for their parks. These projects are not aimed at addressing the maintenance of existing facilities within the parks. As a result, including these types of projects in the maintenance backlog contributes to confusion about the actual maintenance needs of the national park system. In addition to projects clearly listed as new construction, other projects on the $5.6 billion list that are not identified as new construction, such as repair and rehabilitation of existing facilities, also include substantial amounts of new construction. Our review of the projects for the four parks shows that each included large repair and rehabilitation projects that contained tasks that would not be considered maintenance. These projects include new construction for adding, expanding, and upgrading facilities. For example, at Colonial National Historical Park, an $18 million project to protect Jamestown Island and other locations from erosion included about $4.7 million primarily for new construction of such items as buildings, boardwalks, wayside exhibits, and an audio exhibit. 
Beyond construction items, the remaining composition of the $6.1 billion backlog estimate—about 8 percent, or about $500 million—consists of smaller maintenance projects that include such items as rehabilitating campgrounds and trails and repairing bridges, and other items that recur on a cyclic basis, such as reroofing or repainting buildings. Excluded from the Park Service’s maintenance backlog figures is the daily, park-based operational maintenance to meet routine park needs, such as janitorial and custodial services, groundskeeping, and minor repairs. The Park Service compiles its maintenance backlog estimates on an ad hoc basis in response to requests from the Congress or others; it does not have a routine, systematic process for determining its maintenance backlog. The January estimate of the maintenance backlog—its most recent estimate—was based largely on information that was compiled over 4 years ago. This fact, as well as the absence of a common definition of what should be included in the maintenance backlog, contributed to an inaccurate and out-of-date estimate. Although documentation showing the maintenance backlog estimate of $6.1 billion was dated January 1997, for the most part, the Park Service’s data were compiled on the basis of information received from the individual parks in December 1993. A Park Service official stated that the 1993 data were updated by headquarters to reflect projects that had been subsequently funded during the intervening years. However, at each of the parks we visited in preparing for today’s testimony, we found that the Park Service’s most recent maintenance backlog estimate for each of the parks was neither accurate nor current. The four parks’ estimates of their maintenance backlog needs ranged from about $40 million at Rocky Mountain National Park to $120 million at Delaware Water Gap National Recreation Area. 
Our analysis of these estimates showed that they varied from the headquarters estimates by about $3 million and $21 million, respectively. The differences occurred because the headquarters estimates were based primarily on 4-year-old data. According to officials from the four parks, they were not asked to provide specific updated data to develop the 1997 backlog estimate. The parks’ estimates, based on more current information, included such things as updated lists reflecting more recent projects, modified scopes, and more up-to-date cost estimates. For example, Acadia’s estimate to replace the visitor center and construct a park entrance has been reduced from $16.6 million to $11.6 million; the Delaware Water Gap’s estimate of $19.2 million to build a visitor center and rehabilitate facilities has been reduced to $8 million; and Rocky Mountain’s $2.4 million project to upgrade an entrance facility is no longer a funding need because it is being paid for through private means. In addition, one of the projects on the headquarters list had been completed. The Park Service has no common definition as to what items should be included in an estimate of the maintenance backlog. As a result, we found that officials we spoke to in Park Service headquarters, two regional offices, and four parks had different interpretations of what should be included in the backlog. In estimating the maintenance backlog, some of these officials would exclude new construction; some would include routine, park-based maintenance; and some would include natural and cultural resource management and land acquisition activities. In addition, when the Park Service headquarters developed the maintenance backlog estimate, it included both new construction and maintenance-type items in the estimate. For example, nonmaintenance items, such as adding a bike path to a park where none now exists or building a new visitor center, are included. 
The net result is that the maintenance backlog estimate is not a reliable measure of the maintenance needs of the national park system. In order to begin addressing its maintenance backlog, the Park Service needs (1) accurate estimates of its total maintenance backlog and (2) a means for tracking progress so that it can determine the extent to which its needs are being met. Currently, the agency has neither of these things. Yet, the need for them is more important now than ever before because in fiscal year 1998, over $100 million in additional funding is being made available for the Park Service that it could use to address its maintenance needs. This additional funding comes from the demonstration fee program. Also, although the exact amount is not yet known, additional funding may be made available from the Land and Water Conservation Fund. Park Service officials told us that they have not developed a precise estimate of the total maintenance backlog because the needs far exceed the funding resources available to address them. In their view, the limited funds available to address the agency’s maintenance backlog dictate that managers focus their attention on identifying only the highest priority projects on a year-to-year basis. Since the agency does not focus on the total needs but only on priorities for a particular year, it cannot determine whether the maintenance conditions of park facilities are improving or worsening. Furthermore, without information on the total maintenance backlog, it is difficult to fully measure what progress is being made with available resources. The recent actions by the Congress to provide the Park Service with substantial additional funding, which could be used to address its maintenance backlog, further underscore the need to ensure that available funds are being used to address those needs and to show progress in improving the conditions of the national park system. 
The Park Service estimates that the demonstration fee program could provide over $100 million a year to address the parks’ maintenance and other operational needs. In some parks, revenue from new and increased fees could as much as double the amount of money that has been previously available for operating individual park units. In addition to the demonstration fee program, fiscal year 1998 was the first year that appropriations from the Land and Water Conservation Fund could be used to address the maintenance needs of the national park system. However, according to Park Service officials, the exact amount provided from this fund for maintenance will not be determined until sometime later this month. Two new requirements that have been imposed on the Park Service, and other federal agencies, should, if implemented properly, help the agency to better address its maintenance backlog. These new requirements involve (1) changes in federal accounting standards and (2) the Government Performance and Results Act (the Results Act). Recent changes in federal accounting standards require federal agencies, including the Park Service, to develop better data on their maintenance needs. The standards define deferred maintenance and require that it be disclosed in agencies’ financial statements beginning with fiscal year 1998. To implement these standards, the Park Service is part of a facilities maintenance study team that has been established within the Department of the Interior to provide the agency with deferred maintenance information as well as guidance on standard definitions and methodologies for improving the ongoing accumulation of this information. In addition, as part of this initiative, the Park Service is doing an assessment of its assets to show whether they are in poor, fair, or good condition. This condition information is essential to providing the Park Service with better data on its overall maintenance needs. 
Furthermore, it is important to point out that as part of the agency’s financial statements, the accuracy of the Park Service’s deferred maintenance estimates will be subjected to annual audits. This audit scrutiny is particularly important given the long-standing concerns reported by us and others about the validity of the data on the Park Service’s maintenance backlog estimates. The Results Act should also help the Park Service to better address its maintenance backlog. In carrying out the Results Act, the Park Service is requiring its park managers to measure progress in meeting a number of key goals, including whether and to what degree the conditions of park facilities are being improved. If properly implemented, this requirement should make the Park Service as a whole, as well as individual park managers, more accountable for how it spends maintenance funds to improve the condition of park facilities. Once in place, this process should permit the Park Service to better demonstrate what is being accomplished with its funding resources. This is an important step in the right direction since our past work has shown that the Park Service could not hold park managers accountable for their spending decisions because they did not have a good system for tracking progress and measuring results. Mr. Chairman, this completes my statement. I would be happy to answer questions from you or any other Members of the Subcommittee.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by faxing (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
|
Pursuant to a congressional request, GAO discussed: (1) the Park Service's estimate of the maintenance backlog and its composition; (2) how the agency determined the maintenance backlog estimate and whether it is reliable; and (3) how the agency manages the backlog. GAO noted that: (1) the Park Service's estimate of its maintenance backlog does not accurately reflect the scope of the maintenance needs of the park system; (2) the Park Service estimated, as of January 1997, that its maintenance backlog was about $6.1 billion; (3) most of this amount--about $5.6 billion, or about 92 percent--was construction projects; (4) of this amount, over 21 percent or $1.2 billion was for the construction of new facilities; (5) while GAO does not question the need for these facilities, including these kinds of new construction projects or projects that expand or upgrade park facilities in an estimate of the maintenance backlog is not appropriate because it goes beyond what could reasonably be viewed as maintenance; (6) as a result, including these projects in the maintenance backlog contributes to confusion about the actual maintenance needs of the national park system; (7) the Park Service's estimate of its maintenance backlog is not reliable; (8) its maintenance backlog estimates are compiled on an ad hoc basis in response to requests from Congress or others; (9) the agency does not have a routine, systematic process for determining its maintenance backlog; (10) the most recent estimate, as of January 1997, was based largely on information that was compiled by the Park Service over 4 years ago and has not been updated to reflect changing conditions in individual park units; (11) this fact, as well as the absence of a common definition of what should be included in the maintenance backlog, contributes to an inaccurate and out-of-date estimate; (12) the Park Service does not use the estimated backlog in managing park maintenance operations; (13) as such, it has not specifically 
identified its total maintenance backlog; (14) since the backlog far exceeds the funding resources being made available to address it, the Park Service has focused its efforts on identifying the highest-priority maintenance needs; (15) however, given that substantial additional funding resources can be used to address maintenance--over $100 million starting in fiscal year (FY) 1998--the Park Service should more accurately determine its total maintenance needs and track progress in meeting them so that it can determine the extent to which they are being met; (16) the Park Service is beginning to implement the legislatively mandated management changes in FY 1998; and (17) these changes could, if properly implemented, help the Park Service develop more accurate data on its maintenance backlog and track progress in addressing it.
|
The Economic Development Administration’s (EDA) primary focus is to help regions experiencing long-term economic distress or sudden economic dislocation (brought about by, for example, plant closure or natural disaster) through public infrastructure investments, technical assistance and research, and the development and implementation of comprehensive economic development strategies. EDA’s two largest grant assistance programs are Public Works and Economic Adjustment Assistance (EAA). The Public Works program is used to finance infrastructure-related activities that support job creation, such as water and sewer facilities, industrial parks and business centers, broadband facilities, port and rail improvements, and business incubator facilities. The EAA program is used to fund strategic planning and implementation activities, including the same activities eligible under Public Works grants. The Public Works and Economic Development Act of 1965 (PWEDA) establishes criteria for the (1) types of entities that are eligible for assistance; (2) economic distress characteristics of geographic areas in which projects can be located, with specific thresholds for per capita income and unemployment rates; and (3) factors EDA must consider when awarding grants (see table 1). While proposed grant projects generally must be located in an area that is experiencing at least one of the distressed circumstances, applicants are not required to be physically located in these areas. EDA funds for the Public Works and EAA programs are publicly announced in annual Economic Development Assistance Program (EDAP) Federal Funding Opportunities (FFO) and competitively awarded to eligible entities, as shown in table 1. Grant-making authority is decentralized among the agency’s six regional offices, whose primary responsibility is to review requests for EDA funding, provide technical assistance, and administer EDA grants. EDA uses multiple levels of review to identify competitive economic development investment proposals: Pre-application review (optional). 
Regional office staff provides technical assistance to applicants prior to formal application. Technical review. Regional office staff assess the timeliness and completeness of applications, including whether the applications meet the eligible entity and economic distress criteria. Project analysis review. A regional representative assesses complete applications’ responsiveness to the specific requirements set forth in the relevant FFO. Investment Review Committee review. After the technical and project analysis reviews, members of the Investment Review Committee (IRC) prioritize competitive applications that merit consideration for an award. Each office maintains an IRC that must include regional counsel, a specialist in environmental issues, and a representative from EDA headquarters. The final IRC voting panel consists of at least four voting members, excluding the regional counsel and Regional Director. According to EDA’s 2012 operations manual, IRC recommendations must be developed with an aim of ensuring the balance of EDA’s grant portfolio both in terms of geography and investment type, and recommendations must be balanced against the amount of funding each office is allotted for Public Works and EAA grants. Grant officer’s review. Each office’s regional director serves as the grant officer and is responsible for reviewing the IRC recommendations and making final award decisions. EDA’s 2012 operations manual requires grant officers to document in writing the reasons behind decisions that disagree with the IRC’s recommendation. EDA staff also apply three types of criteria during review phases, as outlined by the EDAP FFO: Economic distress criteria. During the technical review phase, regional office staff apply economic distress criteria established by PWEDA to determine whether applications meet regulatory eligibility requirements. Evaluation criteria. 
During the project analysis review, regional office staff assess each application’s responsiveness to evaluation criteria set forth in the relevant FFO. Selection criteria. The IRC uses selection criteria to select the pool of competitive applications from which grants will be recommended for an award. Table 2 describes these criteria in detail. According to EDA headquarters, the criteria used to review applications for Public Works and EAA grant funds have generally reflected EDA’s investment priorities since the agency was established in 1965 and, thus, have not changed. However, officials said that the process for applying the criteria may change from time to time based upon, for example, a suggestion from Commerce’s Office of General Counsel or a regional office. The 2011 EDAP FFO described two grant review procedures that were not part of EDA’s 2007 EDAP FFO. Specifically, in fiscal year 2007, applications were only required to meet one out of five evaluation criteria, and EDA staff were not required to prioritize or rank competitive applications to help inform the grant officer’s award decision. However, in fiscal year 2011, EDA policy required staff to apply specific weights, as shown in table 2, to five categories of evaluation criteria: (1) national strategic priorities; (2) economically distressed and underserved communities; (3) return on investment; (4) collaborative regional innovation; and (5) public/private partnerships. Next, staff were to categorize each grant application as “not competitive,” “competitive,” or “highly competitive” based upon the merit review. IRC panels were then required to make a ranked list of recommendations from the pool of “competitive” and “highly competitive” applications. The ranked list of applications would be considered by the grant officer for final grant award decisions. Table 2 illustrates the criteria and procedures used to assess Public Works and EAA grants in fiscal years 2007 and 2011. 
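The fiscal year 2011 merit-review flow described above (weight five criterion categories, bucket each application as "not competitive," "competitive," or "highly competitive," then rank the competitive pool for the grant officer) can be sketched as follows. This is an illustrative sketch only: the five criterion names come from the report, but the weights, the 0-100 rating scale, and the category thresholds are placeholder assumptions, since the actual weights appear in table 2, which is not reproduced here.

```python
# Placeholder weights for the five FY 2011 evaluation criterion categories.
# The real weights are set in the EDAP FFO (table 2 of the report).
CRITERIA_WEIGHTS = {
    "national_strategic_priorities": 0.25,
    "distressed_underserved_communities": 0.25,
    "return_on_investment": 0.20,
    "collaborative_regional_innovation": 0.15,
    "public_private_partnerships": 0.15,
}

def merit_score(ratings):
    """Weighted sum of per-criterion ratings (assumed scale: 0-100 each)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

def categorize(score, competitive=60, highly=80):
    """Bucket an application; the two score thresholds are assumptions."""
    if score >= highly:
        return "highly competitive"
    if score >= competitive:
        return "competitive"
    return "not competitive"

def rank_for_irc(applications):
    """Rank the competitive pool (highest score first) for the grant
    officer, dropping applications categorized as not competitive."""
    scored = [(app_id, merit_score(r)) for app_id, r in applications.items()]
    eligible = [(a, s) for a, s in scored if categorize(s) != "not competitive"]
    return sorted(eligible, key=lambda pair: pair[1], reverse=True)
```

For example, `rank_for_irc({"A": ratings_a, "B": ratings_b})` returns the ranked `(application, score)` pairs the IRC would forward; under EDA's actual process this ranked list only informs the grant officer, who retains final award authority.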
EDA has a history of inconsistently documenting the results of committee meetings in which proposed projects are recommended for funding. In December 2000, the Commerce Office of Inspector General (OIG) reported that it found inconsistencies in how EDA’s regional offices documented the project review committee (now referred to as the IRC) process and recommended that EDA keep better documentation of actions by these committees. We also found inconsistencies in how EDA’s regional offices documented the results of IRC meetings, based on our review of a nongeneralizable sample of the IRC meeting documentation for 64 Public Works and EAA projects EDA funded in fiscal years 2007 and 2011. For example, some IRC meeting documentation we reviewed provided limited information on the proposed projects and included only a discussion of the number of jobs and the amount of private investment that the project would generate. In contrast, we found other instances of IRC meeting documentation that provided detailed information on proposed projects reviewed, including positives and negatives associated with the project and recommendations to improve the project. EDA officials told us that the agency had policies and procedures in place for its regional offices to follow for awarding grants in fiscal years 2007 and 2011, but that copies of the 2007 policies and procedures could not be located and that the same policies and procedures effective in fiscal year 2010 were also applicable in 2011. Our review of EDA’s 2010 policies and procedures found that there was no requirement that regional offices document IRC meeting discussions in a consistent manner. Similarly, EDA officials said that, prior to fiscal year 2012, discussions were not consistently documented. In March 2012, EDA implemented new procedures in response to the Commerce OIG’s December 2000 recommendations. 
According to the new procedures, regional offices were required to complete an IRC record for each proposed project reviewed using a standard template (see fig. 1). EDA guidance notes that the IRC should address several items using the template, including the (1) pros and cons of each project; (2) environmental/legal issues, if any; (3) project’s fit with the agency’s investment priorities; (4) notes on the reasonableness of reported outcome data on jobs and private investment; and (5) recommendations for further action. Despite the new guidance and procedures, we found that EDA regional offices did not consistently complete the required IRC template in fiscal year 2012. We reviewed a random and generalizable sample of IRC record templates for 74 Public Works and EAA projects that EDA regional offices funded in fiscal year 2012, and we estimate that the regional offices filled out the template in its entirety for only 46 percent of all projects funded that year. We found that one of EDA’s regional offices did not complete an IRC record template for any of the 20 projects we reviewed from this office. In addition, we estimate that 34 percent of the projects across all six regions did not include a discussion of pros and cons as required by EDA’s new procedures and 35 percent did not indicate the investment priority or priorities the projects were designed to address. Further, EDA has yet to implement a means of monitoring the offices’ use of the IRC template or assessing its effectiveness. Standards for Internal Control in the Federal Government requires that all transactions and other significant events be clearly documented and the documentation be readily available for examination. Such documentation provides an entity with reasonable assurance that its operations are conducted effectively and efficiently and in compliance with applicable laws and regulations. 
Until EDA ensures that its regional offices consistently and fully complete an IRC record template for all proposed projects considered for funding, EDA will not have adequate assurance that its funding decisions are consistent and transparent. Based on our analysis of the economic distress characteristics of the counties where EDA funded projects under its Public Works and EAA programs in fiscal years 2007 and 2011, we found that counties where EDA funded projects generally had lower per capita income and higher unemployment rates than national and state averages. Furthermore, some projects that EDA funded under the Public Works and EAA programs in fiscal years 2007 and 2011 had a special need, as defined by EDA. In addition, we found that for fiscal years 2007 and 2011, more than half of the projects EDA funded were located in counties that were part of nonrural areas, or areas with an urban center of more than 50,000 people. Based on our methodology and data sources, counties where EDA funded projects in fiscal years 2007 and 2011 under its Public Works and EAA programs generally had lower per capita income and higher unemployment rates compared with national and state averages. As previously mentioned, we did not use the same procedures that EDA staff generally use to calculate per capita income and unemployment, and our findings are not intended to replicate EDA’s eligibility determinations. As figure 2 illustrates, in fiscal years 2007 and 2011, our analysis showed that EDA funded a number of projects in counties that had a per capita income at or less than the national average, particularly under the Public Works program. For example, 121 of the 135 projects (90 percent) EDA funded under the Public Works program in fiscal year 2007 and 67 of the 80 projects (84 percent) in fiscal year 2011 were located in counties with per capita income at or less than the national average. 
Under the EAA program, 89 of the 116 projects (77 percent) EDA funded in fiscal year 2007 and 93 of the 150 projects (62 percent) it funded in fiscal year 2011 were located in counties that had a per capita income at or less than the national average. EDA also funded a number of projects in counties that had a per capita income at or less than the state average, as figure 3 shows. Specifically, under the Public Works program, 119 of the 135 projects (88 percent) EDA funded in fiscal year 2007 and 68 of the 80 projects (85 percent) EDA funded in fiscal year 2011 were located in counties with per capita income at or less than the state average. Under the EAA program, 86 of the 116 projects EDA funded (74 percent) in fiscal year 2007 and 87 of the 150 projects (58 percent) EDA funded in fiscal year 2011 were located in counties with per capita income at or less than the state average. Our analysis also found that the 24-month unemployment rates of EDA-funded project counties were typically higher than national unemployment rates. In fiscal year 2007, 89 (66 percent) of 135 Public Works project counties and 85 (71 percent) of 120 EAA project counties had 24-month unemployment rates that met or exceeded the national average (see fig. 4). In fiscal year 2011, fewer Public Works and EAA project counties (49 projects, or 60 percent, and 76 projects, or 50 percent, respectively) had unemployment rates meeting or exceeding the national average. Our comparisons of the unemployment rate in EDA-funded counties to the state unemployment rate produced largely the same results as the national comparisons. Specifically, EDA funded 86 projects (64 percent) in fiscal year 2007 and 49 projects (60 percent) in fiscal year 2011 under the Public Works program in counties that met or exceeded the state unemployment average.
In addition, EDA funded 70 projects (58 percent) in fiscal year 2007 and 79 projects (52 percent) in fiscal year 2011 under the EAA program in counties that met or exceeded the state unemployment average (see fig. 5). Our analysis of EDA’s data showed that EDA funded some projects located in areas with a special need (as defined by PWEDA and EDA regulations) under the Public Works and EAA programs in fiscal years 2007 and 2011 (see fig. 6). As previously discussed, EDA can determine a project to be eligible for funding based on a special need arising from actual or threatened severe unemployment or economic adjustment problems resulting from severe short-term or long-term changes in economic conditions. EDA funded a higher number of projects with a special need under its EAA program compared with its Public Works program in both fiscal years 2007 and 2011. Figure 6 also shows that, under both programs and in both fiscal years, firm closure/restructuring was the most commonly met special need criterion. In addition, EDA funded several projects under its EAA program that met the natural resource depletion special need criterion in fiscal year 2007 and the disaster or emergency special need criterion in fiscal year 2011. According to EDA officials, EDA received a large amount of supplemental no-year funding (additional funding that is available until expended and not tied to a particular fiscal year) in fiscal years 2008 and 2009 that could be used for disaster-related activities. Therefore, many of the special need projects eligible based on disasters or emergencies and funded in fiscal year 2011 could have been identified in fiscal years 2008 or 2009. 
In addition, these officials stated that the economic conditions in fiscal year 2011 could have accounted for part of the increased number of special-need-eligible projects in that year because the national unemployment rate was higher than usual and the national per capita income level was lower than usual, making it more difficult for individual counties to meet the EDA thresholds for these measures. With the exception of the Public Works program in fiscal year 2007, at least half of the funded Public Works and EAA projects in fiscal years 2007 and 2011 were located in counties that were part of nonrural areas. According to the U.S. Department of Agriculture Economic Research Service’s Rural-Urban Continuum Codes, 52 percent of all EDA grants awarded in fiscal year 2007 and 67 percent of those awarded in fiscal year 2011 funded projects in nonrural areas. By program, 45 percent of Public Works grants awarded in fiscal year 2007 and 53 percent of those awarded in fiscal year 2011 funded projects in nonrural areas, while 61 percent of EAA grants awarded in fiscal year 2007 and 75 percent of those awarded in fiscal year 2011 funded projects in nonrural areas. By comparison, 85 percent of the U.S. population lived in such nonrural areas as of the 2010 Census. A small number of projects each year were located in completely rural counties (population less than 2,500). These projects represented 7 percent of combined Public Works and EAA projects in fiscal year 2007 and 2 percent of combined projects in fiscal year 2011. By comparison, about 1.5 percent of the U.S. population lived in completely rural areas as of the 2010 Census. Figures 7 and 8 show the population density of counties where projects were funded in fiscal years 2007 and 2011, respectively. Based on our analysis of EDA-funded projects, EDA provided funding for various types of economic-development-related projects under its Public Works and EAA programs in fiscal years 2007 and 2011. 
As figure 9 shows, EDA most often funded projects in two categories under its Public Works program: infrastructure (projects that involve, among other things, constructing and repairing various modes of transportation; constructing and repairing water, sewer, gas, and electrical systems; and developing telecommunications and broadband infrastructure) and commercial and industrial (projects that involve the design, construction, demolition, or renovation of commercial buildings and industrial and business parks, including infrastructure to support the parks and financial support to existing businesses). Examples of Public Works projects from these categories that were selected from competitive applications include a grant of about $2.0 million for improving water and wastewater systems (infrastructure) in Northampton County, North Carolina, which, according to our analysis, had per capita income of 72 percent of the national average and an unemployment rate 2 percentage points higher than the national average, and a grant of about $1.2 million to expand an industrial park (commercial and industrial) in Beltrami County, Minnesota, which had per capita income of 72 percent of the national average. Figure 9 also shows that EDA most often funded projects in three categories under its EAA program: infrastructure (projects that involve, among other things, constructing and repairing various modes of transportation; constructing and repairing water, sewer, gas, and electrical systems; and developing telecommunications and broadband infrastructure); business development (projects that support entrepreneurial efforts, help businesses get started, and promote the development of new markets for existing products); and plans and research (planning and strategy development efforts for job creation and retention and projects that support research of practices, principles, and innovations that foster effective economic development strategies).
Examples of EAA projects from these categories that were selected from competitive applications include grants of about $150,000 for entrepreneurial training (business development) in Tulare County, California, which, according to our analysis, had per capita income of 68 percent of the national average; about $200,000 for the development of an oil spill recovery plan (plans and research) in Lafayette County, Louisiana, which was awarded under the special need economic distress eligibility criterion; and about $2.5 million for a road reconstruction project (infrastructure) in Washington County, Rhode Island, which was also awarded under the special need economic distress eligibility criterion. While EDA took steps in fiscal year 2012 to address long-standing issues with its documentation of IRC decisions, we found inconsistencies in regional offices’ use of the new template, including offices that did not include a discussion of the pros and cons associated with the projects being considered and one office that did not use the IRC template to document any of the discussions we sampled from its fiscal year 2012 records. Federal internal control standards require clear and accessible documentation of all program transactions and other significant events. Because EDA officials have not ensured that its regional offices fully and consistently document their IRC discussions in the template, EDA may not have adequate assurance that its funding decisions are consistent and transparent. To increase transparency in the award selection process, the Secretary of Commerce should direct the Deputy Assistant Secretary for Economic Development to develop and implement procedures to ensure that EDA regional offices consistently complete the required Investment Review Committee record template for each proposed project considered for funding. We provided a draft of this report to the Department of Commerce (Commerce) for review and comment. 
Commerce’s Economic Development Administration (EDA) provided written comments, which are presented in appendix III. EDA agreed with our recommendation and requested that we provide additional clarification on our methodology for analyzing the economic distress characteristics and distribution of the economic development grant funds sampled in our study. EDA agreed with our recommendation to develop and implement procedures to ensure that regional offices consistently complete the required Investment Review Committee (IRC) template. Further, the agency noted that it plans to implement updated grant procedures and operations manuals in fiscal year 2014. The agency stated that these manuals should more clearly delineate requirements for proper documentation of the IRC meetings and other grant review requirements. The agency also stated that it plans to provide training to EDA regional staff on both new manuals. EDA officials also commented on our use of different data sources than EDA staff are required to use to determine the potential eligibility of funded projects. We note throughout the report that we did not use the same procedures that EDA staff generally use to calculate per capita income and unemployment, and that our findings are not intended to replicate EDA’s eligibility determinations. We added language to clarify the difference between the American Community Survey’s and Bureau of Economic Analysis’ measures of per capita income. In addition, we deleted references to statutory thresholds to further distinguish our analysis from a compliance review. EDA officials also commented on our use of the Department of Agriculture’s (USDA) Rural-Urban Continuum Codes for rural and nonrural classifications, compared to EDA’s use of the Census definitions of these terms. 
We used Rural-Urban Continuum Codes to describe the population densities of the counties that received grants because, together, these nine codes provide a more nuanced picture of EDA’s grant distribution across counties, while the Census classifies areas as either rural or urban. We agree that using the Census definitions of rural and urban would have provided a different perspective on the distribution of EDA grants to these areas, and we have clarified EDA’s methods and our rationale for using USDA’s classification system in the report. Finally, EDA officials commented on our finding that the agency funded more projects under the special need criterion in fiscal year 2011 than in fiscal year 2007, and commented that this was largely because the national unemployment rate was high during the recession at the time, making it more difficult for communities to meet the EDA thresholds for high unemployment or low per capita income. We have included these observations in the report. We are sending copies of this report to appropriate congressional committees and to the Secretary of the Department of Commerce. In addition, this letter will be made available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9345 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this letter. GAO staff who made key contributions to this report are listed in appendix IV. 
This report examines, for the Economic Development Administration’s (EDA) Public Works and Economic Development (Public Works) and Economic Adjustment Assistance (EAA) programs, (1) the extent to which EDA documented grant selection decisions; (2) the indicators of economic distress for counties with EDA-funded projects and how these funds have been distributed to rural and nonrural areas; and (3) the types of projects that have been funded by these programs. EDA awarded more than 1,500 grants under the Public Works and EAA grant programs in fiscal years 2006 through 2012 (the period of interest in the congressional mandate). For most of our review, we focused on fiscal year 2007 because EDA awarded the largest number of grants in this year compared to the rest of the 7-year period and on fiscal year 2011 because it was the most recent year during the period in which we considered data to be reliable. We did not focus on fiscal year 2012 data because some of the data from that year were not considered reliable due to an information technology event that shut down EDA’s grant management system for several months. However, to assess the extent to which EDA staff followed documentation requirements that the agency introduced in fiscal year 2012, we used a list of 204 projects from that year provided by EDA to randomly select a generalizable sample. We found this 2012 data sufficiently reliable for the purpose of drawing a sample (see full description of this sample below). To determine which fiscal year 2007 and 2011 EDA projects to review, we obtained extracts from EDA’s Operations Planning and Control System, its electronic grants management system. 
These data extracts contained characteristics of each funded project for fiscal years 2007 and 2011, including the project number; relevant program; applicant name; project description; project state; project county; amount of EDA funding; total project funding; rural classification of the project location; whether the project was eligible for funding based on per capita income, unemployment, a special need, or other criteria; the maximum percentage of program costs the project was eligible to receive from EDA; and an open-ended field for a description of the geographic area. The original data that EDA provided included 751 total projects from fiscal year 2007 and 711 total projects from fiscal year 2011. These projects reflected all of EDA’s grant programs. We removed all projects except those funded under the Public Works and EAA programs, leaving 267 and 237 projects, respectively, from fiscal years 2007 and 2011. We excluded a small number of projects from all or some of our analyses as described below. To describe the extent to which EDA documented grant selection decisions, we selected a nonprobability sample of 72 grant awards—36 from fiscal year 2007 and 36 from fiscal year 2011—across EDA’s six regions. We received and examined 64 of the 72 requested records (33 from fiscal year 2007 and 31 from fiscal year 2011) of EDA grant review meeting minutes associated with these projects to summarize factors associated with funding decisions. The results of our analyses of the minutes from fiscal years 2007 and 2011 cannot be generalized to all EDA grant awards in these two or any other years. In addition, we reviewed a random and generalizable sample of 74 grant awards from fiscal year 2012 to assess the extent to which EDA staff followed newly required procedures for documenting grant review meetings that year. 
To select the probability sample, we started with a list of all 224 Public Works and EAA-funded projects from fiscal year 2012 and removed 20 projects that were not subject to EDA’s standard grant review process. From the list of 204 projects, we randomly selected and requested grant review records for 76 projects, and we received 74 records. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Our analysis of the fiscal year 2012 minutes provides a generalizable perspective (with a margin of error of plus or minus 10 percentage points) on the extent to which EDA staff followed the newly required documentation procedures in fiscal year 2012. As noted earlier, there were concerns over the reliability of fiscal year 2012 data. We created the sample frame out of the 2012 data extract that EDA provided. Later, we discovered that EDA listed 7 projects in its fiscal year 2012 annual report that were not included in the data extract and included 2 projects in the data extract that were not in the annual report. However, after discussion with the agency, we have a reasonable level of assurance that the fiscal year 2012 data that we used for our sampling purposes were sufficiently reliable. We also reviewed operational guidance that identified the eligibility and evaluation criteria for EDA staff to apply in determining projects to fund and conducted structured interviews with the six regional offices (Atlanta, Austin, Chicago, Denver, Philadelphia, and Seattle) to enable comparison of regional grant review and documentation practices. 
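As a rough illustration of the sampling precision cited above, the standard margin-of-error calculation for a proportion estimated from a simple random sample drawn without replacement shows why 74 records from a population of 204 projects support estimates within roughly plus or minus 10 percentage points. This is a textbook sketch; GAO's actual sampling computations may differ in detail.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95 percent margin of error for an estimated proportion
    from a simple random sample of n drawn without replacement from a
    population of N. p=0.5 gives the most conservative (widest) margin."""
    se = math.sqrt(p * (1 - p) / n)            # standard error of a proportion
    fpc = math.sqrt((N - n) / (N - 1))         # finite population correction
    return z * se * fpc

# 74 completed records sampled from the 204 eligible fiscal year 2012 projects
moe = margin_of_error(n=74, N=204)
print(round(100 * moe, 1))  # roughly 9-10 percentage points
```

The finite population correction matters here: sampling more than a third of the population narrows the margin relative to an infinite-population formula.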
To describe the distress characteristics for counties with EDA-funded projects in fiscal years 2007 and 2011, we determined the 12-month per capita income (PCI) and 24-month unemployment rate at the county level for each funded grant from fiscal years 2007 and 2011 (based on the EDA-defined date that the project record was created) using Bureau of Economic Analysis (BEA) and Bureau of Labor Statistics (BLS) data, respectively. We used the same data sources to calculate the national and state per capita income and unemployment rates for the same periods for each funded grant and compared them to the county-level data. We chose the 12-month and 24-month periods because EDA's authorizing statute requires EDA staff to obtain "the most recent data available" for per capita income and the most recent 24-month period for which data are available for unemployment data. To determine the appropriate time period for the data, we referred to the date that the project record was created in EDA's electronic grants management system and used the BEA and BLS data from the previous 1 or 2 calendar years, respectively. We did not use the same procedures that EDA staff generally use to calculate per capita income and unemployment, and our findings are not intended to replicate EDA's eligibility determinations. Some EAA project counties did not have relevant per capita income or unemployment data from the BEA or the BLS, respectively. In these cases, we dropped the projects from our analysis of per capita income or unemployment, as appropriate. Specifically, for 2007, we excluded 1 Public Works project and 4 EAA projects from the PCI analysis and 3 Public Works projects and 2 EAA projects from the unemployment analysis. For 2011, we excluded 2 Public Works projects and 3 EAA projects from the PCI analysis and 1 Public Works project and 1 EAA project from the unemployment analysis.
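The data-period selection described above (the previous calendar year for BEA per capita income and the previous two calendar years for BLS unemployment, keyed to the date the project record was created) can be expressed as a small helper. This is a sketch of the stated approach, not the analysis code actually used.

```python
from datetime import date

def data_years(record_created: date):
    """Given the date a project record was created in EDA's grants
    management system, return (a) the single calendar year used for the
    BEA per capita income comparison and (b) the two calendar years
    forming the 24-month BLS unemployment window, per the approach
    described in the methodology above."""
    pci_year = record_created.year - 1
    unemployment_years = (record_created.year - 2, record_created.year - 1)
    return pci_year, unemployment_years
```

For example, a project record created in mid-2011 would be compared against calendar year 2010 per capita income and the 2009-2010 unemployment window.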
These exclusions resulted in final counts of 135 Public Works projects and 116 EAA projects for the fiscal year 2007 per capita income analysis, 80 Public Works projects and 150 EAA projects for the fiscal year 2011 per capita income analysis, 135 Public Works projects and 120 EAA projects for the fiscal year 2007 unemployment analysis, and 81 Public Works projects and 152 EAA projects for the fiscal year 2011 unemployment analysis. We also used EDA's data to identify and describe Public Works and EAA projects that were funded based upon a special need in fiscal years 2007 and 2011. To describe the distribution of EDA Public Works and EAA funds among rural and nonrural areas in fiscal years 2007 and 2011, we used the U.S. Department of Agriculture Economic Research Service's (ERS) Rural-Urban Continuum Codes to identify the population density at the county level for each funded grant. These codes range from 1 (counties in metropolitan areas of 1 million or more people) to 9 (counties that are completely rural or less than 2,500 people, not adjacent to a metropolitan area). ERS regards counties falling into codes 1 through 3 as metropolitan ("nonrural") and those with codes 4 through 9 as nonmetropolitan ("rural"). We determined the proportion of grants awarded in nonrural versus rural areas in accordance with these definitions. We excluded one Public Works project from the population density analysis for each of the two fiscal years because Rural-Urban Continuum Codes were not available for the relevant counties. In fiscal years 2007 and 2011, there were 28 and 15 EDA projects, respectively, that EDA described as serving multiple counties. Three of the fiscal year 2007 multicounty projects only listed one county, so we treated these as single county projects.
We completely removed 7 of the multicounty projects from our fiscal year 2007 sample and 2 from our fiscal year 2011 sample because the data extracts did not include the names of the relevant counties, which were necessary to obtain per capita income, unemployment, and rural classification data. For the multicounty projects for which the individual county names were available, we calculated a weight for each county by dividing its population for the most recent year for which data were available (using Census Bureau population estimates) by the total population for all counties in the project. We applied the population weight to each county’s 12-month per capita income or 24-month unemployment rate to arrive at an average per capita income or unemployment rate for the entire project. With regard to the Rural-Urban Continuum Codes for the multicounty projects, we determined that a weighted average would not be appropriate to apply to categorical data. Instead, we applied decision rules that resulted in the exclusion of 18 projects (10 projects from fiscal year 2007 and 8 projects from fiscal year 2011) from our rural classification analysis and assigned one Rural-Urban Continuum Code to each of the remaining multicounty projects. Specifically, for each unique project serving two or more counties, we applied the following decision rules. First, if all of the counties in a multicounty project had the same code, we assigned that code to the entire project. Second, if there were only 2 codes and they were contiguous (for example, all 1s and 2s) we assigned the code that was most frequent to the entire project. Third, when there were 2 or more noncontiguous codes, if one code occurred two-thirds of the time or more, we assigned that code to the entire project. Finally, if none of the first three rules applied, we excluded the project from the rural classification analysis. 
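The population-weighted averaging and the four decision rules described above can be rendered as follows. This is an illustrative sketch of the stated rules, not the actual analysis code; in particular, a tie between two contiguous codes is not addressed in the text and is resolved arbitrarily here.

```python
from collections import Counter

def weighted_rate(counties):
    """Population-weighted average of a county-level rate (per capita
    income or 24-month unemployment) for a multicounty project.
    counties: list of (population, rate) pairs."""
    total = sum(pop for pop, _ in counties)
    return sum(pop / total * rate for pop, rate in counties)

def project_rucc(codes):
    """Apply the four decision rules described above to assign one
    Rural-Urban Continuum Code to a multicounty project, or return None
    to exclude the project from the rural classification analysis."""
    counts = Counter(codes)
    distinct = sorted(counts)
    if len(distinct) == 1:
        return distinct[0]                  # rule 1: all counties share a code
    if len(distinct) == 2 and distinct[1] - distinct[0] == 1:
        return counts.most_common(1)[0][0]  # rule 2: two contiguous codes
    if counts.most_common(1)[0][1] >= (2 / 3) * len(codes):
        return counts.most_common(1)[0][0]  # rule 3: one code >= two-thirds
    return None                             # rule 4: exclude the project
```

For instance, a three-county project with codes 1, 4, and 7 falls through all three assignment rules and would be excluded, while one with codes 1, 5, 5, and 5 would be assigned code 5 under the two-thirds rule.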
To describe the types of projects that EDA funded through its Public Works and EAA programs in fiscal years 2007 and 2011, we categorized EDA’s descriptions of funded projects using GAO-defined project categories. Specifically, in a prior GAO report, we identified nine project categories for the Public Works and EAA programs, among other federal economic development programs. One staff member independently categorized the fiscal year 2007 and 2011 projects, another staff member reviewed the categorizations, and they addressed any discrepancies with the input of a third-party reviewer. After reviewing all project descriptions, we condensed the original nine categories into six major categories, as well as an “all other” category. (See app. II for a description of project categories.) For projects for which the EDA project descriptions were unclear, we conducted Internet research and obtained EDA input on 31 projects to complete our categorizations. We assessed the reliability of the various electronic data sources used for this report, including project data from EDA’s electronic grants management database; county, state, and national per capita income and unemployment data from BEA and BLS; county population estimates from the Census Bureau; and Rural-Urban Continuum Codes from ERS. To assess reliability, we interviewed EDA officials knowledgeable about the electronic grants management database, reviewed the data dictionary, and conducted electronic testing of the data extracts against the relevant fields in the full database. We reviewed documentation from BEA, BLS, the Census Bureau, and ERS about how they compile their data. We determined that these data sources were sufficiently reliable for the purpose of describing the poverty, unemployment, and rural classifications of EDA project locations. We conducted this performance audit from April 2013 to February 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Project categories developed by GAO include the following:

Business Development: This category captures projects that support entrepreneurial efforts, help businesses get started, facilitate job placement, and promote the development of new markets for existing products.

Commercial and Industrial: This category captures projects that involve the design, construction, demolition, or renovation of commercial buildings and industrial parks, including the infrastructure designed to support those structures, as well as financial support to existing businesses.

Infrastructure: This category captures projects that involve the development of infrastructure, including constructing and repairing various modes of transportation (e.g., airports, roads, rail, and harbors) and water, sewer, gas, and electricity systems, as well as projects that support telecommunications and broadband infrastructure.

Plans and Research: This category captures planning and strategy development efforts for job creation and retention, and projects that support research of the practices, principles, and innovations that foster effective economic development strategies.

Technical Assistance: This category captures projects that provide business management and technical services, including assistance with technical issues such as disaster mitigation and recovery, and biotechnology.

Training and Education: This category captures projects that provide training and education services, including the construction of facilities and equipment needed to provide these services.

All Other Project Types: This category captures projects that do not fit appropriately into any of the other categories (e.g., performance awards, base grant adjustments, tourism, and enterprise development).

In addition to the individual named above, Marshall Hamlett, Assistant Director; Cynthia Grant; Tiffani Humble; May Lee; John McGrail; Andrew Moore; Lisa Reynolds; Beverly Ross; Max Sawicky; Jennifer Schwartz; Jena Sinkfield; Ardith Spence; and Shana Wallace made key contributions to this report.
|
The Department of Commerce's EDA provides financial assistance through grants to rural and urban communities experiencing substantial and persistent economic distress. EDA grants are intended to leverage existing regional assets to support the implementation of economic development strategies that advance new ideas and creative approaches to promote economic prosperity in distressed communities. House Report 112-463 included a mandate that GAO review grants EDA made under its Public Works and EAA programs from fiscal years 2006 through 2012. This report discusses (1) the extent to which EDA documented its funding decisions, (2) the levels of economic distress and population density of counties where EDA funded projects, and (3) the types of projects EDA funded. GAO reviewed EDA regulations and guidance; analyzed EDA project data from fiscal years 2007 (the year in the period covered by the mandate in which EDA awarded the most grants), 2011 (the most recent year with reliable data), and 2012 (the year in which EDA implemented new documentation procedures), as well as other federal data; and interviewed EDA staff. The Economic Development Administration (EDA) implemented a new procedure in fiscal year 2012 that requires its regional offices to complete a standard template to document the results of committee meetings in which proposed projects are discussed and potentially recommended for funding. However, GAO found that for the Public Works and Economic Development (Public Works) program and the Economic Adjustment Assistance (EAA) program—EDA's two largest grant assistance programs—EDA regional offices had not completed the template consistently. GAO estimated that only 46 percent of all projects recommended for funding in fiscal year 2012 under these programs were documented using the complete template. 
EDA has a history of inconsistent documentation: for example, in 2000 the Department of Commerce's Inspector General reported inconsistencies in how EDA's regional offices documented the project review process. Standards for internal control in the federal government require all transactions and significant events to be clearly documented and available for examination. Until EDA takes steps to ensure that all of its regional offices consistently and fully complete the standard template for all proposed projects considered for funding, EDA will not have adequate assurance that its funding decisions are consistent and transparent. GAO found that counties where EDA funded projects in fiscal years 2007 and 2011 under its Public Works and EAA programs generally had lower per capita income and higher unemployment rates than national and state averages. Furthermore, some projects that EDA funded under the Public Works and EAA programs in fiscal years 2007 and 2011 had an EDA-defined special need arising from actual or threatened severe unemployment or economic adjustment problems. In addition, GAO found that counties where EDA funded projects under Public Works and EAA were generally part of nonrural areas (areas with an urban center of more than 50,000 people). Specifically, in fiscal years 2007 and 2011, respectively, 52 percent and 67 percent of all of EDA's funded projects under the two programs were in nonrural areas. GAO found that various types of economic development projects received funding under Public Works and EAA in fiscal years 2007 and 2011. The most common types of projects funded under Public Works involved constructing or repairing infrastructure (such as water, sewer, gas, and electrical systems) or constructing or renovating commercial buildings and industrial and business parks. 
The most common types of projects funded under EAA involved helping businesses get started, planning and research to support job creation and retention, and constructing or repairing infrastructure. To increase transparency in the award selection process, GAO recommends that EDA develop and implement procedures to ensure that EDA regional offices consistently complete the required template for each proposed project considered for funding. EDA agreed with the recommendation and described its plans to address it.
|
NextGen is a multidecade, multiagency effort to transform the current ATC system to the next generation air transportation system by moving from relying largely on ground-based radars to using precision satellites; digital, networked communications; and an integrated weather system. Often characterized as “curb to curb,” NextGen involves every aspect of air transportation, from arrival at the airport to departure from the destination airport, and it is expected to increase the safety and enhance the capacity of the air transport system. JPDO was charged with coordinating the research activities of the federal partner agencies with the goal of developing a 20-year R&D agenda for NextGen. FAA will play the central role in implementing NextGen, since it will be responsible for acquiring, integrating, and operating the new ATC systems. Industry stakeholders will also play a key role in implementing NextGen because they are expected to develop, finance, and operate many of the new NextGen systems that will need to be installed in aircraft. FAA plans to spend roughly $5.4 billion from fiscal years 2009 through 2013 on NextGen development and capital costs. JPDO estimated that total federal spending for NextGen may range from $15 billion to $22 billion through 2025. The agency also noted that it expects system users to incur $14 billion to $20 billion in costs to equip themselves with the advanced avionics necessary to realize the full benefits of some NextGen technologies. JPDO’s authorizing legislation requires the office to create an R&D plan for the transition to NextGen. This requirement led JPDO to develop initial versions of the Concept of Operations, Enterprise Architecture, and IWP. The Concept of Operations is the fundamental planning document from which the other two documents flow. Version 2 of the Concept of Operations, issued in June 2007, describes how the NextGen system is envisioned to operate in 2025. 
Version 2 of the Enterprise Architecture, issued in July 2007, is a technical description of the NextGen system, akin to blueprints for a building. The Enterprise Architecture provides a means for coordinating among the partner agencies and private sector manufacturers, aligning relevant R&D activities, and integrating equipment. Version 0.2 of IWP describes the integrated framework needed to transition from the current system to the NextGen end state and will continually be refined and enhanced to reflect current priorities, budgets, and programs. It is JPDO’s plan for achieving NextGen. Version 1.0 of IWP is scheduled to be released at the end of this month. JPDO, FAA, and industry stakeholders have different perspectives on whether the views of industry and air traffic controllers have been adequately incorporated in NextGen planning. JPDO’s organizational structure and processes provide for industry representatives and, to a lesser extent, air traffic controllers to participate in NextGen planning, but nearly all the industry stakeholders we spoke with questioned both the meaningfulness of their involvement and the usefulness of the NextGen planning documents. Furthermore, active air traffic controllers maintain that they have not participated in NextGen development activities. According to FAA, however, their involvement will increase as NextGen efforts shift from planning to implementation. JPDO includes several organizations with industry participants, and industry representatives have reviewed and provided input to key JPDO planning documents. For example, JPDO’s NextGen Institute serves as a vehicle for incorporating the expertise of industry, state and local governments, and academia into the NextGen planning process. 
Additionally, the Institute Management Council, composed of top officials and representatives from the aviation community, including air traffic controllers, oversees the policies, recommendations, and products of the Institute and provides a means for advancing consensus positions on critical NextGen issues. JPDO also includes nine working groups, through which federal and private sector stakeholders come together to plan for and coordinate the development of NextGen technologies. JPDO created the working groups in early 2007 to replace its integrated product teams and, in part, to address concerns expressed by stakeholders about their participation. Unlike the previous teams, which were chaired by a representative from a federal agency, the working groups, which have the same members as the previous teams, are jointly led by government and industry officials. (See table 1.) JPDO expected the working groups to be more efficient and output- or product-focused than the integrated product teams. Currently, 265 industry representatives participate in JPDO. In addition, JPDO provided a draft of the Concept of Operations and IWP to industry representatives for review and comment. For example, version 0.2 of IWP was circulated to stakeholders and, according to a senior JPDO official, the office received about 1,100 stakeholder comments, which were addressed and incorporated in version 1.0 of the document. With these efforts, JPDO has sought to obtain participation from industry stakeholders and air traffic controllers in its planning activities, and we have reported that many stakeholders felt they did have an opportunity to participate. In fact, one industry stakeholder group told us that it worked closely with JPDO to help revise an early version of the Concept of Operations. However, other stakeholders said they frequently attended meetings, but were frustrated by a lack of tangible products being developed and a lack of progress being made during these meetings. 
Furthermore, 13 of 15 stakeholders who discussed the issue stated that they did not feel that their level of participation in either JPDO’s planning for or FAA’s implementation of NextGen allowed for sufficient or meaningful input toward decision making. Some stakeholders expressed concern that JPDO and FAA did not include their input in planning documents and other products. In their view, critical issues they raised are not being addressed or incorporated in NextGen plans. In particular, some stakeholders noted that planning documents were drafted by JPDO staff and then provided to them for review and comment. This approach, one industry stakeholder noted, did not take full advantage of stakeholders’ capabilities. Others were critical of FAA’s decision-making structure for implementing NextGen and indicated they felt that FAA and JPDO should lay out the broad plans and schedules for NextGen and then obtain industry input on the best ways to accomplish the technical changes for NextGen. Another stakeholder indicated it had the opportunity to provide input to FAA on decisions such as the deployment of ADS-B technology, but did not feel its input was considered by the agency. Still others felt that FAA provided sufficient briefings on NextGen activities, but allowed no opportunity for their input or comments. A number of stakeholders also expressed concerns about the usefulness of JPDO’s three planning documents and of FAA’s implementation plan for NextGen (a document previously known as the Operational Evolution Partnership and now called the NextGen Implementation Plan). Nineteen of 21 industry stakeholders who discussed the issue said that these planning documents lack the information that industry participants need for successful planning. 
Many of the stakeholders we interviewed said that while the planning documents provide a high-level view of NextGen benefits, they do not provide specific details such as a catalog of critical needs, clearly defined and prioritized intermediate objectives, and a structured plan for achieving tangible results. According to stakeholders who manufacture aviation equipment, the plans lack specific details to inform them about the types of technology they need to design for NextGen or to provide insights to market, build, and install systems that support NextGen. Some industry stakeholders further noted that the current planning does not identify all of the key research for NextGen, establish priorities for R&D, or show how to obtain those results. In addition, several stakeholders characterized the documents as long and confusing—qualities that detracted from their usefulness. We agree that the latest publicly available versions of these documents lack information that various stakeholders need. For example, the documents do not include key elements such as scenarios illustrating NextGen operations; a summary of NextGen’s operational impact on users and other stakeholders; and an analysis of the benefits, alternatives, and trade-offs that were considered for NextGen. Our review of the upcoming version of IWP confirmed that it is to have information that is lacking in the current document. According to JPDO and FAA officials, it includes schedule information that has been updated to reflect newly available information, coordination with FAA’s schedule and plans, and revisions in response to public comments received on the previous version. In addition, a senior JPDO official noted and we agree that these documents are not the appropriate place for some of the detailed information stakeholders would like and need, such as specific information on the types of technology stakeholders need to design or install. 
Active air traffic controllers are represented on JPDO’s Institute Management Council, and other controllers and aviation technicians participate in certain JPDO efforts. However, stakeholders from the National Air Traffic Controllers Association (NATCA), an FAA employee union, have indicated that although the union participates in FAA meetings and briefings related to NextGen, it does so as a recipient of information rather than an equal party in the development of NextGen. Technicians in another FAA employee union, the Professional Aviation Safety Specialists (PASS), have indicated that they do not participate in NextGen planning or development activities. Although air traffic controllers and technicians will be responsible for a major part of the installation, operations, and maintenance of the systems that NextGen will comprise, our work has shown that these stakeholders have not fully participated in the development of NextGen. Insufficient participation on the part of these employees could delay the certification and integration of new systems and result in increased costs, as we have seen in previous ATC modernization efforts. FAA officials, however, note that both unions are represented on its NextGen Management Board, a decision-making body for resolving emerging NextGen implementation issues. Furthermore, FAA has indicated that air traffic controllers, pilots, and airline operations center personnel will be a part of the extended team that is directly involved in the planning and execution of a gradual rollout of NextGen technologies and procedures in a Florida demonstration. In addition, according to FAA, these stakeholders will continue to be heavily involved in NextGen throughout its life cycle through their participation on advisory committees such as RTCA, the Air Traffic Management Advisory Committee, the Performance-Based Operations Aviation Rulemaking Committee, and the Research, Engineering and Development Advisory Committee. 
FAA and JPDO have established mechanisms for obtaining stakeholder views. However, given the large number of NextGen stakeholders and the evolution of opportunities for participation in NextGen, we believe that stakeholders will continue to differ on how adequately their views have been incorporated in NextGen planning. Our work indicates that the current version of the IWP lacks critical information and is not sufficiently “user friendly” to be effectively used to oversee and manage NextGen activities. For instance, 19 of the 21 stakeholders who discussed the issue said that the planning documents did not provide specific details such as a catalog of critical needs, clearly defined and prioritized intermediate objectives, and a structured plan for achieving tangible results. However, the next version of the plan, to be released at the end of September, is to have further details and research priorities that should be useful for NextGen oversight. According to senior JPDO officials, this next version will identify the specific operational improvements and capabilities that NextGen will incorporate and will show what policies, research, and other activities are needed to enable those improvements and capabilities, when they are needed, and what entities are responsible for them. Moreover, this version includes schedule information that has been updated to reflect newly available information, coordination with FAA schedules and plans, and public comments received on the previous version, according to JPDO and FAA officials. Our review of the upcoming version—which is an automated, searchable, user-friendly database—verified that it will have the capability to track dates and identify programs that are behind schedule, making it useful, but not sufficient, for oversight. Senior JPDO officials expect subsequent versions of IWP to include cost information and more detail on which programs are responsible for completing particular actions. 
We believe that JPDO’s upcoming version of the work plan shows progress in providing needed details and making the document more useful than earlier versions. With cost information, subsequent versions of the plan should be even more useful for NextGen oversight. The research, development, and testing activities set out in the current IWP do not provide a sufficient basis for Congress to be confident that the goals of NextGen will be achieved. However, the enhanced information that is planned for inclusion in the upcoming version will provide a firmer basis for congressional confidence. The current plan can best be viewed as a necessary but not a sufficient step in the planning and early implementation of NextGen. However, additional issues that are not part of the current plan will have to be addressed to achieve NextGen goals, such as obtaining the necessary funding, establishing the infrastructure to support the scope of needed R&D, and filling the gap that may exist between basic research and the research needed to bring technologies far enough along for transfer to industry for further development. JPDO and FAA have determined that research gaps now exist because of cuts in NASA’s aeronautical research funding and NextGen’s expanded research requirements. In the past, NASA performed a significant portion of aeronautics R&D. However, NASA’s aeronautic research budget declined from about $959 million in fiscal year 2004 to $511 million in fiscal year 2008. While NASA still plans to focus some of its research on NextGen needs, the agency has moved toward a focus on fundamental research and away from developmental work and demonstration projects. As a result, in some cases, NASA’s research focuses on developing technologies to a lower—and therefore less readily adopted—maturity level than in the past. Budget requests for FAA have increased to help provide the needed R&D funding for NextGen. 
According to FAA, the agency will spend an estimated $740 million on NextGen-related R&D during fiscal years 2009 through 2013. The administration’s proposed budget for fiscal year 2009 requests $56.5 million for FAA R&D to support the integration and implementation of NextGen programs, a substantial increase over the $24.3 million authorized for fiscal year 2008. The actual and projected increase in FAA’s overall R&D funding reflects the expected increases in NextGen research funding. (See fig. 1.) In addition, increased funding for NextGen R&D is contained in proposed legislation to reauthorize FAA, although that legislation has not been enacted. Even if FAA is authorized to receive increased R&D funding for NextGen, some observers believe that the agency lacks the R&D infrastructure to adequately conduct the developmental research needed for NextGen. According to a draft report by the Research, Engineering and Development Advisory Committee, establishing the infrastructure within FAA to conduct the necessary R&D could delay the implementation of NextGen by 5 years. Unless an adequate R&D infrastructure is in place as funds become available, the implementation of NextGen could be delayed. One critical area in which an R&D gap has been identified is the environmental impact of aviation. According to a JPDO analysis, environmental impacts will be the primary constraint on the capacity and flexibility of the national airspace system unless these impacts are managed and mitigated. FAA’s Continuous Lower Energy, Emissions, and Noise (CLEEN) initiative, in which NASA would participate as an adviser, is intended to address the gap between NASA’s fundamental research in noise reduction and the need for near-term demonstrations of technology. This program would establish a research consortium of government, industry, and academic participants that would allow for the maturation of these technologies via demonstration projects. 
In proposed legislation reauthorizing FAA, $111 million for fiscal years 2008 through 2011 may be used for a new FAA program to reduce aviation noise and emissions. This program would, over the next 10 years, facilitate the development, maturation, and certification of improved airframe technologies. The CLEEN program would be a step toward further maturing emissions and noise reduction technologies, but experts agree that the proposed funding is insufficient to achieve needed emissions reductions. While acknowledging that CLEEN would help bridge the gap between NASA’s R&D and manufacturers’ eventual incorporation of technologies into aircraft designs, aeronautics industry representatives and experts we consulted said that the program’s funding levels may not be sufficient to attain the goals specified in the proposal. According to these experts, the proposed funding levels would allow for the further development of one or possibly two projects. Moreover, in one expert’s view, the funding for these projects may be sufficient to develop the technology only to the level that achieves an emissions-reduction goal in testing, not to the level required for the technology to be incorporated into a new engine design. Although we believe that this level of funding is a step in the right direction, additional funds would permit the agency to “buy down” R&D risks—that is, the more projects that can be funded, the greater the chance that at least one of the projects will yield a product for the next stage of development. FAA recognizes the implications of the proposed funding structure for CLEEN and characterizes the program as a “pilot.” We are guardedly optimistic that the NextGen goals and timetable for quieter, cleaner, and more efficient air traffic operations can be achieved. 
The administration has requested increased funding for NextGen R&D, and FAA and JPDO recognize the need to establish an R&D infrastructure and fill any gaps that may exist between basic research and the transfer to industry for further development. Prior to May 2008, when FAA restructured ATO, JPDO reported directly to both the Chief Operating Officer (COO) of ATO and the FAA Administrator. Figure 2 shows FAA’s management structure as of November 2007, with the shaded boxes showing offices with responsibilities for NextGen activities. We expressed concerns about this dual reporting status, suggesting that it might keep JPDO from interacting on an equal footing with ATO and the other partner federal agencies. We recognized that JPDO needed to counter the perception that it was a proxy for ATO and, as such, was not able to act as an “honest broker” between ATO and the partner federal agencies, but we also understood that JPDO must continue to work with ATO and its partner agencies in a partnership in which ATO is the lead implementer of NextGen. Therefore, we reported that it was important for JPDO to have some independence from ATO and pointed out that, to address this issue, the JPDO Director could report directly to the FAA Administrator. We observed that such a change could also lessen what some stakeholders perceived as unnecessary bureaucracy and red tape associated with decision making and other JPDO and NextGen processes. Since ATO was reorganized in May 2008, JPDO has been housed within the new NextGen and Operations Planning Office and reports through the Senior Vice President for NextGen and Operations Planning only to ATO’s COO. (See fig. 3.) Now that JPDO is no longer a separate, independent office within FAA and no longer reports directly to the FAA Administrator, its organizational position within FAA has declined. 
Nonetheless, we believe that it is too early to tell whether JPDO will be able to act as an “honest broker” between FAA and the other federal partner agencies. Currently, according to a senior JPDO official, JPDO’s partner agencies are cooperating with JPDO, indicating that the office is apparently maintaining its status as an honest broker. However, it is also too early to tell if ATO’s reorganization sufficiently addresses concerns that many industry stakeholders expressed about the adequacy of the previous organizational relationship between FAA and JPDO—when JPDO reported directly to both the COO and the Administrator—for the transition to NextGen. Proposed legislation reauthorizing FAA would address the earlier concern of stakeholders by designating the Director of JPDO as the Associate Administrator for the Next Generation Air Transportation System, appointed by and reporting directly to the Administrator. The proposed legislation would also address observations we have made about JPDO’s organizational placement within FAA. Finally, it is too early to tell if the reorganization of FAA’s management structure addresses concerns that stakeholders have expressed about the fragmentation of management responsibility for NextGen activities. Specifically, some industry stakeholders expressed frustration that a program as large and important as NextGen does not follow the industry practice of having one person authorized to make key decisions. They pointed out that although FAA’s COO is nominally in charge of FAA’s NextGen efforts, the COO must also manage the agency’s day-to-day air traffic operations and may therefore be unable to devote enough time and attention to managing NextGen. In addition, these stakeholders noted that many of NextGen’s capabilities span FAA operational units both within and outside ATO. 
The reorganization does not address concerns about this fragmentation, since other offices in ATO and FAA continue to have responsibility for parts of NextGen and the division of responsibility for NextGen efforts among them is not clear. A senior FAA official noted that ATO executives are knowledgeable and supportive of the reorganization, but that the agency could better communicate the changes to stakeholders outside of FAA. A focused outreach to industry stakeholders would help to get their buy-in and support of FAA’s efforts. To articulate a clear R&D program with defined and prioritized tasks, JPDO must continue to collaborate with its partner agencies—FAA, NASA, DOD, DHS, and Commerce—to identify and prioritize the R&D needed for NextGen. As it issues new versions of IWP, JPDO continues to update the R&D plans of the partner agencies. However, JPDO has not yet determined what NextGen R&D needs to be done first and at what cost to demonstrate and integrate NextGen technologies into the national airspace system. The next version of IWP, scheduled to be released later this month, is to identify the sequence of research activities that the partner agencies must complete before specific NextGen capabilities can be implemented. The plan should serve as a useful tool in prioritizing and tracking NextGen research. In addition, JPDO has worked with the Office of Management and Budget (OMB) to develop a process that allows OMB to identify NextGen-related research and acquisition projects across the partner agencies and consider NextGen as a unified, cross-agency program. Under this process, JPDO and its partner agencies can jointly present OMB with business cases for the partner agencies’ NextGen-related efforts, and these business cases can be used as inputs to funding decisions for NextGen research and acquisitions across the agencies. 
In addition, JPDO needs to continue to leverage the R&D programs of the partner agencies, which will define and conduct the research. For example, JPDO monitors NASA’s and FAA’s efforts to coordinate their research. NASA and FAA have developed a strategy to identify, conduct, and transfer to FAA the R&D needed for NextGen. The strategy establishes four “research transition teams” that align with JPDO’s planning framework and outlines how the two agencies will jointly develop research requirements—FAA will provide user requirements and NASA will conduct the research and provide an understanding of the engineering rationale for design decisions. In addition, the strategy calls for clearly defining metrics for evaluating the research. According to JPDO, as of August 2008, four teams had been established and had conducted initial meetings. JPDO has begun to move from proposing research to articulating a defined and prioritized R&D program. In addition, JPDO, FAA, and NASA have established mechanisms, such as research transition teams, to define and prioritize R&D. We believe, however, that it is still too early to assess the adequacy of these efforts. Version 1.0 of IWP, scheduled to be released later this month, will provide a baseline for measuring NextGen progress. Congress can use the information contained in the plan to help evaluate whether the actions needed to achieve NextGen are on schedule and whether the specific operational improvements and capabilities that will make up NextGen are being accomplished. Specifically, subsequent versions of the plan will allow the development of metrics to show progress, by agency, in (1) achieving key activities and deploying technology, (2) issuing policies and guidance, and (3) prioritizing resources. 
Furthermore, subsequent versions of IWP are expected to include cost information that decision makers can use to help understand the rationale for budget requests, monitor costs, and improve future cost estimates for acquisitions. This information will be helpful to decision makers when budget constraints do not allow all system acquisitions to be fully funded at planned and approved levels and they must decide which programs to fund and which to cut or delay according to their priorities. At this point, Mr. Chairman, I would like to briefly discuss two additional issues that present challenges to realizing the full potential of NextGen. The first, an infrastructure challenge, is to implement NextGen plans for a new configuration of ATC facilities and enhanced runway capacity. The second, a human capital challenge, is to ensure that FAA staff have the knowledge and skills needed to implement NextGen. To fully realize NextGen’s capabilities, a new configuration of ATC facilities and enhanced runway capacity will be required to go along with new technologies and procedures. According to a senior ATO official, the agency plans to report on the cost implications of reconfiguring its facilities in 2009. However, FAA has no comprehensive plan for reconfiguring its facilities. Until the cost analysis is completed and a plan for facilities reconfiguration has been developed, the configurations needed for NextGen cannot be implemented and potential savings that could help offset the cost of NextGen will not be realized. Some FAA officials have said that planned facility maintenance and construction based on the current ATC system are significant cost drivers that could, without reconfiguration, significantly increase the cost of NextGen. Additionally, some of the capacity and efficiency enhancements expected from the implementation of NextGen may be curtailed if the system’s infrastructure needs are not fully addressed. 
In the meantime, FAA faces an immediate task to maintain and repair existing facilities so that the current ATC system continues to operate safely and reliably. The agency is currently responsible for maintaining over 400 terminal facilities. While FAA has not assessed the physical condition of all of these facilities, the agency rated the average condition of 89 of them as “fair.” Based on its assessment of these 89 facilities, FAA estimated that a one-time cost to repair all 400 terminal facilities would range from $250 million to $350 million. Two FAA employee unions (NATCA and PASS) contend that many of the 400 facilities are deteriorating for lack of maintenance and that working conditions are unsafe because of leaking roofs, deteriorating walls and ceilings, and obsolete air-conditioning systems. According to FAA officials, while some of these facilities can accommodate NextGen’s new technologies and systems, many of them are not consistent with the configurations that will be needed under NextGen. Once FAA develops and implements a facility consolidation plan, the costs of facility repairs and maintenance may be reduced. In the meantime, FAA will have to manage its budgetary resources so that it can maintain legacy systems and legacy infrastructure while configuring the national airspace system to accommodate NextGen technologies and operations. The transformation to NextGen will also depend on the ability of airports to handle greater capacity. While NextGen technologies and procedures will enhance this ability, new or expanded runways will also likely be needed to handle the expected increases in traffic. FAA has developed a rolling 10-year plan for capacity improvements at the nation’s 35 busiest airports, and some airports are building new runways. However, even with these planned runway improvements, FAA analyses indicate that 14 more airports will still need additional capacity.
Moreover, without significant reductions in emissions and noise around some of the nation’s airports, efforts to expand their capacity could be stalled or the implementation of NextGen delayed. We believe that this is a significant issue that FAA and JPDO will have to address. To manage the implementation of NextGen, FAA will need staff with technical skills, such as systems engineering and contract management expertise. Because of the scope and complexity of the NextGen effort, the agency may not currently have the in-house expertise to manage the transition to NextGen without assistance. In November 2006, we recommended that FAA assess the technical and contract management skills FAA staff will need to define, implement, and integrate the numerous complex programs that will be involved in the transition to NextGen. In response to our recommendation, FAA contracted with the National Academy of Public Administration (NAPA) to determine the mix of skills and number of skilled persons, such as technical personnel and program managers, needed to implement NextGen and to compare those requirements with FAA’s current staff resources. NAPA expects to complete its assessment in September 2008. We believe this is a reasonable approach that should help FAA begin to address this issue, recognizing that once the right skills have been identified, it may take considerable time to select, hire, and integrate what FAA estimates could be 150 to 200 more staff. This situation could contribute to delaying the integration of new technologies and the transformation of the national airspace system. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Committee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
To prepare for forecasted air traffic growth, the Federal Aviation Administration (FAA), in partnership with other federal agencies and the aviation industry, is planning and implementing the Next Generation Air Transportation System (NextGen), a new, satellite-based air traffic management system that is expected to increase the safety and enhance the capacity of the air transport system. NextGen will replace the current radar-based air traffic control (ATC) system. Within FAA, the Air Traffic Organization (ATO) is responsible for implementing the transition to NextGen, and ATO's Joint Planning and Development Office (JPDO) is coordinating efforts to plan for this transition, including developing a 20-year research and development (R&D) agenda for NextGen. JPDO has drafted three basic planning documents for NextGen--a Concept of Operations, an Enterprise Architecture, and an Integrated Work Plan (IWP). This testimony responds to six questions about NextGen and JPDO raised by the House Committee on Science and Technology, and addresses two related challenges identified by GAO. The statement is based on recent related GAO reports and testimonies, including a report issued today that reflects GAO's analysis of interviews with 25 key NextGen stakeholders about progress and challenges involved in the transition to NextGen. Have the Views of Industry and Air Traffic Controllers Been Adequately Incorporated in NextGen Planning Documents? FAA and JPDO have established mechanisms for obtaining stakeholder views. However, given the large number of NextGen stakeholders and the evolution of opportunities for participation in NextGen, we believe that stakeholders will continue to differ on how adequately their views have been incorporated in NextGen planning. Is the Current Version of IWP Sufficiently Detailed for Effective Use in Overseeing and Managing NextGen? No. 
The current version lacks some needed information, but the next version, to be released this month, is to contain more detail, including schedule information, and is to be automated and searchable, making it more user friendly and useful for oversight. How Confident Should Congress Be that IWP Will Provide a Sufficient Basis for Achieving NextGen's Goals? The current plan does not provide a sufficient basis for Congress to be confident. The upcoming version will provide a firmer basis for confidence, but additional R&D issues that are not part of the plan will have to be addressed, including technology transfer issues. Can JPDO Continue to Be Viewed as an "Honest Broker" in Light of FAA's Recent Restructuring? The restructuring made JPDO a component of ATO rather than an independent office, but other federal agencies are reportedly still cooperating with JPDO, suggesting that they continue to view it as an honest broker. However, it is too early to tell if the restructuring addresses stakeholders' concerns about the fragmentation of management responsibility for NextGen activities. What Needs to Be Done to Move JPDO from Proposing R&D to Articulating a Clear R&D Program with Defined and Prioritized Tasks? The move is underway. JPDO needs to continue collaborating with its partner agencies to identify and prioritize R&D and leverage their R&D programs. It is too soon to assess the results of steps JPDO and the partner agencies have taken thus far. What Metrics Should Congress Use to Evaluate the Progress of NextGen? Schedule information in the upcoming version of IWP and cost information in the subsequent version will help provide Congress with metrics for evaluating NextGen's progress. Additional Infrastructure and Human Capital Challenges Identified by GAO. NextGen's implementation further depends on FAA's reconfiguring and maintaining its ATC facilities, expanding runways, and hiring staff with the engineering and contract management skills needed to provide oversight.
|
Children enter foster care when they have been removed from their parents or guardians for reasons such as abuse or neglect, and placed under the responsibility of a state child welfare agency. The agency generally places the child in the home of a relative, with unrelated foster parents, or in a group home or residential treatment center, depending on the child’s needs. Child welfare caseworkers at the agency are typically responsible for coordinating placement and needed support services for these children, including those for mental health. If a child is determined to be in need of mental health services, the caseworker is generally responsible for arranging such services to be provided by primary care physicians, child psychiatrists, or other mental health providers. State courts, typically juvenile or family courts, are also frequently involved in decisions regarding a child’s removal, placement, and services. Most children in foster care are eligible for Medicaid, and those enrolled may receive physical and mental health services through a variety of service delivery and provider payment systems, such as fee-for-service and managed care. In the traditional fee-for-service delivery system, the state Medicaid agency manages the program and reimburses physicians directly and on a retrospective basis for each health service delivered. Under a managed care model, states contract with one or more managed care organizations and prospectively pay the organizations a fixed monthly fee per patient to provide or arrange for defined health services, which may include mental health services and prescription medications. These organizations, in turn, pay physicians. States are primarily responsible for administering their child welfare and foster care programs, consistent with applicable federal laws and regulations, which include some requirements that relate to ensuring the well-being of children served by these programs. 
For example, title IV-E of the Social Security Act authorizes federal funding to states to help cover the costs of operating their foster care and certain other programs. In addition, title IV-B of the Social Security Act authorizes federal funds to support state child welfare programs and services. Both of these programs establish various requirements that participating states must comply with in order to receive the federal funding. The Fostering Connections to Success and Increasing Adoptions Act of 2008 amended title IV-B to add a requirement that states develop a plan for the ongoing oversight and coordination of health care services for children in foster care, including mental health and oversight of prescription medications. The Child and Family Services Improvement and Innovation Act amended this provision to require that these plans include protocols for the appropriate use and monitoring of psychotropic medications. HHS’s ACF is responsible for monitoring state implementation of title IV-E and IV-B programs. For example, ACF conducts reviews of state child welfare and foster care programs every 5 years to ensure conformity with requirements under these federal programs. ACF also monitors state compliance with title IV-B plan requirements, including the health care oversight and coordination plan, through its review of states’ five-year Child and Family Services Plans and Annual Progress and Services Reports. In addition, ACF’s mission is to promote the economic and social well-being of families, children, individuals, and communities through funding, guidance, training, and technical assistance. Under the Medicaid program, states are required to provide eligible children under age 21 with coverage for certain health services, which may include mental health services, through the Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit. 
Specifically, federal law requires coverage of periodic screening services, including a comprehensive health and developmental history of both physical and mental health development, a comprehensive physical exam, appropriate immunizations, laboratory tests, and health education. The EPSDT benefit also covers treatment services necessary to correct or ameliorate any identified physical or mental illnesses or conditions. HHS’s CMS oversees state Medicaid programs and provides federal matching funds for eligible services. On an annual basis, states are required to report to CMS information on their Drug Utilization Review programs, including prescribing patterns, cost savings generated by the programs, an assessment of the programs’ impact on quality of care, and program operations, including information on new innovative practices adopted by states. CMS includes these reports on its website. In addition, state mental health agencies are generally responsible for planning and operating state mental health systems, and play an important role in administering, funding, and providing treatments. These agencies may manage mental health-related federal grants and may work with other state agencies—such as state Medicaid agencies—to identify and treat mental health conditions. They may also contract directly with physicians to deliver treatments or may contract with county or city governments responsible for the delivery of treatments within their local areas. HHS’s SAMHSA engages in activities intended to help improve the behavioral health of children in foster care. Such efforts include grants that support the development of community-based services for children with mental health conditions and information sharing on psychotropic medication practices. Many stakeholders may be involved in ensuring appropriate mental health treatments for children in foster care (see fig. 1).
Psychotropic medications can have significant benefits for those with mental health conditions by affecting brain activity associated with mental processes and behavior. However, they can also have side effects ranging from mild to serious, depending on the class and type of medication used. According to the American Academy of Child & Adolescent Psychiatry (AACAP), medications for attention deficit hyperactivity disorder (ADHD), such as amphetamines (e.g., Adderall) and methylphenidate (e.g., Ritalin and Concerta), can reduce symptoms such as hyperactivity in children as well as improve their attention and increase their ability to get along with others. These medications have been widely tested in children and are generally considered safe; however, ADHD medications have also been associated with side effects such as sleeplessness, loss of appetite, tics, agitation, hallucinations, liver problems, and suicidal thoughts. In addition, antidepressants, such as fluoxetine (e.g., Prozac) and sertraline (e.g., Zoloft) can be used to treat conditions such as depression and anxiety. However, possible adverse side effects include agitation, sleeplessness or drowsiness, and suicidal thoughts. The use of antipsychotics—one class of psychotropic medication—has been of particular concern. Antipsychotic medications, such as aripiprazole (e.g., Abilify) and risperidone (e.g., Risperdal), were developed to treat conditions such as bipolar disorder or schizophrenia. However, possible adverse side effects can be serious, including increased levels of cholesterol, rapid weight gain, and the development of diabetes or irreversible movement disorders. Mental health researchers and others have stated the need for further research on the safety, effectiveness, and long-term effects of antipsychotics for children. 
Psychosocial services are mental health treatments that generally involve therapy sessions with a mental health professional that are designed to reduce patients’ emotional or behavioral symptoms. Such therapies may be used instead of, or in combination with, psychotropic medications to treat children with mental health conditions. Several large, federally funded studies have demonstrated that treatment with a combination of a psychosocial therapy and a psychotropic medication can be more effective than either treatment alone for certain conditions. Further, psychosocial services shown to be effective in treating mental health conditions may be referred to as evidence-based therapies. While there is no standard definition of what constitutes “evidence-based,” some federal agencies and provider organizations, such as SAMHSA and AACAP, evaluate and compile information on available therapies. In response to concerns about psychotropic prescribing practices for children, especially involving those in foster care, AACAP has developed multiple resources to promote the appropriate and safe use of these medications. For physicians, AACAP issued best practice guidelines in 2009 that establish key activities before and after prescribing psychotropic medications to children (see fig. 2). Building on these guidelines for physicians, AACAP developed best practice guidelines for states in 2012 and 2015, with support and partial funding from SAMHSA, that establish practices for overseeing the use of psychotropic medications for children in foster care as well as other children in state custody. 
An overarching principle outlined in AACAP guidelines is that the use of psychotropic medications for these children should be part of a holistic and collaborative mental health treatment approach that recognizes (1) the biological, psychological, and social factors that may impact a child; (2) trauma-informed care principles that acknowledge the prevalence and impact of trauma, and a commitment to minimize its effects and avoid additional traumatization; and (3) child-serving agencies as part of a system of care for the child, and services that should be youth-guided, home and community-based, integrated across systems, data-driven, and outcome-oriented, among other things. HHS issued an Information Memorandum to states in April 2012 to provide guidance to states on complying with the statutory requirement to develop protocols for the appropriate use and monitoring of psychotropic medications. The memorandum cited our previous work that raised concerns about states’ efforts to oversee the use of psychotropic medications among children in foster care. This memorandum identified policy statements and guidelines from AACAP, the American Academy of Pediatrics, and the state of Texas, among others, and discussed consistent elements among these sets of guidelines. These elements, summarized below, include the need for state policies to contain provisions for: screening, assessment, and treatment planning mechanisms to identify children’s mental health and trauma-treatment needs; informed and shared decision-making and methods for ongoing communication among the physician, child, family, and other key stakeholders; effective medication monitoring; availability of mental health expertise and consultation; and mechanisms for sharing up-to-date information and educational materials related to mental health and trauma-related interventions, including psychotropic medications.
In addition to issuing this memorandum to states, we reported in April 2014 that ACF had worked collaboratively with CMS and SAMHSA to provide technical assistance; facilitate information sharing; and emphasize the need for collaboration among state child welfare, Medicaid, and mental health officials in overseeing psychotropic medications from January 2012 through July 2013. Notably, these agencies cohosted a conference entitled “Because Minds Matter” in August 2012 that focused on collaborative medication monitoring as well as creating data systems to facilitate collaboration, among other things. According to ACF, CMS, and SAMHSA officials, the conference was an opportunity for states to talk and share practices, and representatives from 49 states attended. Officials we spoke with in the seven selected states told us they developed a variety of practices to better support appropriate mental health diagnoses and treatments for children in foster care in their states. These range from requiring initial mental health screenings to monitoring children after they are prescribed psychotropic medications. State officials in all seven of the selected states told us they require mental health screenings of children entering foster care, which is consistent with the guidelines on screenings identified in HHS guidance (see fig. 3). For example, Washington officials told us that staff in their child welfare screening program are expected to screen children entering foster care within 30 days using validated tools, such as a trauma-related screen for anxiety and post-traumatic stress disorder, which have been tested and found to draw consistent results for the same child across multiple screeners. In addition, the screeners ask children whether they are taking psychotropic medications or receiving health services, and how their symptoms are progressing.
Screeners then provide a report to caseworkers, who can follow up to ensure the child receives the appropriate referrals and services. Washington child welfare officials said they also use screening data to analyze how many children with mental health needs are receiving services. Arizona requires that children entering foster care receive a mental health screening within 72 hours to identify and provide services for any immediate mental health needs and reduce the child’s stress and anxiety. Screenings are also to include provision of mental health services to each child’s new caregiver. These services include guidance on how to respond to the child’s immediate needs as the child transitions to foster care, information on mental health symptoms to watch for and report, assistance in responding to such symptoms, and provision of a contact in the mental health system. The child’s caseworker is to be provided with findings and recommendations for needed mental health services. Illinois officials said their child welfare agency requires all children in foster care to receive a comprehensive health assessment from a licensed social worker and the child’s caseworker within 55 days of entering the foster care system. According to these officials, this assessment should include a discussion of mental health issues and can prompt a referral for a psychiatric evaluation for the child. State officials in the seven selected states said they have a variety of guidelines and restrictions to support appropriate mental health treatments for children in foster care (see fig. 4). State and county officials from some of the selected states described physicians’ lack of knowledge of child and adolescent mental health issues as a challenge, and officials in all seven states said they developed practices to promote effective treatment decisions. 
For example, all seven of the selected states developed guidance on the use of psychotropic medications, such as dosage limits for children of different ages and weights, or medication lists that identify medications considered psychotropic. All seven states also require or recommend restricting who can prescribe psychotropic medications, or require or recommend the physician consult a specialist in some cases. For example, in New Jersey, only a psychiatrist, pediatric neurologist, neurodevelopmental pediatrician, or an advanced practice nurse certified in psychiatry or mental health and collaborating with one of these specialists may prescribe psychotropic medications, except in cases of ADHD. In Maryland, if the prescribing physician is not a child psychiatrist, he or she must consult with or refer the child to a specialist before prescribing a psychotropic medication and within 60 to 90 days after making the initial prescription. In addition, Maryland officials said the state’s Medicaid agency contracts with mental health specialists at the University of Maryland to review all antipsychotic medication prescriptions for children in Medicaid. State officials in all seven of the selected states said they require or recommend that physicians obtain agreement—sometimes in writing—from an adult who has responsibility for the child in foster care (informed consent) and from the child (assent) on prescriptions for psychotropic medications (see fig. 5). These practices are among the consistent elements across guidelines identified by HHS on informed and shared decision-making. In Washington, the child welfare agency requires agreement from the parent if the child is under age 13, or from the child if he or she is age 13 or older. If the parent of a young child is unavailable, unable, or unwilling to consent, the child’s caseworker must obtain a court order approving the use of psychotropic medication.
In Maryland, caseworkers must collect a consent form signed by the parent or legal guardian as well as the child (if age 16 or older), when able. Similar to Washington, if the parent or guardian is unavailable or unwilling to provide consent, child welfare officials may obtain a court order in cases of medical necessity. State officials in all seven of the selected states said they have practices related to monitoring children: They track or recommend tracking of high-risk prescriptions, such as those involving antipsychotic medications or multiple medications taken at the same time, or they require or recommend periodic follow-up visits or reauthorization of certain prescriptions (see fig. 6). Monitoring medication use for each child in foster care is one of the consistent elements across guidelines identified in HHS guidance. In Illinois, child welfare officials said their agency conducts ongoing oversight of prescriptions by examining a list of children taking antipsychotics and children under age 6 referred for uncommon conditions, such as aggression or bipolar disorder; consulting with mental health specialists; and referring cases for intensive case management, as needed. California officials said county courts review psychotropic medications every 6 months, while in Maryland, officials said that, for children taking antipsychotic medications, they require physicians to monitor the child’s height, weight, tremors, liver functioning, and blood sugar and lipid levels to identify side effects. They must then submit results to mental health specialists for review. In addition, Maryland caseworkers are required to review positive and negative effects of medications at their monthly home visits. State officials in all seven of the selected states said they work to educate relevant stakeholders on mental health conditions and treatments.
Practices to educate stakeholders are among the consistent elements across guidelines identified by HHS on sharing information on mental health and trauma-related interventions with clinicians, child welfare staff, and consumers. In addition, officials in most of the selected states said their state works to increase access to mental health services for children in foster care (see fig. 7). State officials from five of the seven selected states said they provide relevant stakeholders with access to informational materials. For example, California maintains an online information bank of evidence-based treatments, and Ohio officials developed guides to help children and their families communicate with physicians and participate more actively in treatment decisions. States may offer other types of informational resources as well. For example, Washington child welfare officials said they staff mental health specialists to a telephone hotline to provide physicians who call with consultations on mental health diagnoses and treatments as well as information about local service providers. In addition, officials said the state’s child welfare agency and managed care provider offer in-person and online trainings on children’s mental health, psychotropic medications and other mental health treatments, and the child welfare system. State and county officials in four of the seven selected states and five of nine national organizations identified limited access to mental health services as a challenge. These officials described a variety of factors limiting access, including insufficient numbers of professionals specializing in related fields, low Medicaid reimbursement rates, underserved rural areas, and physicians’ limited knowledge of services available in their area. As officials from one state explained, patients need access to a wider variety of evidence-based treatments.
Other officials noted there are particular shortages among some specialties, such as child psychiatrists, or needed training in areas such as trauma-informed care. To increase children’s access to mental health services, state and county officials in five of the selected states said they provide remote consultation services. In addition, Ohio state officials said the state offers fellowships for medical students in needed mental health specialties as well as training curricula for students to provide mental health services as part of primary care. Officials in the seven selected states identified factors that helped them implement oversight practices for psychotropic medications, such as collaborating with other agencies, conducting outreach with relevant stakeholders, and gradually implementing new oversight practices. State officials in all seven of the selected states said strong collaboration among child welfare, Medicaid, or other partnering agencies was key to implementing these practices. Specifically, Washington officials said supporting children in foster care requires coordinated solutions across the agencies serving this population. In Ohio, officials said the directors of their child welfare, Medicaid, and mental health agencies have worked in the other agencies and as a result share resources and talent more easily and encourage open communication. In Washington, officials said strong collaboration allows their agencies to complement each other’s roles, develop more holistic practices, and implement oversight programs more effectively. Officials in Ohio and Washington attributed successful collaboration within their state to executive leadership support, the commitment and longevity of state agency leaders, and leaders’ and managers’ breadth of experience in multiple agencies and front-line roles. Washington officials emphasized the importance of developing integrated programs in order to institutionalize collaboration.
State and county officials in three of the seven selected states said conducting outreach helped them educate stakeholders on relevant issues and requirements and gain stakeholders’ buy-in. For example, Maryland officials said conducting extensive outreach to physicians on how to best implement new requirements for medication approval and monitoring helped ensure stakeholder adoption of the program and was essential to its success. These officials said they shared information with physicians about antipsychotic medications, monitoring side effects, and available psychosocial services. In response to physicians’ feedback, Maryland officials said they adapted the program to allow physicians to call in required information over the phone to avoid having to complete forms. Officials in two of the selected states said the gradual rollout of new practices enabled mid-course corrections or supported higher rates of adoption or compliance with the practice. For example, Ohio officials said they developed their medications oversight program in several stages. One step entailed a pilot program that flagged potentially inappropriate prescriptions and required physicians to consult with mental health specialists. Through this program, officials said they identified a lack of mental health knowledge and access to mental health specialists as two causes of inappropriate prescriptions. They said they redesigned their oversight practices to address these causes, tested the new practices, and are now implementing them statewide. Our analysis of available data from the seven selected states shows that four of these states—California, Illinois, New Jersey, and Washington—reduced the percentage of children in foster care on psychotropic medications from 2011 through 2015. Two other selected states—Arizona and Maryland—had steady rates of medication use. Ohio did not have data for this time period.
Because states use different methodologies to collect data, these data cannot be compared across states. While identifying all the factors that contribute to reduced medication use can be difficult, Washington child welfare and Medicaid officials said their second opinion program, which requires physicians to consult a child psychiatrist when prescribing certain medications, has likely prevented inappropriate prescribing. A 2009 study on this program also found that it helped reduce ADHD medications that were provided in high doses, in combinations, and for children under 6. Child welfare and Medicaid officials in some selected states and officials from most national professional and research organizations we interviewed said reducing medications may not be appropriate for every child. For example, child welfare and Medicaid officials in one selected state explained that without medication, a child in foster care may not be able to sit through a therapy session or perform in school. In addition, officials in all seven states said a child’s mental health can be affected by many factors, including psychotropic medications, psychosocial services, and other situational and environmental factors. Rather than focusing on reducing medications overall, officials in these states said their goal is to ensure the child receives appropriate treatment, which may involve efforts related to all of the factors mentioned above. Child welfare and Medicaid officials in all seven of the selected states told us they use a variety of measures to gauge the results of their efforts related to psychotropic medication use among children in foster care. These measures generally examine physician prescribing patterns, state oversight practices, and child placement, health, education, and juvenile justice outcomes. For additional information on measures collected by each selected state, see appendix II. Physician prescribing patterns. 
All seven selected states examine data to better understand certain prescribing patterns, such as the use of antipsychotics, the use of multiple psychotropic medications at the same time, and dosage levels for the medications prescribed. Child welfare and Medicaid officials in all of these states said they have particularly focused on antipsychotic medications, in part due to concerns about inappropriate use of these medications and their potential negative side effects for children. Officials in five states told us their state has experienced reductions in antipsychotic use among children in foster care in recent years. Researchers have also noted that concerns about the use of antipsychotic medications spurred state efforts to oversee and improve prescription behavior, and a June 2016 study examining Medicaid data in 20 states from 2005 through 2010 found that trends in the use of these medications are no longer increasing. However, the study noted that current prescribing patterns for antipsychotics at the “new normal” rates of use remain of great concern to many stakeholders. Child welfare and Medicaid officials in one state also underscored the significance of examining prescribing patterns after they observed spikes in the use of certain medications. These officials said that after their state started requiring second opinions for ADHD prescriptions, they saw increases in prescriptions of antipsychotics, and after requiring second opinions on antipsychotics, they saw increases in prescriptions for the use of multiple psychotropic medications at the same time. Officials interpreted these patterns as showing that some physicians choose to prescribe certain medications partly to avoid their state’s oversight practices. These officials expressed concern that new requirements may cause increased medication use in other areas, which would need to be monitored. State oversight practices. 
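Prescribing-pattern reviews like those described above are often automated as "red flag" screens over prescription claims data, flagging antipsychotic use in very young children, concurrent use of multiple antipsychotics, and doses above a maximum. The sketch below is purely illustrative; the field names, drug list, and thresholds are hypothetical and do not represent any state's actual rules.

```python
# Illustrative red-flag screen over prescription records, loosely modeled on
# the flag-and-review oversight practices described in the text. All field
# names, drug lists, and thresholds are hypothetical.
from collections import defaultdict

ANTIPSYCHOTICS = {"risperidone", "aripiprazole", "quetiapine"}  # example list only
MAX_DAILY_MG = {"risperidone": 6, "aripiprazole": 30, "quetiapine": 800}

def flag_prescriptions(records):
    """Return (child_id, reason) pairs needing clinical review.

    Each record: {"child_id", "age", "drug", "daily_mg"}.
    """
    flags = []
    antipsychotics_by_child = defaultdict(set)
    for r in records:
        drug = r["drug"].lower()
        if drug in ANTIPSYCHOTICS:
            antipsychotics_by_child[r["child_id"]].add(drug)
            if r["age"] < 6:
                flags.append((r["child_id"], "antipsychotic under age 6"))
        limit = MAX_DAILY_MG.get(drug)
        if limit is not None and r["daily_mg"] > limit:
            flags.append((r["child_id"], f"{drug} above {limit} mg/day"))
    # Concurrent use of multiple antipsychotics is itself a pattern of concern.
    for child, drugs in antipsychotics_by_child.items():
        if len(drugs) > 1:
            flags.append((child, "multiple concurrent antipsychotics"))
    return flags
```

In practice a flagged prescription would typically trigger a second-opinion consultation rather than an automatic denial, consistent with the review programs the officials described.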
While selected states may have similar reported practices for overseeing the use of psychotropic medications, they vary in whether they examine data to determine if their practices are followed. For example, while child welfare and Medicaid officials in all seven states told us their state requires some form of agreement or informed consent for a child's treatment plan, officials in three of these states told us they review reports on whether such an agreement was obtained. In addition, while officials in six states said their state requires or recommends a caseworker or physician conduct follow-up visits with a child on psychotropic medications, officials in one of these states told us they examine data on the required follow-up, specifically for ADHD medications. Officials in three of these states told us they examine data on whether a physician monitors the child's metabolic health, including height, weight, and lipid panels. These measures can be used to monitor whether a child is experiencing any adverse effects as a result of taking medications. Child welfare and Medicaid officials in most selected states said they have particularly focused on ensuring children in foster care receive psychosocial services to help address experiences with trauma. Officials in all seven states said their state has guidelines that require or recommend the use of psychosocial services prior to or concurrently with a psychotropic medication, and officials in all of these states told us they examine data on the number of children in foster care who received such services. Officials in four of these states told us their state has increased the use of psychosocial services. For example, a 2013 study on physicians' use of Washington's telephone line for mental health consultations between 2008 and 2011 found a 132 percent increase in outpatient mental health visits for children currently or previously in foster care after a consultation. 
However, as mentioned earlier, child welfare and Medicaid officials in most of the selected states—as well as multiple studies—have noted continuing challenges with access to psychosocial services. While we did not assess states’ implementation of specific practices to oversee the use of psychotropic medications, findings from a 2016 California State Auditor report highlight the importance of having measures in place to ensure that state practices are followed. Specifically, the report found multiple cases where children in foster care received prescriptions for psychotropic medications without court authorization or parental consent, which, according to the report, is a violation of state law. Child placement, health, education, and juvenile justice outcomes. Most of the seven selected states collect data to monitor outcomes for children on psychotropic medications. For example, child welfare and Medicaid officials in four selected states told us they monitor information on whether a child in foster care on psychotropic medications experiences a placement disruption (the child is moved from one placement to another). Multiple studies have shown that placement disruptions are associated with increased mental health needs and poor social-emotional outcomes, and that problem behavior can be an indicator of risk for future placement disruptions. Since children may be prescribed psychotropic medications to help treat problem behaviors, examining data on disruptions can help states better understand whether children with problem behaviors on medications are improving or are still having serious behavioral problems. Some states also examine health, education, and juvenile justice outcome measures for children in foster care on psychotropic medications. 
Child welfare and Medicaid officials in a few states said such measures can help them determine whether the care and services provided to children in foster care are helping these children lead healthy and productive lives. For example, Illinois child welfare officials told us they examine whether a child under the age of 6 on a psychotropic medication has symptoms of self-harm or is hospitalized. In addition, Maryland child welfare officials told us they examine data on school enrollment and academic performance for children in foster care on psychotropic medications, whereas Washington officials told us they examine data on whether children in foster care with mental health needs, including those on psychotropic medications, have any involvement in the juvenile justice system. Child welfare and Medicaid officials in most of the seven selected states discussed common challenges in their efforts to collect data needed to oversee the use of medications by children in foster care and to monitor outcomes for these children. In five of the seven states, officials discussed technical issues with obtaining reliable data. They said data on psychotropic medications and other mental health services for children in foster care can involve data systems from state child welfare, Medicaid, and mental health agencies. In addition, they said data needed to gauge whether a child’s life improves with treatment—such as the child’s living situation and their health, education, and juvenile justice outcomes—can involve many other data systems, including those from state education and juvenile justice agencies. Because some of these agencies may not collect information specifically on the foster care population, officials said gathering these data may require data matching across these systems. 
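At its simplest, the cross-system matching the officials described is a join on a shared client identifier: the child welfare roster defines the population, and records from another agency's system (here, Medicaid claims) are partitioned into those that belong to it. This is only a sketch under the assumption that the systems share an identifier; real matching efforts often need probabilistic matching on names and birth dates, and all field names here are hypothetical.

```python
# Illustrative cross-system match: restrict Medicaid pharmacy claims to the
# foster care population using a shared client identifier. Field names are
# hypothetical; real systems often lack a common ID and need fuzzier matching.
def match_foster_claims(foster_roster, medicaid_claims):
    """Split Medicaid claims into those for children on the foster care
    roster and those for everyone else."""
    foster_ids = {child["client_id"] for child in foster_roster}
    matched, unmatched = [], []
    for claim in medicaid_claims:
        (matched if claim["client_id"] in foster_ids else unmatched).append(claim)
    return matched, unmatched
```

The same pattern generalizes to education or juvenile justice data: each additional system adds another roster-to-records join, which is why officials described the work as time-consuming when identifiers differ across agencies.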
In some selected states, officials said this information may also involve county-level agencies that can vary in the types of data they collect as well as one or more third-party managed care organizations that report to state Medicaid agencies. These officials said matching such data can be difficult and time-consuming, and state child welfare and Medicaid officials in three selected states discussed limitations with data gathering due to resource and time constraints, given other competing priorities. State child welfare and Medicaid officials in five selected states also discussed privacy concerns related to data sharing. For example, officials in two of these states said state agencies are reluctant to share sensitive data on individuals due to their concerns about privacy protections under the Health Insurance Portability and Accountability Act of 1996 for health information and the Family Educational Rights and Privacy Act for education information. Similarly, officials in two other states expressed uncertainty over the types of data they were able to share under state and federal laws. Child welfare and Medicaid officials in three of the five counties where we conducted interviews discussed similar privacy concerns with data sharing among county-level agencies. Child welfare and Medicaid officials in two of the five states that expressed privacy concerns also discussed concerns about sharing data with managed care organizations. They said they were in the process of determining how to share information specifically on the foster care population with these organizations as well as what data to collect from them. State child welfare and Medicaid officials in some selected states that were able to share data said they overcame privacy concerns through negotiating written agreements and educating stakeholders about sharing data consistent with state and federal privacy requirements. 
For example, Maryland child welfare and Medicaid officials said their agencies each formed an agreement to share data with the same contractor, who matched data on children in foster care with Medicaid data on claims for psychotropic medications and mental health services, and reported the information to state agencies without providing personal data. In addition, Maryland child welfare officials said their agency entered into an agreement with their state education agency to share education information for children in foster care. This agreement granted certain officials access to personal information, and these officials assigned anonymous identifiers to each child to protect their privacy while facilitating data sharing. California child welfare officials told us they recently worked with their state education and juvenile justice agencies to issue a letter that summarizes existing state and federal laws that pertain to the sharing of information and records between local education agencies, county child welfare agencies, and caregivers for children in foster care. In addition, Washington child welfare officials discussed data sharing agreements among multiple state agencies that allowed the state to match and share client-level information across more than 30 state data systems. These officials attributed the success of their agreements to strong state leadership and the development of trust and buy-in among the stakeholders involved. Since our 2014 review of psychotropic medications for children in foster care, ACF, CMS, and SAMHSA have continued to provide support to states—generally in the form of funding and technical assistance—to assist with oversight of these medications. Specifically, these efforts aim to support states’ practices for prescribing medications, for diagnosis and treatment options, and for implementing measures to assess the quality of health care delivery to children in foster care. 
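One building block behind such agreements, in the spirit of the anonymous identifiers Maryland officials described, is keyed pseudonymization: agencies that hold a shared secret key can derive the same stable token for a child and link records without exchanging the underlying identifier. This is a generic sketch, not Maryland's actual mechanism; the key and identifier format are assumptions.

```python
# Illustrative keyed pseudonymization: an HMAC of the client ID yields a
# stable anonymous token. Parties holding the shared key derive matching
# tokens; the token alone does not reveal the underlying ID. The key and
# identifier format are hypothetical.
import hashlib
import hmac

def pseudonym(client_id: str, shared_key: bytes) -> str:
    """Derive a stable anonymous identifier from a client ID."""
    digest = hmac.new(shared_key, client_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability
```

Because the token is keyed rather than a plain hash, a party without the key cannot confirm a guessed identifier by recomputing it, which is one reason keyed constructions are preferred for this kind of data sharing.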
Through funding and information sharing, SAMHSA, CMS, and ACF help state agencies with practices related to prescribing medication and their oversight efforts. For example, SAMHSA partly funded AACAP’s development of voluntary recommendations for states on the use of psychotropic medications for children and adolescents. As discussed earlier, these recommendations emphasize that holistic mental health treatment can include medication, but that medication should be only one part of the overall plan. State officials in most of our selected states said they reviewed AACAP recommendations when developing their own guidelines and oversight practices for children in foster care. For example, child welfare officials in Illinois said they worked with a representative of their state AACAP branch, among others, to develop guidelines on prescribing medications, which, according to these officials, were included in their state law. In addition, Ohio’s medication management and oversight program included clinical resources and prescribing guidelines for physicians based on AACAP’s recommendations. SAMHSA also supports child welfare agency staff and mental health stakeholders seeking to ensure the appropriate use of psychotropic medications for children in foster care through a contract with the Technical Assistance Network at the University of Maryland’s School of Social Work. SAMHSA funds a medical director position at the network, and that director works with 55 child and adolescent psychiatrists in state and county governments to address issues regarding psychotropic medication, including strategies to help ensure children receive appropriate treatment. Specifically, the director created a community listserv of child and adolescent psychiatrists to disseminate best practices and developed webinars on medication oversight. 
According to SAMHSA officials, the medical director, with input from the listserv community, is developing guidance on how to take youth off medication, an issue that child welfare officials in one selected state said can be a challenge. As these officials explained, psychiatrists generally are reimbursed more for medication management than psychotherapy, which can create a disincentive for keeping children off medication. Through the network's Clinical Distance Learning Series, SAMHSA also developed webinars and issue briefs on the oversight of psychotropic medications for children on Medicaid, including one on developing performance measures. According to SAMHSA officials, the technical assistance network has also begun planning for a multi-year collaborative for residential treatment centers that have an interest in addressing the use of antipsychotic medications among the youth they serve. As they explained, the goal of this group is to increase best practices related to the use of antipsychotic medications for youth in residential care and reduce outlier practices, such as prescribing children too many medications or at dosages exceeding maximum levels based on labels approved by the Food and Drug Administration. Through funding, SAMHSA has continued to support a multi-year virtual learning community in which community participants receive technical assistance, including monthly e-newsletters, webinar invitations, and access to tools and resources. SAMHSA-funded webinars have addressed issues such as cross-system data sharing, education and engagement of key stakeholders, phone psychiatric consultation models, and red flag and response systems for medication oversight. At its 2015 Quality Conference, CMS hosted a Medicaid Track that included a session for Medicaid and health care professionals on physician prescribing patterns considered high risk because of potential adverse side effects, including the use of antipsychotic medications. 
The National Committee for Quality Assurance, a nationally recognized quality improvement entity, presented the results of its efforts to develop measures to oversee medication use at the conference. According to CMS officials, this session prompted formation of a group for interested state Medicaid agencies and their partners on the use of antipsychotic medications for children. Eight states are participating in the group as of October 2016. These states are working on various projects to improve appropriate medication use and monitoring for possible side effects. In addition, in February 2016, CMS collaborated with the National Association of Medicaid Directors and the American Drug Utilization Review Society to host a national call in which 12 Medicaid drug utilization review program directors shared their strategic efforts on child antipsychotic monitoring programs with all other states and the District of Columbia. ACF developed two guides related to psychotropic medication use for children in foster care. The first guide provides tools to help children ask questions about medications as they meet with physicians. The second publication is a companion guide for child welfare staff and foster parents on mental health issues, the impact of trauma, and psychotropic medications. According to ACF officials, both guides have been distributed nationally and are posted on ACF's Child Welfare Information Gateway website. HHS also helps states address mental health screenings for children—which affect their diagnosis and treatment options—and increase awareness of and access to trauma-informed care services among the child welfare workforce. Misdiagnosis and inappropriate medication use: ACF and the Centers for Disease Control and Prevention partnered to study the relationship between misdiagnosis and inappropriate medication use. 
Their study was spurred by prior research that examined a large sample of children in a child welfare population who underwent a comprehensive diagnostic evaluation. The prior study found that over 85 percent of children diagnosed with fetal alcohol spectrum disorder had never been previously diagnosed or had been misdiagnosed. For these children, the most common mental health diagnosis prior to the comprehensive evaluation was ADHD—a diagnosis that often leads to psychotropic medication prescriptions. ACF and the Centers for Disease Control and Prevention are currently completing a pilot study at a local child welfare agency to understand why a diagnosis of fetal alcohol spectrum disorder might be missed and what information agencies and families need in order to maximize outcomes for children and families. According to ACF officials, they also plan to gather information from other sites to understand this issue at a national level, and to develop training materials for caseworkers and caregivers on a child's likely prenatal exposure to alcohol as well as the types of information that caseworkers and caregivers need to help these children. Trauma-informed care and evidence-based practices: ACF provides a variety of trauma-related grants, including grants focused on screening, assessment, treatment, and bridging the gaps between child welfare and mental health. Through support of another ACF grant, the National Center for Evidence Based Practice in Child Welfare provides training and capacity building for child welfare and mental health staff on trauma-focused therapy and on improving access to mental health services. In Washington, the state used ACF's informational briefs on trauma for its own training and ACF grant funds to develop a handbook on trauma for foster care families. SAMHSA also provided competitive grants to a cohort of states to help increase quality of care and access to trauma-related services. 
In 2014, SAMHSA initiated an online and television campaign to inform the public about efforts to treat child trauma and resources available through the National Child Traumatic Stress Initiative. SAMHSA has also formed a relatively new partnership with the National Center for Trauma-informed Care to develop coordinated networks focused on treatments shown to be effective in treating mental health conditions. In addition, SAMHSA has provided funding for webinars and a national technical assistance program that supports research and training centers focused on trauma. CMS added a measure on the use of multiple antipsychotic medications to the 2016 Core Set of Children’s Health Care Quality Measures for Medicaid and the State Children’s Health Insurance Program. The core set is a voluntary set of measures that states may use to monitor and improve the quality of health care delivery to children covered under Medicaid, including those in foster care. In addition to the measure added by CMS, other measures related to children’s use of antipsychotic medications include behavioral or mental health counseling services and metabolic monitoring. Through its group on antipsychotic medication use in children, CMS facilitates information sharing and provides technical assistance to help eight state Medicaid agencies improve their evaluations of state programs through the voluntary use of these measures. This will be the first year that states may voluntarily report on the measure for the use of multiple antipsychotic medications at the same time among children and adolescents. According to CMS officials, they provided a technical assistance webinar in August 2016 to help states determine how to measure this information. CMS officials said that a key CMS goal is to encourage and support national reporting by state Medicaid agencies on a uniform set of measures to facilitate assessment of quality of care. 
Officials from most of the seven selected states we reviewed said they use or plan to use some or all of these measures. For example, in February 2017 California’s Medicaid agency will report to CMS measures on ADHD medication use and the use of multiple antipsychotic medications at the same time, among other things. The measures will include data on all children covered under their fee-for-service, managed care, and specialty mental health programs. In Ohio, state officials said they will build into managed care contracts two Healthcare Effectiveness Data and Information Set (HEDIS) measures as an oversight mechanism: the use of multiple antipsychotics at the same time in children and the first line psychosocial care for children and adolescents on antipsychotics. Although HHS has a variety of efforts to assist states in overseeing psychotropic medication use among children, since 2014 the agency has not convened meetings with all the relevant stakeholder groups needed to share information and work together on these issues. Under title IV-B of the Social Security Act, states are required to develop their plans for oversight and coordination of health care services for children in foster care in collaboration with the state Medicaid agency, and in consultation with pediatricians, experts in health care, and experts in and recipients of child welfare services. HHS’s guidance on implementing this provision and overseeing psychotropic medication use notes that state oversight should include coordination among and mechanisms to actively engage with child welfare, Medicaid, and mental health stakeholders to improve outcomes for this population. This guidance also discussed HHS’s goal of facilitating cross-system collaborations for the purposes of promoting improved behavioral health diagnosis, treatment, service delivery, and service tracking for children in foster care, which includes actions to increase oversight and monitoring of psychotropic medications. 
However, HHS’s assistance to states around collaboration has generally focused on a limited number of states or certain stakeholder groups. For example, SAMHSA hosted a day-long technical assistance meeting in 2015 for states with the capacity and commitment to implement improved cross- agency oversight of medication use, however, the meeting was limited to five states. In addition, at its August 2016 National Conference on Child Abuse and Neglect, HHS held a breakout session which focused on a range of issues, including evaluation and oversight of medication, and how effective psychotropic medication oversight systems work in concert with efforts to ensure access to effective psychosocial services. While the conference included this session, ACF officials said it was limited in scope compared to previous events it hosted on this issue. ACF also finalized regulations in June 2016 that established requirements for a new, optional comprehensive child welfare information system that states can use to maintain their child welfare data. If a state chooses to develop one, the new information system is required to support data exchanges with specified other systems, including Medicaid, court, and education systems, among other requirements. In the final rule, ACF stated that the new information system will provide child welfare agencies with the tools and flexibility to rapidly share data among multiple programs, including Medicaid and mental health. As discussed earlier, officials from child welfare and Medicaid agencies in most of the selected states spoke of challenges related to concerns about state and federal laws protecting individuals’ privacy. In 2013, we reported that state and local human services agencies, among others surveyed, identified challenges related to the interpretation of federal privacy requirements as they balance the need to protect clients’ personal information while increasing the use of data sharing. 
These challenges included confusion or misperceptions about what agencies are allowed to share as well as a tendency to be risk averse and overly cautious in their interpretation of federal privacy requirements. ACF officials told us some states interpret federal confidentiality laws as creating barriers to data sharing when such barriers often do not exist at the federal level. They added that they have used their confidentiality toolkit to debunk myths and concerns about data sharing, though child welfare officials we interviewed in a few of the selected states said they were not aware of a federal toolkit on data sharing. State officials in all seven selected states spoke of the importance of collaboration, and some said successful cross-agency collaboration has helped them oversee the use of psychotropic medications more effectively. Further, officials in most of these states said they benefitted from HHS's national convening in August 2012 of state directors of child welfare, Medicaid, and mental health agencies to address the use of psychotropic medications for children in foster care and their mental health needs. The meeting ("Because Minds Matter"), hosted by ACF, CMS, and SAMHSA, provided an opportunity for state leaders to enhance their collaboration on the appropriate use of psychotropic medications. Officials we spoke with said it helped them develop prescribing guidelines and expand reporting on psychotropic medications. According to officials in one selected state, their child welfare agency worked with its mental health agency partner at the HHS meeting to develop its informed consent process. Likewise, the meeting was the impetus for another selected state's child welfare and Medicaid agency partnership and its quality improvement project, according to state officials. 
This project involved engaging with multiple stakeholders throughout the state, forming work groups to study psychotropic medication use, and developing training materials and guidance on the proper use of these medications. In a third selected state, three lead agency directors and medical directors formed a team to provide clinical oversight of their foster care population. In addition, officials in one selected state said their participation in the meeting, and in other collaborative efforts, helped them learn about the work of other states in effective monitoring of mental health care for children in foster care, including an improved ability to monitor medication use. State officials in three of the seven selected states said more federal government leadership could help them work through ongoing challenges, including (1) obtaining best practices in medication use concurrent with other treatments; (2) overcoming silos across the child welfare, Medicaid, and mental health systems serving the foster care population; and (3) enhancing access to child and adolescent psychiatric resources. In addition, officials in selected states transitioning their foster care populations into managed care expressed concern about the transition and the need to manage it to ensure optimal care coordination. Concerns they identified include (1) ensuring state agencies share data with managed care providers to facilitate continuity of care, (2) bringing all the necessary stakeholders on board during the transition to ensure a common understanding of concepts and roles, and (3) ensuring managed care plans have the tools needed to accommodate non-traditionally served populations that have high medical needs. State child welfare, Medicaid, and mental health officials in three selected states said having other events similar to the one hosted by HHS in 2012 could provide further support in addressing these challenges. 
While ACF, CMS, and SAMHSA have held various events with limited numbers of states or certain stakeholder groups, as mentioned above, they have not convened a 50-state meeting that includes child welfare, Medicaid, and mental health stakeholders since 2012 to continue discussions about how best to oversee psychotropic medications. ACF officials said they have no plans to hold a national convening of state agency stakeholders. They explained that, in response to recent mandates on efficient government, they have moved toward hosting more virtual meetings. While virtual meetings can be a useful and cost-effective tool in facilitating collaboration and information sharing, none of HHS’s virtual meetings on psychotropic medications have included most states and stakeholders across multiple services. HHS noted in its 2012 guidance to states on oversight of psychotropic medications that children in foster care are typically involved in multiple service delivery systems, and a coordinated, multi-system approach is necessary to meaningfully improve outcomes for this population. Additional efforts from HHS to include relevant stakeholders in collaborations to address continuing challenges can better position states in their work to improve practices to oversee medication use and effectively ensure appropriate treatments for the foster care population. Though the benefits of using psychotropic medications have been documented, the health risks or side effects associated with certain prescribing patterns—such as the use of multiple psychotropic medications at the same time and the use of antipsychotics—make it important to ensure that a given treatment is appropriate for addressing a child’s condition. State agencies in our seven selected states have taken steps to curb inappropriate prescriptions of psychotropic medications among children in foster care, often by collaborating with each other and with other stakeholders involved in the child’s care. 
Officials in these states credited HHS with helping them jumpstart or further their efforts by fostering collaboration and providing forums to share information at HHS’s 2012 conference. While the states included in our review have made efforts to improve medication oversight, selected state officials and their partners discussed a need for continued collaboration and information sharing to help effectively implement oversight practices, improve access to mental health services, share data, and monitor outcomes. Information on oversight practices can be especially important for states that may not be as far along in their efforts to oversee medication use as those selected for this review, or for those experiencing a period of change as they transition their foster care populations into managed care. While HHS has made efforts to help support states in their oversight activities, additional support from HHS to convene state child welfare and Medicaid agencies and other stakeholders could create opportunities for state agencies to learn from one another’s experience, collaboratively develop solutions to mitigate common challenges, strengthen oversight practices for psychotropic medications, and more effectively ensure appropriate treatments for children in foster care. To help states effectively address ongoing challenges related to ensuring the appropriate use of psychotropic medications for children in foster care, the Secretary of HHS should consider cost-effective ways to convene state child welfare, Medicaid, and other stakeholders to promote collaboration and information sharing within and across states on psychotropic medication oversight. We provided a draft of this report to the Secretary of HHS for review and comment. HHS agreed with our recommendation and provided some examples of a virtual convening of select groups of professionals and agencies it employed to facilitate information sharing and collaboration around different issues. 
We believe that convening child welfare, Medicaid, and mental health stakeholders across all 50 states, in virtual or other settings, is an important step towards helping these stakeholders ensure the appropriate use of psychotropic medications for children in foster care. HHS also provided additional information on its efforts to date to help states address medical and mental health care for children in foster care. Finally, HHS provided technical comments, which we incorporated as appropriate. A letter conveying HHS's formal comments is reproduced in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS and interested congressional committees. The report will also be available at no charge on the GAO website at www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This appendix discusses in detail our methodology for addressing three research questions: (1) how child welfare and Medicaid agencies in selected states work to ensure the appropriate use of psychotropic medications for children in foster care; (2) what is known about the results of their efforts; and (3) the extent to which the Department of Health and Human Services (HHS) helps states support the appropriate use of psychotropic medications for children in foster care. To address these questions, we reviewed relevant federal laws, regulations, and guidance. 
We interviewed HHS officials, and state officials in seven selected states and county officials in two of those states as well as officials in nine national professional and research organizations. We reviewed national, state, and county guidance and other documents identified by our interview subjects, and analyzed available data from selected states on medication use in foster care over a 5-year period. To address all objectives, we conducted in-person and telephone interviews with officials in seven selected states, including officials from child welfare and Medicaid agencies and other partners, and with nine national child welfare, Medicaid, and mental health professional and research organizations. The states we selected were Arizona, California, Illinois, Maryland, New Jersey, Ohio, and Washington. Our selection criteria included: (1) a high percentage of children in foster care and congregate care in the state when compared nationwide in fiscal year 2014; (2) variation in the type of Medicaid delivery system covering psychotropic medications and other mental health services for children in foster care (i.e., fee-for-service versus single or multiple managed care organizations) and in the type of child welfare system (i.e., state- versus county-administered); (3) recommendations from national organizations we interviewed for states that have or are in the process of implementing practices to oversee and monitor psychotropic medications; and (4) diversity in geographic location. In two states with county-administered child welfare systems, California and Ohio, we selected five counties and conducted interviews with officials from the respective county-level child welfare and Medicaid agencies, as appropriate. These counties were selected based on factors similar to those mentioned above as well as variation in population density (i.e., rural versus urban). Our findings cannot be generalized to states or counties outside our selection sample. 
In the report we use qualifiers, such as “a few,” “some,” and “most” to quantify responses from officials across our interviews with state and county child welfare and Medicaid agencies and their partners, such as universities and state or county mental health agencies. We reported the total number of the seven selected states in which at least one official or partner gave the reported response. These qualifiers are defined as follows:

“All” states represents seven
“Most” states represents five to six
“Some” states represents three to four
“A few” states represents two

We interviewed representatives from nine national professional and research organizations selected to represent a variety of views on child welfare, Medicaid, and mental health-related policy and research. These organizations were: American Academy of Child & Adolescent Psychiatry, American Academy of Pediatrics, Center for Health Care Strategies, Child Welfare League of America, Medicaid and CHIP Payment and Access Commission, National Association of Medicaid Directors, National Association of Public Child Welfare Administrators, National Association of State Mental Health Program Directors, and National Alliance on Mental Illness. For all our interviews, we used a semi-structured interview protocol that included open-ended questions about state child welfare and Medicaid delivery systems; state and county oversight practices, including related challenges and measurement of outcomes; and federal efforts to assist states. Information was volunteered by officials in each interview in response to these open-ended questions. Thus, the counts of organizations citing such responses vary. We reviewed relevant documents to corroborate information obtained in our interviews, when possible. 
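The qualifier convention above is a simple mapping from counts of the seven selected states to descriptive terms. As a purely illustrative sketch (the function name and code below are ours, not part of GAO's methodology), it could be expressed as:

```python
def qualifier(count, total=7):
    """Map a count of selected states to the report's qualifier terms.

    Illustrative only: this function is hypothetical and simply encodes
    the convention defined in the text for the seven selected states.
    """
    if count == total:
        return "all"        # all seven selected states
    if 5 <= count <= 6:
        return "most"
    if 3 <= count <= 4:
        return "some"
    if count == 2:
        return "a few"
    # Counts of 0 or 1 fall outside the defined qualifiers; how they are
    # reported is our assumption (here, returned as a plain number).
    return str(count)
```

For example, under this convention a response given in five of the seven selected states would be reported as coming from "most" states.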
To examine how state child welfare and Medicaid agencies work to ensure the appropriate use of psychotropic medications, we also reviewed guidance and other documents identified by officials from selected states and counties. While we identified selected states’ oversight and monitoring practices related to psychotropic medications based on interviews and these document reviews, we did not assess the effectiveness of states’ implementation of these practices, nor did we evaluate their compliance with state or federal requirements or whether there are controls in place to help ensure that required practices are followed. In addition, while we focused our review on children in foster care, state oversight practices may also pertain to other children on Medicaid. We also reviewed guidance on oversight and monitoring of psychotropic medications for children issued by national health care professional organizations. In addition, we reached out to selected states’ audit agencies and HHS’s Office of Inspector General to identify past, ongoing, or planned work in this area. We also conducted a review of selected literature, including reports from academic, professional, and governmental organizations, related to the use of psychotropic medications and published since GAO’s report in April 2014. To examine the results of state efforts to ensure the appropriate use of psychotropic medications, we gathered and analyzed available data from selected states on the use of these medications among children in foster care from 2011 through 2015. We selected this range in order to gather data on 5-year trends that included the most recent data available. To examine the reliability of these data, we interviewed and sent a questionnaire to relevant state child welfare and Medicaid officials and examined the data received to identify any obvious outliers. 
We determined that these data were sufficiently reliable for the purposes of describing trends in the percentage of children in foster care on psychotropic medications for each of the selected states. However, because these states use different methodologies to collect data (e.g., states collected data for different time periods and ages of children in foster care), the data are not comparable among them. In addition, the results of our analyses are not generalizable nationwide. Information collected from selected states, such as on their oversight practices and measures collected to examine the results of their efforts, was provided to officials in each selected state for their review and verification. To examine HHS’s actions to support state efforts related to psychotropic medications, we interviewed officials from the Administration for Children and Families, Centers for Medicare & Medicaid Services, and Substance Abuse and Mental Health Services Administration, and reviewed relevant documents. In addition, we reviewed guidance on oversight and monitoring of psychotropic medications for children issued by HHS and used it as criteria for our recommendation. In addition to the contact named above, Elizabeth Morrison (Assistant Director), Claudine Pauselli (Analyst-in-Charge), Linda Collins, and Nhi Nguyen made key contributions to this report. Also contributing to this report were Seto Bagdoyan, James Bennett, David Chrisinger, Sarah Cornetto, Celina Davidson, Sara Edmondson, Sandra George, Katherine Iritani, Angie Jacobs, Kirsten Lauber, Hannah Locke, Sheila McCoy, Jonathan McMurray, and Jennifer Whitworth. Foster Children: HHS Could Provide Additional Guidance to States Regarding Psychotropic Medications. GAO-14-651T. Washington, D.C.: May 29, 2014. Foster Children: Additional Federal Guidance Could Help States Better Plan for Oversight of Psychotropic Medications Administered by Managed-Care Organizations. GAO-14-362. Washington, D.C.: April 28, 2014. 
Children’s Mental Health: Concerns Remain about Appropriate Services for Children in Medicaid and Foster Care. GAO-13-15. Washington, D.C.: December 10, 2012. Foster Children: HHS Guidance Could Help States Improve Oversight of Psychotropic Prescriptions. GAO-12-201. Washington, D.C.: December 14, 2011. Foster Children: HHS Guidance Could Help States Improve Oversight of Psychotropic Prescriptions. GAO-12-270T. Washington, D.C.: December 1, 2011.
GAO previously reported that children in foster care in five selected states were prescribed psychotropic medications at higher rates than other children on Medicaid. GAO also reported that some prescriptions were not supported by research and could pose health risks. GAO was asked to study efforts to oversee psychotropic medications for children in foster care since GAO last reported on the issue in 2014. GAO examined (1) how child welfare and Medicaid agencies in selected states ensure the appropriate use of psychotropic medications for children in foster care, (2) what is known about the results of their efforts, and (3) the extent to which HHS helps states support appropriate medication use. GAO reviewed relevant federal laws, regulations, and guidance; visited a nongeneralizable group of seven states and five counties in two of those states, selected by foster care population and diversity of location; analyzed selected states' data on medication use in foster care populations; and interviewed officials from federal, state, and county child welfare, Medicaid, and other agencies, as well as officials from nine relevant national organizations selected to represent a variety of views. State child welfare and Medicaid officials in seven selected states reported a variety of practices to support the appropriate use of psychotropic medications, which affect mood, thought, or behavior, for children in foster care. Practices include screening for mental health conditions, developing prescription guidelines, and monitoring a child's health while on medication. Additional state efforts aim to increase mental health knowledge among stakeholders and improve access to mental health services. However, officials in four selected states and from five national mental health organizations said limited access to mental health services was a challenge. 
Five of the selected states have begun offering remote consultation services that connect patients with mental health specialists. State officials said strong interagency collaboration and outreach to stakeholders helped them implement practices more effectively. While some selected states have reduced medication use among these children, states focused on other measures to gauge the results of their efforts. Four of the seven selected states reduced medication use from 2011 through 2015, two states had steady rates, and the remaining state did not have data during this time period. These data, however, cannot be compared across states because states use different methodologies to collect data. Officials in three selected states said reducing medication use may not be appropriate for every child, and officials in all seven states said they focus instead on measures such as tracking the use of medications that can have negative side effects and the use of psychosocial services (e.g., therapy) for children in foster care. Officials in most selected states discussed limitations with gathering data needed to oversee medication use, such as disparate data systems, resource constraints, and privacy concerns related to data sharing among state child welfare and Medicaid agencies and with managed care organizations. Officials in some states that shared data said they overcame privacy concerns through written interagency agreements and educating stakeholders. The Department of Health and Human Services (HHS) has taken steps to help state child welfare and Medicaid agencies support the appropriate use of psychotropic medications and identify mental health needs and treatments for children in foster care. HHS has focused its efforts on practices for prescribing, screening and diagnosis, and access to trauma-related services. HHS is also working with states to implement voluntary measures to track medication use, other mental health treatments, and a child's overall health. 
In 2012, HHS hosted a meeting for state leaders to help them establish effective medication oversight practices. Despite the positive outcomes resulting from this meeting, and HHS guidance that says an agency goal is to facilitate cross-system collaborations, such as in the oversight of psychotropic medications, it has not convened meetings with all stakeholders together since 2012. Though HHS has conducted webinars, created learning communities, and convened smaller meetings, HHS officials said it has no plans to convene all stakeholders as it did in 2012 due to resource constraints. Officials in three selected states said more federal support to bring together state stakeholders could help address ongoing issues, such as privacy concerns around data sharing. GAO recommends that HHS consider cost-effective ways to convene state child welfare, Medicaid, and other stakeholders to promote collaboration and information sharing on psychotropic medication oversight. HHS agreed with GAO's recommendation and provided technical comments.
ATSA established TSA and charged it with responsibility for securing all modes of transportation, including civil aviation. Prior to ATSA and the establishment of TSA, passenger and baggage screening had generally been performed by private screening companies under contract to airlines and in accordance with FAA regulations. In accordance with ATSA, TSA currently employs personnel who screen passengers at the vast majority of TSA-regulated (also referred to as commercial) airports nationwide. On November 19, 2002, pursuant to ATSA, TSA began a 2-year pilot program at 5 airports using private screening companies to screen passengers and checked baggage. In 2004, at the completion of the pilot program, and in accordance with ATSA, TSA established a permanent program known as the Screening Partnership Program whereby any airport authority, whether involved in the pilot or not, could request a transition from federal screeners to private, contracted screeners. Each of the 5 pilot airports applied and was approved to continue as part of the SPP, and since its establishment, 20 additional airport applications have been accepted by the SPP. Once an airport is approved for SPP participation and a private screening contractor has been selected, the contract screening workforce assumes responsibility for screening passengers and their property and must adhere to the same security regulations, standard operating procedures, and other TSA security requirements followed by federal screeners at commercial airports. TSA’s SPP PMO, located within TSA’s Office of Security Operations (OSO), coordinates with local TSA officials to support an airport’s transition from federal to private screening operations and supports the day-to-day management of the SPP. 
The PMO facilitates the SPP application process by reviewing SPP applications, organizing SPP application review meetings with other relevant TSA offices, and preparing and routing relevant application documentation to these offices and the TSA Administrator. Along with the TSA Office of Acquisition, the office plays a significant role in contract oversight and administration and actively participates in contract source selection processes. TSA's FSDs provide day-to-day operational direction for security operations at the airports within their jurisdiction, including those participating in the SPP. However, FSD management responsibilities differ at airports using federal versus private screeners. For example, at airports with a federal workforce, the FSD directly supervises and controls the screening workforce. However, at SPP airports, the FSD has responsibility for overall security but does not have direct control over workforce management; rather, the SPP contractor is contractually obligated to effectively and efficiently manage its screening workforce. The SPP contractor's responsibilities include recruiting, assessing, and training screening personnel to provide security screening functions in accordance with TSA regulations, policies, and procedures. SPP contractors are also expected to take operational direction from TSA, through the FSDs, to help ensure they meet the terms and conditions of the contract. In addition, SPP contractors are rewarded for identifying and proposing ideas that TSA accepts for possible innovations in recruiting, training, and security procedures, such as the practice of conducting pre-hire orientations to inform prospective screener candidates of the position requirements, which is 1 of over 200 ideas submitted to TSA by SPP contractors to date. 
In March 2012, TSA revised the SPP application to reflect requirements of the FAA Modernization Act enacted in February 2012. Among other provisions, the act provides that:

Not later than 120 days after the date of receipt of an SPP application submitted by an airport operator, the TSA Administrator must approve or deny the application.

The TSA Administrator shall approve an application if approval would not (1) compromise security, (2) detrimentally affect the cost-efficiency of the screening of passengers or property at the airport, or (3) detrimentally affect the effectiveness of the screening of passengers or property at the airport.

The airport operator shall include as part of its application submission a recommendation as to which private screening company would best serve the security screening and passenger needs of the airport.

Within 60 days of a denial, TSA must provide the airport operator, as well as the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Homeland Security of the U.S. House of Representatives, a written report that sets forth the findings that served as the basis of the denial, the results of any cost or security analysis conducted in considering the application, and recommendations on how the airport operator can address the reasons for denial.

All commercial airports are eligible to apply to the SPP. To apply, an airport operator must complete the SPP application and submit it to the SPP PMO, as well as to the airport FSD, by mail, fax, or e-mail. As required by the FAA Modernization Act, not later than 120 days after the application is received by TSA, the Administrator must make a final decision on the application. Figure 1 illustrates the SPP application process. Although TSA provides all airports with the opportunity to apply for participation in the SPP, authority to approve or deny the application resides in the discretion of the TSA Administrator. 
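The act's two statutory clocks described above (120 days for a decision on an application, and 60 days after a denial for the written report) can be illustrated with a short date calculation. This is an explanatory sketch only, assuming calendar days; the function name is ours, and this is not a description of TSA's actual case-tracking tools.

```python
from datetime import date, timedelta

def spp_deadlines(received, denied_on=None):
    """Compute the FAA Modernization Act deadlines described above.

    Assumes calendar days (a simplifying assumption on our part).
    received: date the SPP application was received by TSA.
    denied_on: date of denial, if the application was denied.
    """
    deadlines = {"decision_due": received + timedelta(days=120)}
    if denied_on is not None:
        # Written report on the basis for the denial is due within 60 days.
        deadlines["denial_report_due"] = denied_on + timedelta(days=60)
    return deadlines

# An application received March 1, 2012 would require a decision
# no later than June 29, 2012 (120 calendar days later).
print(spp_deadlines(date(2012, 3, 1)))
```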
According to TSA officials, in addition to the cost-efficiency and effectiveness considerations mandated by the FAA Modernization Act, there are many other factors that are weighed in considering an airport's application for SPP participation. For example, the potential impact on the workload of the Office of Information Technology and the potential impact of any upcoming projects at the airport are considered. SPP PMO officials said that by considering all relevant factors, they do not expect to identify a specific piece of information that would definitively deny an application's approval based on the standards in the FAA Modernization Act. However, in doing so, they hope to ensure that the Administrator has the complete picture and could therefore make a decision using all factors in combination, consistent with the FAA Modernization Act. Nonetheless, factors found to be cost-prohibitive are likely to result in the airport being denied participation in the program. In May 2007, TSA awarded a contract to Catapult Consultants to conduct a cost and performance analysis of airports with private screeners versus airports with federal screeners. This analysis would be used to assist senior TSA leadership with strategic decisions regarding the degree to which TSA should leverage public/private partnerships in the area of screening services. According to the December 2007 report the contractor issued on its analysis, SPP airports performed at a level equal to or better than non-SPP airports for the four performance measures included in the analysis. Following this study, in February 2008, TSA issued a report on a study TSA conducted comparing the cost and performance of screening at SPP and non-SPP airports. 
The study compared performance measures at each of six SPP airports to the non-SPP airports in the same airport category and found that SPP airports generally performed consistently with non-SPP airports in their category for the performance measures included in its analysis. Since the inception of the SPP in 2004, 29 airports have applied for participation in the program; 25 airports have been approved, and as we noted earlier in this report, 16 airports are participating in the SPP as of October 2012. A detailed timeline and status of each airport application are provided in figure 2 and appendix II. Nine airports were approved but are not currently participating in the program because they (1) are in the process of having an SPP contractor procured; (2) were once part of the SPP but ceased screening services when the commercial airline service placing the airport under TSA regulation was discontinued; or (3) never transitioned to the SPP because such commercial airline service was discontinued before private screening services began. Specifically, 6 airports—West Yellowstone Airport, Montana; Orlando Sanford International Airport, Florida; Glacier Park International Airport, Montana; Sacramento International Airport, California; Bert Mooney Airport, Montana; and Bozeman Yellowstone International Airport, Montana—have been approved but are not yet currently participating in the SPP pending TSA's selection of the screening contractor to provide services at each airport. Two airports—the East 34th Street Heliport, New York, and Gallup Municipal Airport, New Mexico—were participating in the SPP, but according to TSA officials, the air carriers servicing these airports discontinued service after the contract was awarded, and thus these airports no longer required TSA screening services. 
Additionally, Florida Keys Marathon Airport, Florida, was approved for participation in the SPP, but the air carrier servicing the airport discontinued services prior to the start of the screening contract, and accordingly screening services were no longer required. TSA denied applications from 6 airports submitted from March 2009 through December 2011. Five of these applications were submitted to TSA before the Administrator announced in January 2011 that the agency would not expand the SPP beyond the then current 16 airports “unless a clear and substantial advantage to do so emerges in the future.” The sixth application was submitted for consideration approximately 1 week after the Administrator's announcement. Prior to the enactment of the FAA Modernization Act in February 2012, 1 of the 6 airports whose application TSA denied re-applied under TSA's “clear and substantial advantage” standard and was approved. Following enactment of the FAA Modernization Act, which provided that TSA shall approve an application if approval would not compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport, TSA approved the applications of 3 other airports that reapplied. Two of the 6 airports that had been denied never reapplied for participation in the SPP (see fig. 2 for additional details). Figure 3 and appendix III show the locations of the 16 airports currently participating in the SPP as well as the 6 airports that TSA recently approved for participation. 
As figure 3 shows, 10 of the 16 airports currently participating in the SPP are smaller, category III and IV airports, with 9 of those located in the western region of the United States. In recent years, the number of airports applying for participation in the SPP has generally declined. Specifically, from 2004 through 2008, 21 airports applied to the SPP, including the 5 airports that participated in TSA's SPP pilot program. Since 2009, TSA has received SPP applications from 8 airports. Airport operators we surveyed and interviewed, as well as aviation industry stakeholders (i.e., aviation associations) and TSA officials we interviewed, most commonly cited customer service and staffing flexibility as advantages of participating in the SPP, but also expressed concerns about the SPP transition process and satisfaction with existing TSA screening services as potential disadvantages of participating in the program. We surveyed 28 airport operators who had applied to the SPP from its inception in 2004 through April 2012. Twenty-six operators responded. Because all 26 survey respondents were airport operators who have applied to the SPP, these airport operators may be more likely to present positive views of the SPP, or of what they perceived the program to offer. In addition, perspectives may also be influenced by whether or not the operators were approved for participation in the SPP at the time the survey was conducted. We also interviewed 6 airport operators that were not included in our survey. Five of these airport operators have not applied for participation in the SPP, and 1 airport operator had applied for participation after our survey was conducted and therefore was not included as part of our survey. 
Our 2012 survey and interviews of airport operators include the following highlights: The advantages most frequently identified by the airport operators that had applied to the SPP and responded to our survey and those we interviewed (including those that had not applied to the SPP) were related to providing better customer service and obtaining flexibility in assigning staff. The airport associations most commonly cited obtaining flexibility in assigning staff as an advantage. Because TSA generally remains neutral regarding the SPP, the views expressed by TSA officials are attributed to the individual FSDs we interviewed and do not reflect the views of the agency. Customer service. Sixteen airport operators we surveyed and interviewed reported customer service as an advantage—15 had applied to the SPP and 1 had not. Specifically, 14 of 26 airport operators responding to the survey indicated this was a realized or potential advantage to a great or very great extent. In addition, 2 of the 6 airport operators we interviewed, 1 of which applied to the SPP, stated that the level of customer service provided by security screeners is particularly important for smaller community-based airports, which constitute the majority of the airports participating in the SPP, because passengers who have negative encounters with the screening process generally associate their experiences with the specific airport. Thus, airport officials stated that this might increase the likelihood that the passengers involved will seek alternative modes of transportation or different airports for future travel. Representatives from the three airport associations we interviewed did not identify customer service as an advantage of the SPP. TSA officials stated that federal screeners can and do provide similar levels of customer service and that most commercial airports are content to have a TSA workforce at their airports. 
TSA also stated that customer service is an important aspect of its work, and that the agency is taking steps to improve customer service in a way that does not jeopardize the agency's core mission, which is to ensure the security of the traveling public. Specifically, TSA officials said that they have enhanced their performance management processes to better gauge customer service, such as tracking negative contacts received at airports. Staffing flexibility. Fifteen airport operators we surveyed and interviewed—14 had applied to the SPP and 1 had not—and representatives from two aviation industry associations reported that private screening contractors are generally more responsive and flexible than TSA in adjusting staffing in response to fluctuations in passenger volume at the airport. Specifically, 13 of 26 airport operators responding to our survey cited flexibility in assigning staff as a realized or potential advantage, to a great or very great extent, of participating in the SPP. Two of the 6 airport operators we interviewed, 1 of which had applied to the SPP, also cited staffing flexibility as an advantage. For example, an airport operator highlighted challenges the airport has faced in adjusting the number of screening staff to accommodate the seasonal changes in passenger volume at his airport. Specifically, the airport operator, a current SPP participant, commented that unlike TSA screeners, private screening contractors are able to staff screeners in split shifts—a work period divided into two or more periods of time, such as morning and evening, with a break of several hours between—thereby enabling them to adjust to the airport's flight schedule and changes in passenger volume. TSA officials disagreed with this view and stated that TSA provides FSDs with discretion to utilize federal screeners in split shifts during the course of the workday, provided that such discretion is exercised as the direct result of operational need. 
Furthermore, TSA officials stated that all category IV and many category III airports use split shifts. Four of six FSDs we interviewed cited reduced involvement in human resource management as an advantage to the federal government of participating in the SPP. For example, one FSD said that because TSA oversees the screening operations of SPP airports and FSDs are not involved with deploying and managing screening staff, FSDs are better able to focus on their security oversight functions, including ensuring that proper standard operating procedures are being followed. Cost savings. During our follow-up interviews with survey respondents, 4 airport operators said that participating in the SPP could help alleviate TSA resource constraints and result in cost savings to the federal government. They explained that some airports currently participating in or applying to the SPP are located in rural or high-cost communities where the federal government has difficulty hiring screeners and must use federal personnel deployed for temporary assignments, which results in increased costs. An FSD we interviewed at an SPP airport located in a small, high-cost community agreed that the salary offered by TSA made it difficult to fill screening positions at the airport, stating that prior to the airport's transition to the SPP, TSA had difficulty hiring screeners from the local area and, as a result, had to use screeners from its National Deployment Force (NDF), a deployable federal screening workforce, because of the high cost of living in the area. To maintain the requisite level of screening services at airports where it is hard to recruit, TSA often uses screeners from its NDF, which TSA stated can be more expensive than SPP screeners because NDF screeners are compensated on a per diem basis when deployed and incur other costs such as temporary housing expenses.
Airport operators generally cited few realized or potential disadvantages of participating in the SPP. Six airport operators we surveyed and interviewed cited the discontinuation of federal screening services as a potential disadvantage of participating in the SPP. Specifically, 4 of 25 survey respondents who had applied to the SPP cited the discontinuation of federal screening services as a potential disadvantage of participating in the program. In addition, 2 airport operators who have not applied to the SPP expressed concerns about the potential disruption associated with the transition from TSA screeners to private screeners at their airports, and the associated risk if the process does not proceed as smoothly as intended. One of these airport operators stated that concerns about the transition process—going from federal screeners to private screeners—are the primary reason the airport has not submitted an application. Further, this airport operator also cited concerns about maintaining screener morale, and hence security, as a major reason for the airport's decision to not apply to the SPP. Officials from the aviation industry associations we interviewed did not cite any realized or potential disadvantages. As noted earlier, TSA generally remains neutral regarding the SPP, and accordingly did not cite disadvantages of participating in the SPP. Additionally, airport operators from 3 airports that have not applied to the SPP expressed no interest in the program and stated that they are generally satisfied with the level of screening service provided by TSA. Similarly, an Airports Council International-North America (ACI-NA) March 2007 study found that 71 percent of 31 survey respondents were not interested in the SPP, citing satisfaction with TSA screening services, among other things.
When asked, representatives from all three aviation industry associations we interviewed either expressed no opinion on the SPP or cited no disadvantages to participating in the SPP. Two of these industry representatives added that the majority of the airports they represent are generally satisfied with the screening services provided by TSA. TSA has developed some resources to assist applicants; however, it has not provided guidance on its application and approval process to assist airports with applying to the program. As the application process was originally implemented, TSA required that an airport operator interested in applying to the program submit an application stating its intention to opt out of federal screening as well as its reason(s) for wanting to do so. However, in 2011, TSA revised its SPP application to reflect the "clear and substantial advantage" standard announced by the Administrator in January 2011. Specifically, TSA requested that the applicant explain how private screening at the airport would provide a clear and substantial advantage to TSA's security operations. At the time, TSA did not provide written guidance to airports to assist them in understanding what would constitute a "clear and substantial advantage to TSA security operations" or TSA's basis for determining whether an airport had established that opting out would present such an advantage. TSA officials told us that they did not issue guidance in conjunction with the new standard because the agency desired to maintain a neutral position on the SPP and did not want to influence an airport's decision to participate in the program. In the absence of such guidance, SPP officials told us that they were available to assist airports that requested help or information on completing their applications.
In March 2012, TSA again revised the SPP application in accordance with provisions of the FAA Modernization Act enacted in February 2012. Among other things, the revised application no longer includes the “clear and substantial advantage” question, but instead includes questions that request applicants to discuss how participating in the SPP would not compromise security at the airport and to identify potential areas where cost savings or efficiencies may be realized. Additionally, in accordance with the FAA Modernization Act, applicants must recommend a contractor that would best serve the security screening and passenger needs of the airport. TSA officials told us that the agency offers potential applicants numerous points of contact and methods with which the applicants can discuss the program before applying to participate. Specifically, applicants can discuss the program with their FSD, the SPP program manager, or their recommended screening contractor. Further, according to TSA officials, once an airport operator submits an application, TSA assigns a program official as a point of contact for the application, and works with the applicant to ensure the application is complete and to keep the applicant informed. TSA also provides general instructions for filling out the SPP application as well as responses to frequently asked questions (FAQ). However, TSA has not issued guidance to assist airports with completing the new application and has not explained to airports how it will evaluate applications given the changes brought about by the new law. Neither the current application instructions nor the FAQs address TSA’s SPP application evaluation process or its basis for determining whether an airport’s entry into SPP would compromise security or affect cost-efficiency and effectiveness. We interviewed 4 of the 5 airport operators that applied to the SPP since TSA revised its application in the wake of the FAA Modernization Act. 
Three of the 4 told us that they struggled to answer the application questions related to the cost-efficiency of converting to the SPP because they did not have data on federal screening costs, while the fourth airport operator said that she did not need additional information or guidance to respond to the question. One of the 4 airport operators stated that he needed the cost information to help demonstrate that his airport's participation in the SPP would not detrimentally affect the cost-efficiency of the screening of passengers or property at the airport and that he believed not presenting this information would be detrimental to his airport's application. However, TSA officials said that the cost information required to answer the questions is basic cost information that airports should already maintain and that airports do not need to provide this information to TSA because, as part of the application evaluation process, TSA conducts a more detailed cost analysis using historical cost data from SPP and non-SPP airports. TSA officials added that the SPP application and the cost information requested only serve to alert TSA to things it may not already be aware of about the airport. The absence of cost and other information in an individual airport's application, TSA officials noted, would not materially affect the TSA Administrator's decision on an SPP application. Three of the 4 airport operators we interviewed, and whose applications TSA subsequently approved after enactment of the FAA Modernization Act, said that additional guidance would have been helpful in completing the application and determining how TSA evaluates the applications. A representative from 1 of the 3 airports stated that while TSA officials have been more responsive and accessible since enactment of the FAA Modernization Act, the agency has not necessarily been helpful with the application process.
Moreover, all 4 airport operators we interviewed told us that TSA did not specifically assign a point of contact when they applied to the program. Rather, all 4 airport operators reported consulting the SPP PMO, their FSD, or their recommended contractor because they needed information on such issues as screening cost, the list of current SPP contractors, TSA screener staffing levels, and examples of additional information they should provide TSA, because they could not answer some of the application questions without this information. Specifically, 1 of the 4 airport operators reported contacting the FSD to request assistance with completing the application, while 2 of the 4 said they did not because FSDs generally are not knowledgeable about the program or can provide only general, as opposed to detailed, information about the application process. Instead of contacting their FSDs, these 2 airport operators told us that they contacted the SPP PMO and stated that the office was helpful in providing general information, such as a list of current SPP contractors, but not screening cost or other specific application information that would help the airports demonstrate whether the use of private screeners would compromise security or detrimentally affect the cost-efficiency or effectiveness of the screening of passengers or property at the airport. Another airport operator who reported contacting the SPP PMO stated that she learned about TSA's SPP selection criteria and processes in the course of her discussions with one of the SPP managers with whom she had developed a working relationship over the years, and added that had she not contacted this particular manager, she would not have obtained this information because TSA does not publish it for other airports that may be interested.
Three of the 4 airport operators who told us they sought information to complete their application from their recommended contractor, as advised by TSA, stated that the contractors told them they did not have the necessary cost information to assist the airports with responding to the application questions related to the cost-efficiency of converting to the SPP. Following enactment of the FAA Modernization Act, TSA officials initially stated that application guidance is not needed because the "clear and substantial" basis for joining the SPP has been eliminated and responses to the two new application questions related to cost-efficiency and effectiveness are optional. However, the Assistant Administrator for the Office of Security Operations now agrees that providing additional high-level guidance on the kind of information TSA considers during the application review phase would be helpful to SPP applicants. TSA SPP officials also stated that they routinely talk about the SPP at industry briefings and that they have done a good job of explaining the new application to industry. However, as of September 2012, representatives of all three aviation industry associations we interviewed told us that TSA had not provided any information on the SPP to their associations since enactment of the FAA Modernization Act in February 2012. Additionally, representatives of two of the three aviation industry associations said that providing guidance or information on the criteria TSA uses to evaluate applications would be helpful to their members, while a representative from the third aviation association, which represents domestic and international airline carriers, said that its members would appreciate any basic information on the SPP. In interviews we conducted prior to the enactment of the FAA Modernization Act, these same aviation industry representatives told us that the absence of guidance from TSA was a barrier to applying to the program.
They added that most airports do not want to invest in preparing an application when they are unsure as to how it would be evaluated by TSA. TSA has approved all applications submitted since enactment of the FAA Modernization Act; however, it is hard to determine how many more airports, if any, would have applied to the program had TSA provided application guidance and information to improve the transparency of the SPP application process. In the absence of such application guidance and information, it will be difficult for airport officials to evaluate whether their airports are good candidates for the SPP or determine what criteria TSA uses to accept and approve airports' SPP applications. Further, airports may be missing opportunities to provide TSA with cost and other information that TSA would find useful in reviewing airport applications. According to Standards for Internal Control in the Federal Government, internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. Clear guidance for applying to the SPP could improve the transparency of the SPP application process and help ensure that the existing application process is implemented in a consistent and uniform manner. TSA improved its set of screener performance measures in 2012 by adding measures that address passenger satisfaction, thereby helping to ensure that the measures address all aspects of the agency's airport screening strategic goals and mission.
However, a mechanism to monitor private versus federal screener performance could help TSA to routinely ensure that the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports, and could help inform TSA managers when making decisions regarding the future of the SPP, such as whether to expand the program to more non-SPP airports. While we found differences in screener performance between SPP and non-SPP airports, those differences cannot be entirely attributed to the use of either private or federal screeners. We analyzed screener performance data for four measures and found that while there are differences in performance between SPP and non-SPP airports, those differences cannot be exclusively attributed to the use of either federal or private screeners. We selected these measures primarily based on our review of previous studies that compared screener performance of SPP and non-SPP airports as well as on our interviews with aviation security subject matter experts, including TSA's FSDs, SPP contractors, and airport and aviation industry stakeholders. We also selected performance measures for which TSA has, for the most part, consistently and systematically collected data from fiscal year 2009 through 2011. The four measures we used to compare screener performance at SPP and non-SPP airports are TIP detection rates, recertification pass rates, Aviation Security Assessment Program (ASAP) test results, and Presence, Advisement, Communication, and Execution (PACE) evaluation results (see table 1). For each of these four measures, we compared the performance of each of the 16 SPP airports with the average performance for each airport's category (X, I, II, III, or IV), as well as the national performance averages for all airports, for fiscal years 2009 through 2011.
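The comparison described above amounts to computing category and national averages for a measure and differencing each SPP airport against both. A minimal sketch in Python; the airport names, categories, and detection rates are invented for illustration, not GAO's sensitive performance data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (airport, category, is_spp, tip_detection_rate).
records = [
    ("AAA", "II", True,  0.81),
    ("BBB", "II", False, 0.78),
    ("CCC", "II", False, 0.84),
    ("DDD", "IV", True,  0.77),
]

# Average TIP detection rate per airport category, plus the national average.
by_category = defaultdict(list)
for airport, category, is_spp, rate in records:
    by_category[category].append(rate)
category_avg = {cat: mean(rates) for cat, rates in by_category.items()}
national_avg = mean(rate for *_, rate in records)

def compare_spp(recs):
    """For each SPP airport, the difference between its rate and its
    category average, and between its rate and the national average."""
    return {airport: (round(rate - category_avg[category], 3),
                      round(rate - national_avg, 3))
            for airport, category, is_spp, rate in recs if is_spp}

result = compare_spp(records)  # mapping: airport -> (vs. category, vs. national)
```

In practice the same aggregation would be repeated per measure and per fiscal year; as the report notes, the resulting differences still cannot be attributed solely to the screener workforce.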
While it is useful for TSA managers to compare an SPP airport's performance against its airport category for TIP detection rate and recertification pass rate in the PMRs, it is also important that the set of measures used to compare screener performance at SPP and non-SPP airports address a variety of agency priorities, such as passenger satisfaction. For more on the key attributes of successful performance measures, see appendix V. Further, neither the Scorecard nor the PMR provides information on performance in prior years or controls for variables that TSA officials explained to us are important when comparing private and federal screener performance, such as the type of X-ray machine used for TIP detection rates. Monitoring private screener performance in comparison with federal screener performance is consistent with the statutory requirement that TSA enter into a contract with a private screening company only if the Administrator determines and certifies to Congress that the level of screening services and protection provided at an airport under a contract will be equal to or greater than the level that would be provided at the airport by federal government personnel. Further, according to TSA guidance on the SPP, one of TSA's major goals for the SPP is that private screeners must perform at the same level as or better than federal screeners. A mechanism to monitor private versus federal screener performance would better position TSA to know whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. TSA officials stated that it is not TSA's goal to ensure that SPP airports continue to perform at levels equal to or greater than non-SPP airports, but to ensure that all airports operate at their optimal level, which they monitor using across-the-board mechanisms, such as the Scorecard.
However, monitoring private versus federal screener performance could also help TSA to identify positive or negative trends in SPP performance that could lead to improvements in the program and TSA's monitoring of SPP airports in general, and inform decision-making regarding potential future expansion of the SPP. TSA faces a daunting task in ensuring that a screening workforce is in place to consistently implement security protocols across the nation's commercial airports while facilitating passenger travel. Questions about the performance of private screeners compared with federal screeners, recently enacted statutory provisions, and changes to the program's application and approval process underscore the need for TSA to ensure that the program's application requirements are clearly defined and consistently applied so that aviation stakeholders have a full and fair opportunity to participate in the program. Thus, well-defined and clearly documented application guidelines that state (1) the criteria and process that TSA is using to assess airports' participation in the SPP, (2) how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to related application questions, and (3) specific examples of additional information airports should consider providing to TSA to help assess airports' suitability for the SPP could benefit TSA. Specifically, guidelines could help alleviate airports' uncertainty about the application process and better position TSA to determine whether to approve an airport's SPP application. It is also incumbent on TSA to be capable of determining, through regular monitoring and reporting, whether airports participating in the program are performing at a level that is equal to or greater than the level of security that would be provided by federal screeners at the airports.
Although not a prerequisite for approving an application for participation in the SPP, TSA must certify to Congress that the level of screening services and protection provided by a private screening contractor will be equal to or greater than the level that would be provided at the airport by federal government personnel before entering into a contract with a private screening company. While TSA regularly tracks screener performance at all airports and reevaluates the measures it uses to assess this performance, TSA has not conducted regular reviews comparing private and federal screener performance and does not have plans to do so. Regular comparison reviews would enable TSA to know whether the level of screening services provided by private screening contractors is equal to or greater than the level provided at non-SPP airports. These reviews could also assist TSA in identifying performance changes that could lead to improvements in the program and inform decision making regarding potential expansion of the SPP. To improve TSA's SPP application process and to inform decisions regarding the future of the SPP, we recommend that the Secretary of the Department of Homeland Security direct the Administrator of TSA to take the following two actions: develop guidance that clearly (1) states the criteria and process that TSA is using to assess whether participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport; (2) states how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to the related application questions; and (3) provides specific examples of additional information airports should consider providing to TSA to help assess an airport's suitability for the SPP; and develop a mechanism to regularly monitor private versus federal screener performance.
We requested comments on a draft of the sensitive version of this report from TSA. On November 7, 2012, DHS provided written comments, which are reprinted in appendix VI, and technical comments, which we incorporated as appropriate. DHS generally concurred with our two recommendations and described actions planned to address them. Specifically, DHS stated that TSA will provide as much information as is prudent on how the agency would evaluate whether an airport's participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport. Further, DHS stated that TSA will provide general categories of information in the SPP application guidance it plans to issue and will continually review the guidance to ensure that airports are comfortable with the SPP application process and understand how all the information provided will be used to evaluate their applications. TSA expects to post an overview of the SPP application process to the agency's website by November 30, 2012, that would specify details on the data it will use to assess applications and discuss its cost-estimating methodology and definition of cost-efficiency. We believe that these are beneficial steps that, once adopted, would address our recommendation and help address stakeholder concerns about the transparency of the SPP application process. DHS stated that starting in the first quarter of fiscal year 2013, TSA will produce semiannual reports that will include an evaluation of SPP airport performance against the performance of TSA airports as a whole, as well as performance against each SPP airport category. Additionally, DHS noted that TSA is in the initial planning phase of deploying an electronic data collection system to facilitate systematic collection and reporting of SPP data, as well as TSA oversight of SPP contractor activities.
Deployment of the electronic data collection system is targeted for the latter part of fiscal year 2013. Once implemented, these new reporting mechanisms will address our recommendation by facilitating TSA's efforts to assess private versus federal screener performance. We are sending copies of this report to the Secretary of Homeland Security, the TSA Administrator, the House Transportation and Infrastructure Committee, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4379 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. This appendix describes how we did our work to address (1) the status of Screening Partnership Program (SPP) applications, and airport operator, other stakeholder, and the Transportation Security Administration's (TSA) views on the advantages and disadvantages of participating in the SPP; (2) the extent to which TSA has provided guidance to govern the SPP application process; and (3) the extent to which TSA assesses and monitors the performance of private and federal screeners. To address all three of these objectives, we interviewed Federal Security Directors (FSD); airport operators; screeners; and, where applicable, SPP contractors at 10 airports. We selected the 10 airports by matching an SPP airport to a non-SPP airport in each of the five airport categories (X, I, II, III, and IV), based primarily on (1) annual passenger and baggage volumes, (2) screener staffing model full-time equivalent allocation, and (3) number of checkpoints and screening lanes.
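Matching each SPP airport to a comparable non-SPP airport on those three criteria can be thought of as a nearest-neighbor search. A minimal sketch with invented airport figures; the relative-difference distance, equal weighting of criteria, and field names are illustrative assumptions, not GAO's actual selection method:

```python
def nearest_match(spp_airport, candidates, keys):
    """Pick the non-SPP candidate closest to an SPP airport, comparing each
    criterion as a relative difference so large-scale fields (passenger
    counts) don't swamp small-scale fields (lane counts)."""
    def distance(a, b):
        return sum(abs(a[k] - b[k]) / max(a[k], b[k]) for k in keys)
    return min(candidates, key=lambda c: distance(spp_airport, c))

# Illustrative criteria and airports (figures are made up).
keys = ("passengers", "screener_ftes", "lanes")
spp = {"name": "SPP-1", "passengers": 1_200_000, "screener_ftes": 90, "lanes": 6}
non_spp = [
    {"name": "A", "passengers": 5_000_000, "screener_ftes": 300, "lanes": 18},
    {"name": "B", "passengers": 1_100_000, "screener_ftes": 85,  "lanes": 6},
    {"name": "C", "passengers": 400_000,   "screener_ftes": 30,  "lanes": 2},
]
match = nearest_match(spp, non_spp, keys)
```

Restricting the candidate pool to airports in the same category (X, I, II, III, or IV) before matching, as the report describes, would simply mean filtering `non_spp` first.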
Additionally, on the basis of available travel resources, we visited 7 of the 10 airports to observe airport screening operations, including any unique challenges faced by these airports. We surveyed the 28 airport operators who had applied to the SPP from its inception through April 2012 to obtain their perspectives on the SPP application process, the advantages and disadvantages of participating in private or federal screening, and the performance measures TSA uses to assess screeners. The 28 airports whose operators we surveyed include 16 airports that were participating in the SPP at the time of the survey, 2 airports that withdrew their applications before TSA made a decision, 3 airports that were approved but never transitioned to the SPP because commercial airline service was discontinued at the airport, and 7 airports that initially applied from March 2009 through April 2012 (when we implemented our survey). A 29th airport, Bozeman Yellowstone International Airport, applied to the SPP for the first time in June 2012 and therefore was not included in our survey. Two airport operators did not respond to our survey. One was an airport that had withdrawn its application to the SPP before a decision was made, and the other was an airport whose application was denied in January 2011 while the "clear and substantial advantage" application standard was in effect. We conducted two expert reviews of the survey with major aviation associations and three survey pretests with airport operators. In addition to the 28 airport operators in our survey, we also interviewed the airport operators of Bozeman Yellowstone International Airport and the 5 non-SPP airports we visited to obtain their perspectives on the potential advantages and disadvantages of participating in the SPP. For this study, our focus is on assessing airport screening performance as opposed to individual screener performance.
We assessed the aggregate of individual screener performance measures only to the extent that they reflect overall screening performance at airports. To determine the status of SPP applications, and airport operators', other stakeholders', and TSA's views on the advantages and disadvantages of participating in the SPP, we interviewed officials of TSA's SPP Program Management Office (PMO) and reviewed the 15 SPP applications that had been submitted since fiscal year 2009, as well as TSA's available decision memos on the applications. We also analyzed the results of our survey of SPP airport operators and operators of airports that have applied to the SPP. In addition, we conducted semistructured interviews with TSA, contractor, and airport officials during our airport site visits and interviewed aviation industry stakeholders to identify the advantages and disadvantages of using federal and nonfederal screeners. To determine the extent to which TSA has provided guidance to govern the SPP application process, we reviewed key statutes and policies to identify requirements related to the SPP. We also analyzed past and current SPP application forms and instructions, and interviewed TSA headquarters officials, to identify the requirements and process for applying to the SPP. As previously noted, we surveyed airport operators, including operators of all 16 SPP airports and the 6 airports whose applications TSA denied for not establishing that transitioning to the SPP would provide a "clear and substantial advantage to TSA security operations," to determine their perspectives on the SPP application process. Further, we interviewed airport officials at the 8 airports that have applied to the SPP since 2009, including the 6 airports that applied under TSA's "clear and substantial advantage" standard, to obtain their perspectives on the clarity of the SPP application process.
We also compared TSA’s application process and requirements against standards in Standards for Internal Control in the Federal Government which calls for an agency’s transactions and other significant events to be clearly documented and well defined. To determine the extent to which TSA assesses and monitors the performance of private and federal screeners, we reviewed TSA’s screener performance measurement documents, reports, and data systems. We also interviewed TSA headquarters officials knowledgeable about TSA’s performance management process to identify current screener performance measures. At the airports we visited, we observed screening operations to identify areas where screener performance could be assessed, and interviewed contractor, airport, and TSA officials to obtain their perspectives on the current set of performance measures. We reviewed TSA’s most recent set of performance measures in the Office of Security Operations Executive Scorecard as well as its previous set in the Management Objective Report to determine what, if any, improvements had been made. To do so, we evaluated the sets of measures against the nine key attributes of successful performance measures, which we developed in prior reports based on GAO’s prior efforts to examine agencies that were successful in implementing the performance measurement aspects of the Government Performance and Results Act (GPRA). the performance of federal and private screeners against standards in Standards for Internal Control in the Federal Government and best practices for performance management. GAO-03-143. nationally, from fiscal year 2009 through 2011. For our comparison, we focused on four performance measures: threat image projection (TIP) detection rates; recertification pass rates; aviation screening assessment program (ASAP) covert test results; and presence, advisement, communication, and execution (PACE) evaluation results. 
We selected these measures primarily based on our review of previous studies that compared screener performance at SPP and non-SPP airports as well as on our interviews with aviation security subject matter experts, including TSA Federal Security Directors (FSD), SPP contractors, and airport and aviation industry stakeholders. We also selected performance measures for which TSA has, for the most part, consistently and systematically collected data for our study years. For some of the measures we selected, such as PACE evaluations, data were not available for all 3 years or all airports; nonetheless, we selected these measures because they represent integral aspects of screener performance. We explain these circumstances further when we present the data. To ensure the reliability of the performance measures data, we (1) interviewed TSA officials who use and maintain the data; (2) checked the data for missing information, outliers, and obvious errors; and (3) reviewed documentation for the relevant data systems to ensure the data’s integrity. On the basis of the steps we took, we found the data reliable for the purpose of providing summary statistics of screener performance for the four performance measures we analyzed. However, as noted earlier in this report, there are many factors, some of which cannot be controlled for, that may account for differences in screener performance; therefore, the differences we found in screener performance at SPP and non-SPP airports may not be attributable entirely to the use of either federal or private screeners. As of October 2012, 29 airports had applied for participation in the SPP since the inception of the program in 2004 (see table 3). At that time, 16 airports were participating in the SPP and 6 airports had recently been approved for participation (see figure 4 and table 4). 
TSA collects data on several other performance measures, but, for various reasons, they cannot be used to compare private and federal screener performance for the purposes of our review. Below, we discuss four variables occasionally cited by the airport officials and aviation stakeholders we interviewed as possible measures for comparing federal and private screening and the reasons we did not use them to compare private and federal screener performance.

Wait times: A wait time is the total cycle time for a passenger to reach the advanced imaging technology (AIT) machine or walkthrough metal detector (whichever is available) from entering the queue. TSA officials at some airports collect these data by passing out a card to a passenger at the end of the line. We do not present passenger wait time data because we found that TSA’s policy for collecting wait times changed during the time period of our analyses and that these data were not collected in a consistent manner across all airports. Further, TSA officials noted that wait times are affected by a number of variables that TSA cannot control, such as airline flight schedules.

Passenger throughput: Passenger throughput is the number of passengers screened in each of the screening lanes per hour. These data are collected automatically by the screening machines. TSA officials stated that they review this measure to ensure that passengers are not being screened too quickly, which may mean that screeners are not being thorough, or are screened too slowly, which may mean that screeners could be more efficient. According to TSA officials, passenger throughput is affected by a number of factors that are unique to individual airports, including technology, capacity and configuration of the checkpoint, type of traveler, and various factors related to the flight schedules. 
While officials noted that there is a goal for how many passengers should be screened per hour, a rate below this goal is not necessarily indicative of a problem, but could be due to a reduced passenger volume, as is likely during nonpeak travel hours. For example, at one of the airports we visited, there are few flights scheduled for the morning and evening, at which point passenger throughput is very low, and several flights scheduled around lunchtime, at which point the passenger throughput is relatively high.

Human capital measures: We also considered reviewing human capital measures such as attrition, absenteeism, and injury rates. However, TSA’s Office of Human Capital does not collect these data for SPP airports because, according to these officials, maintaining information on human capital measures is the sole responsibility of the contractor. While the contractors collect and report this information to TSA, TSA does not validate the accuracy of the self-reported data. Further, TSA does not require that the contractors use the same human capital measures as TSA, and accordingly, differences may exist in how the metrics are defined and how the data are collected. Therefore, TSA cannot guarantee that a comparison of SPP and non-SPP airports on these human capital metrics would be an equal comparison. TSA officials also stated that they do not use human capital measures to compare SPP and non-SPP airports because these measures are affected by variables that are not within the control of TSA or the contractor. For example, some airports are located in areas that have a high cost of living, and as a result, it can be difficult to hire screeners because the screener salary may not be competitive there.

“Red team” covert tests: In addition to ASAP tests, TSA’s Office of Inspections also conducts covert tests, the results of which are also classified. 
These covert tests are commonly referred to as red team tests, and are designed to identify potential vulnerabilities in TSA’s screening operations, as opposed to testing screeners’ compliance with standard operating procedures. We have previously reported that an airport’s red team test results represent a snapshot in time and should not be considered a comprehensive measurement of any individual airport’s performance. Further, while GAO analyzed red team tests in these reports, we determined, for reasons we cannot report here due to the sensitive security nature of the information, that it would not be appropriate to analyze the tests for the purpose of comparing screener performance at SPP and non-SPP airports.

By adding measures to the Scorecard that addressed other non-security-related TSA priorities, TSA improved the set of performance measures it uses to assess screener performance. In the past, we have examined agencies that were successful in implementing the performance measurement aspects of the Government Performance and Results Act and concluded that these agencies exhibit certain key characteristics, which we characterized as the nine key attributes of successful performance measures. While the Management Objective Report (MOR) addressed eight of the key attributes, it did not address balance because the set of performance measures did not address a variety of agency priorities. Balance among a set of performance measures is important because it helps to ensure that performance measurement efforts are not overemphasizing one or two priorities at the expense of others, which may keep managers from understanding the effectiveness of their program in supporting the agency’s overall missions and goals. Specifically, the MOR did not contain measures related to passenger satisfaction, which, according to TSA’s Strategic Plan, is part of the agency’s mission. 
However, the Office of Security Operations (OSO) Executive Scorecard (Scorecard) includes passenger satisfaction measures, such as the number of negative and positive customer contacts made to the TSA Contact Center through e-mails or phone calls per 100,000 passengers screened through the airport, which were not previously included in the MOR. By adding measures related to passenger satisfaction to the Scorecard, TSA ensured balance in the set of performance measures the agency uses to assess screener performance and thereby ensured that its assessment of screening operation performance would be representative of a variety of program and agency goals (see table 5).

Appendix VII: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Glenn Davis, Assistant Director, and Edith Sohna, Analyst-in-Charge, managed this assignment. Erin O’Brien and Michelle Woods made significant contributions to the work. Carl Barden, Stuart Kaufman, Stanley Kostyla, and Minette Richardson assisted with design and methodology. Tom Lombardi provided legal support. Linda Miller provided assistance in report preparation, and Lydia Araya made contributions to the graphics presented in the report.
TSA maintains a federal workforce to screen passengers and baggage at the majority of the nation's commercial airports, but also oversees a workforce of private screeners at airports that participate in the SPP. The SPP allows commercial airports to use private screeners, provided that the level of screening matches or exceeds that of federal screeners. In recent years, TSA's SPP has evolved to incorporate changes in policy and federal law, prompting enhanced interest in measuring screener performance. GAO was asked to examine the (1) status of SPP applications and airport operators', aviation stakeholders', and TSA's reported advantages and disadvantages of participating in the SPP; (2) extent to which TSA has provided airports guidance to govern the SPP application process; and (3) extent to which TSA assesses and monitors the performance of private and federal screeners. GAO surveyed 28 airport operators that had applied to the SPP as of April 2012, and interviewed 5 airport operators who have not applied and 1 airport operator who applied to the SPP after GAO's survey. Although not generalizable, these interviews provided insights. GAO also analyzed screener performance data from fiscal years 2009-2011. This is a public version of a sensitive report that GAO issued in November 2012. Information that TSA deemed sensitive has been redacted. Since implementation of the Screening Partnership Program (SPP) in 2004, 29 airports have applied to the program, citing various advantages and relatively few disadvantages. Of the 25 approved, 16 are participating in the program, 6 are currently in the contractor procurement process, and the remainder withdrew from participation because their commercial airline services were discontinued. 
In 2011, the Transportation Security Administration (TSA) denied applications for 6 airports because, according to TSA officials, the airports did not demonstrate that participation in the program would "provide a clear and substantial advantage to TSA security operations." After enactment of the Federal Aviation Administration Modernization and Reform Act of 2012 (FAA Modernization Act) in February 2012, TSA revised its SPP application, removing the "clear and substantial advantage" question. Four of the 6 airports that had been denied in 2011 later reapplied and were approved. In GAO's survey and in interviews with airport operators (of SPP and non-SPP airports) and aviation stakeholders, improved customer service and increased staffing flexibilities were most commonly cited as advantages or potential advantages of the SPP. Individual Federal Security Directors we interviewed cited reduced involvement in human resource management as an advantage; however, TSA generally remains neutral regarding the SPP. Few disadvantages were cited; however, some airport operators cited satisfaction with federal screeners and concerns with potential disruption from the transition to private screening services. TSA has developed some resources to assist SPP applicants; however, it has not provided guidance for applying to the program. Consistent with the FAA Modernization Act, TSA's revised SPP application requested that applicants provide information to assist TSA in determining if their participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of screening passengers and property at their airport. TSA also developed responses to frequently asked questions and has expressed a willingness to assist airports that need it. However, TSA has not issued guidance to assist airports with completing applications or provided information on how the agency will assess them. 
Three of five airport operators who applied using the current application stated that additional guidance is needed to better understand how to respond to the new application questions. Developing guidance could better position airports to evaluate whether they are good candidates for the SPP. TSA recently improved its screener performance measures, but could benefit from monitoring private versus federal screener performance. In April 2012, TSA added measures to ensure that the set of measures it uses to assess screener performance at private and federal airports better addresses its airport screening strategic goals and mission. However, TSA does not monitor private screener performance separately from federal screener performance. Instead, TSA conducts efforts to monitor screener performance at individual SPP airports, but these efforts do not provide information on SPP performance as a whole or across years, which makes it difficult to identify program trends. A mechanism to consistently monitor SPP versus non-SPP performance would better position TSA to ensure that the level of screening services and protection provided at SPP airports continues to match or exceed the level provided at non-SPP airports, thereby ensuring that SPP airports are operating as intended. GAO recommends that the TSA Administrator develop guidance for SPP applicants and a mechanism to monitor private versus federal screener performance. TSA concurred with the recommendations.
DOD’s Real Property Management Program is governed by statute and DOD regulations, directives, and instructions that establish real property accountability and financial reporting requirements. These laws, regulations, directives, and instructions require DOD and the military departments to maintain a number of data elements about their facilities to help ensure efficient property management which, among other things, could help identify potential facility consolidation opportunities. DOD Directive 4165.06 assigns real property management responsibilities to the Under Secretary of Defense for Acquisition, Technology and Logistics and the Secretaries of the military departments. Specifically, the directive assigns overall responsibility and oversight of DOD real property to the Under Secretary of Defense for Acquisition, Technology and Logistics, but assigns specific responsibilities for real property management to the Secretaries of the three military departments, including implementing policies and programs to acquire, manage, and dispose of real property. Accordingly, each of the military departments has developed its own procedures and guidance for managing its infrastructure. Some of the key guidance used by the military departments for managing real property includes Army Regulation 405-70; Naval Facilities Engineering Command P-78; and Air Force Policy Directive 32-10. Military department guidance requires, among other things, that real property records be accurate and be managed efficiently and economically. It also requires the military departments to maintain a complete and accurate real property inventory with up-to-date information, to annually certify that the real property inventory has been reconciled, and to ensure that all real property holdings under the military departments’ control are being used to the maximum extent possible consistent with both peacetime and mobilization requirements. 
In managing the real property under their control, the Secretaries of the military departments are responsible for implementing real property policies and programs to, among other things, hold or make plans to obtain the land and facilities they need for their own missions and for other DOD components’ missions that are supported by the military departments’ real property. Additionally, the military departments are required to (1) budget for and financially manage their real property so as to meet their own requirements; (2) establish and maintain an accurate inventory to account for their land and facilities; and (3) maintain a program monitoring the use of real property to ensure that all holdings under their control are being used to the maximum extent possible consistent with both peacetime and mobilization requirements. Generally, the military departments rely on the installations to manage and monitor the utilization of facilities. According to OSD guidance, installations are required to conduct inventories for each real property asset every 5 years except for those real property assets designated as historic, which are to be reviewed and physically inventoried every 3 years. According to DOD Instruction 4165.70, the military departments’ real property administrators are accountable for maintaining a current inventory count of the military departments’ facilities and up-to-date information regarding, among other things, the status, condition, utilization, present value, and remaining useful life of each real property asset. Inventory counts and associated information should be current as of the last day of each fiscal year. In addition, DOD Instruction 4165.70 requires the DOD components to periodically review their real property holdings, both land and facilities, to identify unneeded and underutilized property. 
Underutilized property represents assets that are needed to meet current or projected defense requirements, but are not currently utilized to the maximum extent possible. Such assets can be considered for temporary use by other DOD entities, other federal agencies, state and local governments, or private entities, a practice also referred to as outgranting. DOD guidance establishes the types of agreements that are used to document the support that military installations provide to their tenants. (See 40 U.S.C. § 102(3) and Department of Defense Directive 4165.06, Real Property.) We previously recommended that DOD account for external factors that may affect future disposal efforts. DOD concurred with this recommendation and stated that it would work with the military departments to continue to develop and implement the most effective and efficient methods to eliminate excess facilities and capacity, but did not provide any details or specific time frames for these efforts.

GSA has key leadership responsibilities related to real property management for the federal government. First, GSA is authorized by law to acquire, manage, utilize, and dispose of real property for most federal agencies, a function that is commonly referred to as the landlord role. This function is performed by GSA’s Public Buildings Service; GSA has an inventory of about 9,000 government-owned or government-leased facilities. GSA is responsible for managing the life cycle of federally owned assets, including eventually disposing of such properties and entering into, renewing, and terminating contracts for leased properties. Second, in a government-wide policy role, GSA sets real property management policy for the federal government as a whole. GSA’s Office of Government-wide Policy is tasked, among other things, to identify, evaluate, and promote best practices to improve efficiency of management processes. 
In this policy role, GSA also supports the Federal Real Property Council by providing oversight guidance, publishing performance measures, and maintaining the Federal Real Property Profile (FRPP) database. Additionally, the Freeze the Footprint policy assigns GSA leadership responsibilities, directing GSA to consult with other agencies on promoting full implementation of the policy, including how to use technology and space management to consolidate, increase occupancy rates in facilities, and eliminate lease arrangements that are not cost or space effective. DOD and military department guidance identify the real estate instruments used to issue outgrants, and—depending on the type of non-DOD tenant and type of facility occupied—the appropriate instances in which to use each real estate instrument. The military installations can use a variety of real estate instruments to issue outgrants.

Leases grant a nonfederal entity exclusive possession of real property for a specified term in return for rent or other consideration. For example, an installation may grant a lease for a credit union to build a branch office.

Enhanced Use Leases (EUL) refer to more complex leases into which the military departments may enter. EULs generally provide for in-kind consideration, and some EULs involve complex agreements and long terms. For example, an EUL might provide for a 50-year lease of military land to a private developer that would be expected to construct office or other commercial buildings on the land and then rent the facilities to private-sector tenants for profit. Consideration refers to cash or in-kind payment by the lessee in exchange for the lease. In the context of DOD’s general leasing authority, payment in kind may take the form of maintenance, protection, alteration, improvement, or restoration of property or facilities, among other things. 
Licenses grant any entity the use of space at an installation for a specific purpose generally in return for rent or other in-kind consideration. For example, an installation may grant a license to a YMCA program for carrying out activities for youths.

Permits are licenses granted to non-DOD federal agencies generally in return for reimbursement of direct and indirect costs, as required by DOD guidance. Examples of direct and indirect reimbursement for costs include utilities, maintenance, and other services.

Easements grant any entity a right to use or pass over parcels of land in specific ways; for example, to install and run utility lines across an installation, or to build roads, streets, or railroad tracks.

Officials at all seven of the installations that we visited reported selecting the appropriate real estate instrument based on the type of non-DOD entity occupying space at the installation, the type of facility, and the proposed use of the asset. The type of entity can include federal agencies other than DOD, state and local governments, and nongovernmental and private organizations, while the type of facility can include buildings, structures, and linear structures. Table 1 below illustrates the relationship that exists among the type of non-DOD entity, the type of real estate instrument, and the type of real property asset. All seven of the installations we visited had established outgrants with at least one non-DOD federal agency as well as with other DOD entities, state and local governments, and private organizations to varying degrees. For example, these installations had established leases with public school districts, credit unions, and nonprofit organizations and had easements with local utility companies and state transportation agencies. None of the installations we visited had any EULs in place with nonfederal entities. 
DOD and military department guidance also outline several types of support agreements that installations can use to document specific provisions of their agreements with tenant organizations. The support agreements used at the installations that we visited include the following:

DD Form 1144: This form is used in instances where there is a need to document recurring reimbursable support that an installation provides to a federal agency, such as utilities, refuse disposal, and other services.

Memorandums of Understanding: These document areas of general understanding that do not involve reimbursement, such as expiration dates and procedures to mediate disputes.

Memorandums of Agreement: These document specific terms and responsibilities for a single reimbursable purchase, nonrecurring reimbursable support, or nonreimbursable support, and include financial provisions, such as billing and payment terms.

While DOD and military service guidance provide the tools for installations to issue several types of outgrants, officials must first determine the viability and desirability of bringing a tenant onto the base. Prior to granting the use of space to a non-DOD entity, officials at the installations we visited reported considering several factors. These factors generally fit into three categories: (1) general factors, (2) mission-related factors, and (3) local factors. General factors include considerations related to the availability of space, mission-related factors take into account the effect that a proposed tenant would have on the ability of the installation to perform its mission, and local factors include unique circumstances that exist on a particular installation. The factors discussed below represent the considerations identified by officials at the seven installations that we visited, but are not an exhaustive list of all the possible factors that an installation could consider in granting the use of space to a non-DOD entity. 
One of the general factors that officials at all seven installations we visited reported considering is whether they have space available that is suitable for the tenant. In making this determination, installation officials considered whether the installation had the amount and type of space available to support the proposed activity that the tenant would be bringing onto the installation. If suitable space is identified, a second factor that officials at all seven installations reported considering was whether the installation had competing interests for real property assets that are available. Generally, installations are required to prioritize the order in which non-DOD entities are granted space. DOD Instruction 4165.70 provides the priorities for considering requests from DOD or non-DOD entities to use unutilized or underutilized space. According to the instruction, an installation’s first priority is DOD entities. Assuming no DOD organizations have a need, the next priority for outgrants is federal agencies whose mission on the installation is closely associated with the installation’s national defense mission. Third, installations should provide space to other federal agencies above local government or private entities. Fourth, installations must prioritize nonfederal government entities, such as state and municipal agencies, over private organizations. Finally, in the event that there are no competing interests, installations may grant space to private organizations. One of the mission-related factors officials at all seven of the installations we visited reported considering is whether the installation needs to allow unutilized or underutilized space to remain vacant in order to meet future DOD needs in support of its mission. 
Installation officials estimated their facility needs to address anticipated changes in DOD’s force structure or mission such as needing more facilities to move or house service members and supporting civilian employees in the event of a new contingency, including the need to mobilize reserves. In this instance, granting space to a tenant may preclude the installation from accommodating fluctuations in its force. A related factor that officials at six of the seven installations we visited reported considering is whether the requested space conforms to the Installation Master Plan, which contains the installation’s planned layout of its assets to support the mission. Officials stated that any space that is granted to non-DOD entities cannot be used for a purpose that conflicts with the Master Plan’s layout of the installation’s infrastructure. For example, installations will not grant space to a tenant that requests industrial space in an area that the Master Plan has designated for residential use. Another mission-related factor officials at five of the seven installations we visited reported considering is whether the tenant’s presence will negatively affect the installation’s required level of security. Installations have different security measures with varying degrees of stringency, in part to safeguard the integrity of the mission. For example, Kirtland Air Force Base, New Mexico, controls civilian access to its premises in part to safeguard sensitive material and information housed within its premises, including some work that is carried out by the Department of Energy. In this case, officials would have to consider whether having a non-DOD tenant would increase the number of civilians on the base, which could in turn create additional vulnerabilities that would not be mitigated through existing security measures. 
Another mission-related factor that officials at all of the installations we visited reported considering when bringing additional tenants onto the base is the effect on the installation’s infrastructure. Specifically, officials said they considered whether the installation’s existing infrastructure, such as the electrical distribution system, sewage lines, water pipes, and roads, can adequately accommodate additional tenants. For example, officials with whom we spoke at Marine Corps Base Quantico, Virginia, explained that the installation’s existing roads could not accommodate the increase in traffic volume that resulted from an increase in personnel inside the Federal Bureau of Investigation compound. To mitigate this problem, Marine Corps officials worked with the bureau and the Department of Justice to secure funding for the construction of additional roads to accommodate the added traffic on the installation. Officials we spoke with at three of the seven installations we visited mentioned that local topography can be a factor that is considered when evaluating whether to grant space to a non-DOD tenant. For example, according to officials at Joint Base Elmendorf-Richardson, Alaska, partly because of the presence of mountains on the boundary of the installation and its proximity to a significant amount of marshlands—and environmental regulations related to these—the installation has limited opportunity to expand, which limits its ability to bring entities onto the base. Officials at all seven of the installations we visited stated that the effect that tenants may have on the local environment must be considered. For example, officials at Naval Base Coronado, California, stated that there are a large number of endangered species present on the installation, which requires the completion of an environmental assessment prior to authorizing additional tenants coming onto the installation. 
Finally, some officials also mentioned that there are local agreements that are considered. For example, Kirtland Air Force Base must consider the local effect that existing regional and federal agreements with Native American groups may have on the installation’s ability to grant space to non-DOD tenants.

Several limitations can affect a military installation’s ability to bring non-DOD tenants onto an installation. First, to successfully bring a potential tenant onto a base, the installation must have available space that is suitable for the tenant’s needs. Officials at all seven of the military installations we visited cited limitations in accommodating space requests from potential tenants due to a lack of vacant space that aligns with the tenant’s request, such as the amount of space or type of space needed, or vacant space that is not in suitable condition. Specifically, officials at the seven installations we visited reported that they were either short on suitable space or that the vacant space they did have was in poor condition, or both. Officials at one installation said that when space is not in good condition, the need for renovations may limit its desirability for potential tenants. A second limitation that can affect the ability of an installation to bring non-DOD tenants onto the installation is that the process is reactive in nature. Specifically, officials from OSD and the services stated that the process of providing space to non-DOD federal agencies generally starts when potential tenants approach the installations to request space and is usually not initiated by the services or installations in an effort to find tenants. Officials at six of the seven installations we visited stated that they did not actively pursue opportunities to bring non-DOD federal agencies onto the installation, but reacted to space requests initiated by the potential tenants. 
At one installation—Fort Bliss, Texas—officials stated that previous installation commanders pursued potential tenants with compatible missions using informal networking and meetings. According to the officials, this approach is not currently needed because new missions assigned to Fort Bliss have increased use of space at the installation. Moreover, installation officials reported a lack of non-DOD federal agency requests for space. While all the installations we visited had non-DOD federal tenants as of March 2015, officials at four of the seven installations stated that they receive few new requests for space from non-DOD federal agencies. In some cases, this may stem from a limited demand for space in particular areas. For example, at Eielson Air Force Base outside of Fairbanks, Alaska, officials reported that the base received few space requests because there are few non-DOD federal agencies in the local area. Also, the base is located approximately 20 miles from Fairbanks, which installation officials said may not be desirable for potential tenants. In other cases, this may be the result of a lack of information sharing among agencies that may have a need for space. For example, none of the installations we visited had routinely shared information with other federal agencies or GSA concerning available space at the installation; in particular, none had contacted or been contacted by GSA, which has a key role in acquiring real property for the federal government and would have knowledge of the space needs of multiple federal agencies regionally or locally. We discuss this issue in greater detail later in this report. Finally, officials at each military installation we visited also reported that limitations specific to their location could affect their ability to bring non-DOD tenants onto the installation. 
For example, Joint Base Elmendorf-Richardson officials explained that because of certain agreements that affect the rights to land on the installation, the installation must exercise care when creating an outgrant to ensure that the outgrant agreement does not conflict with the preexisting agreements. According to officials at Kirtland Air Force Base, New Mexico, when Kirtland was expanded in 1971 to incorporate two nearby installations, the new boundaries of the installation encompassed land that remains under the control of other federal entities. Consequently, Kirtland does not have the unilateral authority to authorize the use of these lands or the facilities located on them. While there are limitations to bringing tenants onto military installations, according to installation officials, both the installation and tenant agency can benefit when a match can be made between an installation’s available space and the tenant agency’s needs. Specifically, an installation can receive benefits in the form of services provided by the tenant agency. For example, officials at Kirtland Air Force Base, New Mexico, said that the Federal Aviation Administration provides air traffic control services to the base, and officials at Camp Pendleton, California, said the U.S. Coast Guard presence provides offshore security to the installation. In addition, installations can receive financial benefits from having non-DOD federal agency tenants on the installation by avoiding utility and maintenance costs for tenant-occupied facilities that would have otherwise been incurred. Officials at six of the seven installations we visited noted that the reimbursement of direct and indirect costs for these facilities can provide a financial benefit to the installations. Non-DOD federal agencies can also benefit from using space on military installations. 
For example, non-DOD federal agencies could receive a financial benefit from being located on a military installation due to differences in the costs charged by DOD when compared with the costs of commercial leases. Specifically, a DOD instruction allows military installations to collect reimbursements from non-DOD federal agencies for direct and indirect costs such as utilities, maintenance, and services provided, but generally does not allow installations to collect additional rent beyond cost recovery. According to installation officials at all seven installations we visited, the installations did not collect more than the reimbursements for direct and indirect costs and did not charge any additional rent beyond cost recovery, which represented a savings to the tenant agency. In addition, there are occasions where the non-DOD federal tenant receives nonfinancial benefits from being located on a military installation. For example, the Department of Energy receives the benefit of installation security for its facilities located on Kirtland Air Force Base, New Mexico, which represents potential cost avoidance for the department. Finally, both the installation and the agency can benefit from having the non-DOD federal agency on the military installation to accomplish a shared mission. For example, the Coast Guard recently became a tenant at Joint Base Elmendorf-Richardson, enabling both the installation and the Coast Guard to better accomplish their search and rescue mission. Specifically, the installation is responsible for the air portion of the mission and the Coast Guard is responsible for the sea portion of the mission. Being located on the same installation enables them to coordinate training in preparation to execute the search and rescue mission. 
Despite the benefits to DOD and non-DOD federal agencies, routine information sharing does not occur between DOD and GSA concerning opportunities to move non-DOD federal agencies onto military installations to make better use of unutilized and underutilized facilities, although GSA may have information on agencies near an installation needing space. Government-wide efforts continue to focus on the need to better utilize existing real property assets in order to promote efficiency and leverage government resources, which can be facilitated by coordination between federal agencies. The 2015 National Strategy for Real Property states that execution of opportunities to improve space utilization is one way in which the federal government can improve its management and use of federal assets to maximize the use of scarce budgetary resources. The strategy includes a focus on reducing and promoting more efficient use of the federal office and warehouse footprints—property categories in which DOD controls approximately 35 percent and 48 percent of the federal space, respectively. One way agencies can become better stewards of government resources is through enhancing and sustaining collaboration and coordination, which can be accomplished through various practices, including operating across agency boundaries through compatible policies, procedures, and frequent communication. Frequent communication would encourage the sharing of information that could be used to better utilize facilities on military installations. For example, in July 2012 we concluded that coordinated efforts at the local and regional level could enhance information sharing and facilitate increased utilization of federal real property, which could in turn result in cost savings or avoidance through the reduction of leased space. 
According to GSA, as part of its role of acquiring, managing, and utilizing federal real property, it provides workspace to federal agencies at the best value for the American taxpayer by leveraging limited government resources and proactively working with agencies to maximize use of space. GSA works with non-DOD federal agencies to help them seek and obtain space. Non-DOD federal agency clients can begin this process by calling a regional GSA office and providing information on their program and mission requirements, such as the required geographic area, estimated total square footage needed, and how long the space is needed, among other things. GSA will then review that information, work with the agency to clarify and refine the requirements as necessary, and search within the defined geographic area for suitable federally controlled space—either owned or leased. According to GSA, placing a federal agency in owned space is generally a better long-term solution and provides cost savings over time. According to GSA officials, the search for suitable federally controlled space includes a check of GSA-owned and GSA-leased real property. If there is no suitable GSA space available, GSA will then seek space in United States Postal Service facilities, per a memorandum of agreement between the two agencies and the Federal Management Regulation, before helping its clients to acquire space through a commercial lease. Even though DOD holds over 60 percent of all federal real property and GSA may have information on agencies near an installation needing space, according to GSA officials, the process to seek and assign space to its non-DOD federal agency clients does not include sharing this information with DOD or other federal landholding agencies, with the exception of the Postal Service. 
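The search order described above (GSA-owned space, then GSA-leased space, then Postal Service facilities, then a commercial lease) can be illustrated as a simple prioritized search. The data structures and matching criteria below are hypothetical simplifications for illustration only; GSA's actual requirements checks involve far more criteria than geographic area and square footage.

```python
# Hypothetical sketch of GSA's stated search priority; the inventories and
# matching rules are illustrative placeholders, not GSA's actual systems.
def find_space(requirement, gsa_owned, gsa_leased, postal):
    """Return (source, space) for the first suitable option, in priority order."""
    for source, inventory in [
        ("GSA-owned", gsa_owned),
        ("GSA-leased", gsa_leased),
        ("Postal Service", postal),
    ]:
        for space in inventory:
            if (space["area"] == requirement["area"]
                    and space["sq_ft"] >= requirement["sq_ft"]):
                return source, space
    # No suitable federally controlled space found: fall back to leasing.
    return "commercial lease", None

req = {"area": "Denver", "sq_ft": 5000}
owned = [{"area": "Denver", "sq_ft": 3000}]    # too small
leased = [{"area": "Denver", "sq_ft": 8000}]   # first suitable match
print(find_space(req, owned, leased, []))
```

Because the search stops at the first suitable match, owned space is always preferred over leased space, and a commercial lease is reached only when every federally controlled inventory fails.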
Specifically, the GSA officials with whom we spoke reported that regional GSA offices generally do not communicate with military installations to identify whether there may be suitable vacant space in the installation-level real property inventories, which is information maintained by the installations. The officials also stated that if a client were to express interest in space on a military installation, GSA would direct the client to contact the installation directly and would have little to no involvement with the installation concerning the details of any agreement between DOD and the non-DOD federal agency for the use of space on a military installation. For example, the officials identified one instance where GSA provided the Department of State with a point of contact in the Army so that the Department of State could inquire directly with the Army concerning the potential for using training space on a local installation. The GSA officials with whom we spoke said that a primary reason GSA does not routinely coordinate with DOD concerning the availability of unutilized and underutilized space is that they assume that space in DOD-owned facilities typically would not meet the needs of GSA’s non-DOD federal agency clients because installation security requirements and locations are not likely to be compatible with the non-DOD federal agency missions. However, DOD reports having non-DOD federal tenants on many of its installations, although such factors can prevent some non-DOD federal agencies from being located on a military installation in some circumstances. Therefore, there are instances when a non-DOD federal agency’s space needs can be met on military installations. 
Further, GSA’s assumption that agencies’ needs cannot be met on a military installation may preemptively limit the options available to the agencies for which GSA is working to find space; as a result, those agencies do not receive full information on potential facilities located on installations. DOD also does not routinely share information with GSA or other non-DOD federal agencies when space is available on military installations. In addition to the government-wide guidance to better utilize federal property, DOD Instruction 4165.70 directs the Secretaries of the military departments to maintain a program that monitors the use of real property to ensure that it is being used to the maximum extent possible consistent with both peacetime and mobilization requirements. We found that military installations do not routinely share information with GSA or other non-DOD federal agencies when space is available in part because, as stated before, military installations generally wait for non-DOD federal agencies to inquire about available space. DOD officials at the OSD, service, and installation levels said that they do not conduct outreach to communicate information regarding unutilized and underutilized space on military installations in part because the installations primarily focus on supporting missions within DOD rather than those of non-DOD federal agencies. However, when there is available space on military installations that is not currently used by other DOD entities, DOD’s practice of waiting for agencies to approach installations does not assist installations in utilizing their space to the maximum extent possible consistent with military requirements, as required by DOD policy. 
Further, department-level and installation-level officials said they had not interacted or shared information with GSA concerning the availability of space on installations that might be suitable for non-DOD federal agencies that are working with GSA, including providing details about installation-level real property inventories, because DOD’s real property management process does not require coordination with GSA until the property has been declared excess. Although coordination is not required, if space is available but not currently in use, it would likely benefit the installation to have a tenant use the space rather than allowing the space to remain unutilized or underutilized, for the following reasons. As discussed earlier, DOD guidance directs the military departments to utilize their space to the maximum extent possible consistent with military requirements. Also, because a tenant offsets some direct and indirect costs, such as utilities and maintenance, in a constrained budget environment installations can keep facilities in good condition that would otherwise be unutilized or underutilized. Officials at the OSD, service, and installation levels told us that actively pursuing potential tenants would be an administrative burden on the installations, especially if there is not a significant amount of available space on the installation. However, there are ways that DOD could accomplish this without significantly increasing the administrative burden on the installations. For example, DOD does not currently provide GSA or other non-DOD federal agencies with regional or local contacts for installations or with information on the process for requesting space. Each installation we visited already had an established process for evaluating requests for space from non-DOD entities. 
However, installation officials at some of the locations we visited said that non-DOD federal agencies are not always aware of the process or the proper organization at the installation to which requests should be submitted. For example, some agencies route their requests to the wrong organization at the installation, which can lead to delays in processing the request. Further, GSA officials told us that not knowing whom to contact locally or regionally for military installations is one factor that inhibits information sharing between GSA and DOD, including information about non-DOD federal agencies requesting space through GSA. Without actions to share information at the regional and local level, GSA offices working with non-DOD federal agencies may risk missing opportunities for clients to use available underutilized or unused federal space at lower cost than commercial leases. In addition, DOD may be missing opportunities to leverage resources with GSA to enhance utilization of its unutilized and underutilized facilities and reduce costs associated with maintaining these facilities. DOD and the federal government as a whole face challenges in continuing to operate and maintain unutilized and underutilized facilities that use valuable resources that could potentially be eliminated from the budget or allocated to other uses. Coordinated efforts among federal agencies, as called for in the 2015 National Strategy for Real Property, could enhance utilization of federal real property. At this time, DOD and GSA do not share information concerning unutilized and underutilized space at military installations or potential clients working with GSA that could facilitate the use of available space by non-DOD federal agencies. 
Without such information sharing, DOD may be missing opportunities for installations to maximize the use of space and reduce costs, and GSA risks missing opportunities for some of its clients to reduce or avoid rental costs altogether and to reduce their reliance on commercial leases. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Energy, Installations, and Environment, in collaboration with the Administrator of GSA, to identify and implement actions to enable and enhance routine information sharing between DOD and GSA about the utilization of facilities on military installations. Such actions should include establishing recurring processes to (1) share information about non-DOD federal agencies seeking workspace, and (2) ensure that GSA and DOD organizations are aware of the appropriate points of contact within their organizations at the regional and local levels. We provided a draft of this report to DOD and GSA for official review and comment. We received written comments from both agencies. In its comments, DOD concurred with our recommendation and stated that it would be supportive of GSA’s efforts to share information about non-DOD federal agencies seeking workspace and that it would work with GSA to ensure that GSA and DOD organizations are aware of the appropriate points of contact within their organizations at the regional and local levels. In its comments, GSA concurred with our recommendation and stated that it agreed with our findings and would take actions to implement our recommendation. It further stated that considering DOD military installations as potential housing solutions prior to going to the open market will help ensure that government-owned assets are used to capacity. 
GSA also outlined four specific actions to address our recommendation: (1) convene a working group with DOD real property officials to understand DOD’s national land holding portfolio and identify unutilized and underutilized space at military installations; (2) collaborate with DOD to establish a shared real property inventory database; (3) review GSA’s inventory of customer agencies’ current and future needs; and (4) revise the Federal Management Regulations to include DOD in GSA’s priorities for housing federal agencies. We agree that the actions outlined by DOD and GSA represent a positive step toward ensuring that government-owned assets are used to capacity. DOD’s and GSA’s official comments are reprinted in appendix II and appendix III, respectively. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Administrator, General Services Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
To evaluate the potential for and obstacles to federal agencies other than Department of Defense (DOD) organizations relocating onto military installations to save costs and enhance security, this report identifies (1) what options, if any, are available for DOD to allow non-DOD entities, including federal government agencies, to use unutilized (vacant) and underutilized (partially vacant) space on military installations, and what factors DOD considers when deciding whether to exercise each option; (2) any limitations and benefits of bringing non-DOD federal agencies onto installations; and (3) the extent to which DOD and other federal agencies coordinate to do so. To determine what options are available and what factors DOD considers in allowing non-DOD entities, including federal government agencies, to use unutilized and underutilized space on military installations, we reviewed applicable DOD and military department guidance to identify the circumstances under which non-DOD tenants are allowed to utilize space on military installations, the order of priority for considering non-DOD tenants for use of space, the types of real estate instruments used to grant non-DOD entities use of space on military installations, and the documents used to record the terms and conditions associated with the use of space on military installations. In addition, we interviewed responsible officials within the Office of the Secretary of Defense (OSD) and the military department headquarters to determine their roles in bringing non-DOD tenants onto military installations and identify the factors that are considered when determining whether to grant a non-DOD entity use of space on a military installation. 
Finally, we selected seven installations to visit to identify what non-DOD entities are present on installations, the process the installations used to determine whether to grant non-DOD entities access to space on the installations, and the factors that installations considered when determining whether to grant non-DOD entities access to space. While our observations from these installations are not generalizable, they do provide context concerning non-DOD entities using space on military installations. The primary factor we considered in selecting the installations we visited was the number of real property assets that were identified as being used by non-DOD federal agencies in DOD’s Real Property Assets Database (RPAD) at the end of fiscal year 2013. While we have previously reported on inaccurate and incomplete utilization data in the database, we determined that the RPAD data were sufficiently reliable for the purposes of selecting installations to visit. The secondary factor that we considered, in order to respond to a consideration in the mandate, was whether the installation supported DOD’s Arctic mission. Specifically, the National Defense Authorization Act for Fiscal Year 2014 included a provision for GAO to consider the potential for and obstacles to consolidation of federal tenants on installations that support Arctic missions, focusing on federal entities with homeland security, defense, international trade, commerce, and other national security functions that are compatible with the missions of military installations, or can be used to protect national interests in the Arctic region. 
Using these factors, we selected the installation from each military service that had the greatest number of real property assets identified as being used by non-DOD federal agencies, two installations that supported DOD’s Arctic mission, and two installations that had a relatively small number of real property assets identified as being used by non-DOD federal agencies. Our selected installations included Kirtland Air Force Base, New Mexico; Fort Bliss, Texas; Naval Base Coronado, California; Marine Corps Base Quantico, Virginia; Joint Base Elmendorf-Richardson, Alaska (Arctic mission); Eielson Air Force Base, Alaska (Arctic mission and few non-DOD federal agencies); and Marine Corps Base Camp Pendleton, California (few non-DOD federal agencies). To identify the limitations and benefits of bringing non-DOD federal agencies onto installations, we reviewed applicable DOD and military department guidance, including regulations and instructions, to determine whether any procedures are identified for promoting the use of unutilized or underutilized space by non-DOD federal agencies and whether any limitations and benefits are identified. In addition, we interviewed responsible OSD, military department headquarters, and installation officials to obtain their perspectives concerning the process by which non-DOD entities are provided space on DOD installations as well as the limitations and benefits that exist to allowing non-DOD federal agencies to use space on military installations. To determine the extent to which DOD and other federal agencies coordinate to better use unutilized and underutilized space on military installations, we reviewed General Services Administration (GSA) guidance on its process to seek and assign space to its clients and interviewed cognizant GSA officials concerning that process, to determine whether it includes coordination with landholding agencies such as DOD. 
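The installation-selection approach described above (the top installation per service by count of real property assets used by non-DOD federal agencies, plus installations supporting the Arctic mission, plus installations with relatively few such assets) can be sketched as follows. All installation names, services, and asset counts below are hypothetical placeholders, not actual RPAD data.

```python
# Hypothetical sketch of the selection criteria; names and counts are made up.
installations = [
    {"name": "Base A", "service": "Air Force", "non_dod_assets": 40, "arctic": False},
    {"name": "Base B", "service": "Air Force", "non_dod_assets": 2,  "arctic": True},
    {"name": "Base C", "service": "Army",      "non_dod_assets": 25, "arctic": False},
    {"name": "Base D", "service": "Navy",      "non_dod_assets": 30, "arctic": False},
    {"name": "Base E", "service": "Marines",   "non_dod_assets": 15, "arctic": False},
    {"name": "Base F", "service": "Army",      "non_dod_assets": 1,  "arctic": True},
    {"name": "Base G", "service": "Marines",   "non_dod_assets": 3,  "arctic": False},
]

selected = set()
# One installation per service with the most assets used by non-DOD agencies.
for svc in {i["service"] for i in installations}:
    top = max((i for i in installations if i["service"] == svc),
              key=lambda i: i["non_dod_assets"])
    selected.add(top["name"])
# Plus installations supporting the Arctic mission.
selected.update(i["name"] for i in installations if i["arctic"])
# Plus installations with relatively few non-DOD federal assets.
selected.update(i["name"] for i in installations if i["non_dod_assets"] <= 3)
print(sorted(selected))
```

The criteria can overlap (an Arctic-mission installation may also be one with few non-DOD assets), which is why a set is used to avoid double-counting.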
We also interviewed responsible OSD, military department headquarters, and installation officials to obtain their perspectives on coordination between DOD and GSA. We compared that information with criteria on practices to enhance collaboration among federal agencies that we identified previously. We conducted this performance audit from March 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, Gina Hoffman, Assistant Director; Alberto Leff; Kelly Liptan; Tamiya Lunsford; Michael Silver; Erik Wilkins-McKee; Michael Willems; and John Wren made key contributions to this report.
High Risk: 2015 Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Defense Infrastructure: DOD Needs to Improve Its Efforts to Identify Unutilized and Underutilized Facilities. GAO-14-538. Washington, D.C.: September 8, 2014.
Federal Real Property: Better Guidance and More Reliable Data Needed to Improve Management. GAO-14-757T. Washington, D.C.: July 29, 2014.
Defense Infrastructure: DOD’s Excess Capacity Estimating Methods Have Limitations. GAO-13-535. Washington, D.C.: June 20, 2013.
Federal Real Property: Excess and Underutilized Property Is an Ongoing Challenge. GAO-13-573T. Washington, D.C.: April 25, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Federal Real Property: Improved Data and a National Strategy Needed to Better Manage Excess and Underutilized Property. GAO-12-958T. Washington, D.C.: August 6, 2012.
Federal Real Property: Strategic Partnerships and Local Coordination Could Help Agencies Better Utilize Space. GAO-12-779. Washington, D.C.: July 25, 2012.
Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. Washington, D.C.: June 20, 2012.
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Defense Infrastructure: The Enhanced Use Lease Program Requires Management Attention. GAO-11-574. Washington, D.C.: June 30, 2011.
GAO has designated DOD's Support Infrastructure Management as a high-risk area in part due to challenges in reducing excess infrastructure. DOD installations can establish agreements to allow entities such as non-DOD federal agencies and private entities to use unutilized or underutilized property on DOD installations. DOD reports that, as of the end of fiscal year 2013, its real property portfolio consisted of more than 562,000 facilities with an estimated value of $850 billion. The National Defense Authorization Act for Fiscal Year 2014 included a provision that GAO review the potential for relocating federal government tenants onto military installations. This report identifies (1) available options for DOD to allow non-DOD entities to use unutilized and underutilized space on military installations, and what factors DOD considers for each option; (2) any limitations and benefits of bringing non-DOD federal tenants onto military installations; and (3) the extent to which DOD and other federal agencies coordinate to do so. GAO evaluated DOD and military service guidance; visited selected installations having non-DOD tenants, including two that support the Arctic mission; and interviewed cognizant officials. Department of Defense (DOD) guidance outlines options for granting the use of unutilized (vacant) and underutilized (partially vacant) space on military installations to non-DOD entities, such as other federal agencies, and installations consider several factors when contemplating such grants. For example, DOD and military department guidance identifies the real estate instruments, such as leases and licenses, that are to be used to issue grants to non-DOD entities. All seven of the installations that GAO visited reported using this guidance to select the appropriate instrument based on the type of non-DOD entity, type of facility, and proposed use of the asset. 
For example, installations selected permits as the appropriate real estate instrument when issuing grants to a non-DOD federal agency as outlined in DOD and military department guidance. Prior to granting the use of space to a non-DOD entity, officials at installations reported considering several factors, including the availability of space, effect on the mission, and factors unique to the installation. In instances where there are competing interests for space, officials reported considering priorities set forth in DOD guidance for assigning available space on the installation. Officials also reported considering whether the tenant could potentially have a negative effect on the installation's ability to comply with any regulations, such as preserving protected habitats. DOD faces both limitations and benefits from moving non-DOD agencies onto installations. Limitations such as the availability of suitable space affect DOD's ability to bring non-DOD federal agencies onto military installations. For example, officials at all seven of the installations GAO visited reported a lack of vacant space or vacant space that is usable, which limited their ability to accommodate space requests. However, when a match can be made between an installation's available space and a potential tenant agency's needs, both parties can benefit. For example, installations can potentially benefit through the avoidance of direct and indirect costs, such as the cost for utilities and maintenance incurred for unused or underutilized space. Non-DOD federal agencies can save costs on commercial leases because DOD charges for use of space by other federal entities on a cost-recovery basis. Despite the potential benefits, routine information sharing does not occur between DOD and the General Services Administration (GSA) concerning opportunities to move non-DOD federal agencies onto military installations. 
Specifically, when GSA is working to satisfy the space needs of its clients, it does not routinely contact DOD installations to inquire whether space might be available. DOD, on the other hand, waits for non-DOD federal agencies to inquire whether space is available and does not generally reach out to GSA or agencies that may be interested in space. Without taking actions to share information, GSA offices working with non-DOD federal agencies to find them space may risk missing opportunities for their clients to reduce or avoid costs. In addition, both GSA and DOD may miss opportunities to leverage resources and enhance utilization of federal real property. GAO recommends that DOD and GSA collaborate to enhance routine information sharing concerning non-DOD federal agencies seeking workspace at military installations. DOD and GSA concurred and agreed to take action to help ensure that government-owned assets are used to capacity.
IRS has a multistage process that governs audits and settlements of disputes over the additional taxes recommended. In an audit, an auditor, usually from IRS’ Examination Division, is to review a taxpayer’s books and records to determine compliance with tax laws in reporting the proper amount of tax. Auditors usually recommend additional tax assessments but may recommend a decrease or no change in the tax reported on the return, depending on the documentation provided by the taxpayer. If the taxpayer agrees to pay or does not respond to IRS’ notices on recommended additional taxes, IRS assesses the tax—that is, formally notifies the taxpayer that the specified amount of tax is owed and that interest and penalties may accrue if the tax is not paid by a certain date. Taxpayers who do not agree with the recommended additional taxes can (1) file a protest with the IRS Office of Appeals, (2) take the dispute to tax court without paying the recommended tax, or (3) pay the tax and claim a refund in the U.S. Court of Federal Claims or a federal district court. Of these options, taxpayers usually protest to IRS Appeals. Appeals settles most of these disputes, and the remainder are docketed for trial. If Appeals is unsuccessful in settling the dispute, the Office of Chief Counsel gets involved in settlement as well as in any trial. The agreements made in settlements and the rulings made in trials dictate how much of the disputed amount gets assessed. The assessed amount—not the recommended amount—establishes the taxpayer’s liability. If taxpayers do not pay the taxes that are assessed, IRS can take action to collect the taxes. IRS tracks the additional taxes recommended by audit classes, which are based on the amount of reported income or assets and type of return. Across the audit classes, tax returns vary in complexity, ranging from simple individual returns to complex corporation returns. 
The classes include specialized audit programs, such as the Coordinated Examination Program (CEP) for the nation’s largest corporations. The range of size and complexity across tax returns affects the amount of time and resources IRS uses to audit a return and resolve disputes over the assessment of recommended taxes. Audits of large corporations usually take 2 to 3 years. If the large corporation disputes the recommended additional taxes, another 2 to 3 years can elapse in trying to settle the dispute through Appeals; several additional years may be needed if the dispute goes to trial. For smaller, less complex returns, the time IRS uses to audit and settle any disputes over recommended tax assessments is shorter, but these processes usually take at least 1 year. To deal with this varying complexity, IRS has three types of auditors. First, lower-graded tax examiners at IRS service centers audit simpler individual returns through correspondence. Second, higher-graded revenue agents from IRS district offices visit individuals, corporations, and other types of taxpayers to address the most complex tax returns; they also work in teams to audit CEP and some other large corporations. Third, tax auditors, who usually do audits by meeting with taxpayers at a district office, fall in the middle in terms of their grades and the complexity of their audits. IRS’ appeals officers also differ by their grades and by the complexity and size of their workloads. To address our objectives on the assessments, collections, and costs related to taxes recommended in audits, we used IRS’ Enforcement Revenue Information System (ERIS) data as of September 27, 1997. Our analyses started with audits closed in fiscal year 1992. According to IRS officials, ERIS data were unreliable prior to 1992. We focused most of our analyses on 1992, instead of later years, to allow the most time for assessment and collection actions to have been taken on additional amounts recommended in audits. 
Because ERIS attempts to continually update the status of these recommended amounts, ERIS’ data could differ from data in other IRS systems that do not perform such updating. We did not test the reliability of the data provided to ERIS by other IRS systems or data processed by ERIS. However, we followed up with IRS on any anomalies that we found in the data and adjusted the data or our methodology as needed. For example, as a result of questioning by us and other IRS officials, ERIS officials discovered in April 1998 a mistake in the documentation provided to us for interpreting ERIS data on the settlement status of recommended taxes. Subsequent discussions with ERIS officials clarified the approach to use for interpreting the ERIS data. IRS officials concurred with the analytical results generated from using this approach. To determine how much of the recommended additional tax amounts had been settled, assessed, and collected as of September 27, 1997, for audits closed in fiscal years 1992 through 1997, we matched the recommended amounts to actions taken in Appeals or the Office of Chief Counsel. In calculating the final assessments, we subtracted amounts that IRS initially assessed but later abated. Then, for each fiscal year, we added the assessments and collections that took place in each IRS function—Examination, Appeals, and Chief Counsel—to obtain the overall results for the audits. We arrayed these results by the fiscal year of the audit closure to develop a broad picture of what happened to recommended tax amounts. (App. II shows the results for each fiscal year for seven types of audits.) To determine how much of the recommended additional tax amounts had been assessed and collected as of September 27, 1997, by type of audit closed in fiscal year 1992, we did analyses similar to those done for our first objective. We expanded the analyses to array the results by seven types of audits. 
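The assessment computation just described (final assessment equals the initial assessment less any later abatements, summed across the Examination, Appeals, and Chief Counsel functions) can be sketched as follows. The record layout and dollar figures are illustrative assumptions, not actual ERIS data.

```python
# Illustrative sketch of the assessment computation described above.
# Each record represents the disposition of recommended tax from one
# audit; the amounts are hypothetical, not ERIS data.
audit_records = [
    {"function": "Examination",   "initial_assessed": 1000.0, "abated": 200.0, "collected": 600.0},
    {"function": "Appeals",       "initial_assessed": 500.0,  "abated": 150.0, "collected": 300.0},
    {"function": "Chief Counsel", "initial_assessed": 250.0,  "abated": 50.0,  "collected": 180.0},
]

def summarize(records):
    """Final assessment = initial assessment less later abatements;
    results are summed across the three IRS functions."""
    totals = {"assessed": 0.0, "collected": 0.0}
    for rec in records:
        totals["assessed"] += rec["initial_assessed"] - rec["abated"]
        totals["collected"] += rec["collected"]
    return totals

print(summarize(audit_records))  # assessed 1350.0, collected 1080.0
```

In practice the same totals would then be arrayed by fiscal year of audit closure, as the report describes.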
We developed these 7 types from 30 subcategories that IRS used to classify audits by type of tax return, tax, or taxpayer (app. III shows the subcategories). The seven types include income tax audits at IRS service centers or at IRS district offices of nonbusiness individuals, business individuals, small corporations, non-CEP large corporations, and CEP corporations as well as those audited for other types of tax liabilities.At the request of IRS officials, we separately analyzed service center and CEP audits because of their specialized natures. To determine whether IRS’ broad measures of audit results fully represented audit revenues and costs, we reviewed IRS’ budget submissions for fiscal years 1998 and 1999 to see the data reported as audit results and performance measures. We analyzed available ERIS data on the revenues and costs associated with audits to identify audit measures that could be developed. We then created three ratios of tax revenues to costs for audits closed in fiscal year 1992. These ratios compared additional taxes recommended, assessed, and collected with the related direct costs through each of these stages. We arrayed these three ratios across the seven types of audits. For the revenues, we used available IRS data on the additional taxes recommended, assessed, and collected. For the costs, we used available data on direct time charged by staff who do the audits and settle disputes over the additional taxes recommended; data on the direct staff time to collect the additional taxes assessed were not available. We identified the direct hours charged by staff grade level in Examination, Appeals, and Chief Counsel. We then applied an hourly rate to the grade levels by using the General Schedule pay tables for 1992 through 1997. For each grade, we adjusted the midpoint of the pay scale to account for locality pay. 
We accounted for work hours available and costs of staff benefits, such as paid vacation and sick leave, by using Office of Management and Budget Circular No. A-76. Using these factors, we computed the direct staff costs. We discussed our methodology and our results with IRS officials who manage ERIS or who manage the audit, settlement, and collection activities. We incorporated their suggestions into our work as appropriate. We requested comments on a draft of this report from the Commissioner of Internal Revenue. His comments are discussed near the end of this letter and reprinted in appendix V. We performed our audit work at IRS’ National Office in Washington, D.C., between July 1997 and May 1998 in accordance with generally accepted government auditing standards. IRS annually reports to Congress on the amount of additional tax and penalties recommended in audits closed in each fiscal year. For the years we reviewed, the recommended amounts ranged up to about $32 billion. However, the recommended amount does not represent the actual revenue resulting from IRS’ audits. As of September 27, 1997, less than half of the recommended amount had been assessed as additional taxes, and not all of the assessed amount had subsequently been collected. For fiscal year 1992 audits, as of September 27, 1997, IRS had assessed $8.5 billion (34 percent) of the $24.8 billion in recommended additional taxes. IRS settled another 40 percent of the recommended additional taxes without ultimately assessing them, for various reasons. These reasons included taxpayer claims; decisions in Appeals or the courts; and reductions (i.e., abatements) of amounts that had initially been assessed. Disputes on the final 26 percent of the additional recommended taxes had yet to be settled. As of September 27, 1997, IRS had collected $6.1 billion—or 72 percent of the additional taxes assessed and 25 percent of the additional taxes recommended. 
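The direct staff-cost methodology described earlier in this section can be approximated in a short sketch. The grade midpoints, locality factor, benefits multiplier, and available work hours below are hypothetical placeholders, not the actual 1992-1997 General Schedule pay tables or Office of Management and Budget Circular No. A-76 factors used in this report.

```python
# Sketch of the direct staff-cost computation described in the methodology.
# All constants are illustrative assumptions, not the actual figures used.
AVAILABLE_WORK_HOURS = 1776   # assumed productive hours per staff year
BENEFITS_MULTIPLIER = 1.32    # assumed A-76-style benefits load
LOCALITY_FACTOR = 1.05        # assumed locality-pay adjustment

# Hypothetical annual pay-scale midpoints by GS grade.
GRADE_MIDPOINT = {9: 34000.0, 11: 41000.0, 13: 58000.0}

def hourly_rate(grade):
    """Locality-adjusted midpoint salary converted to a loaded hourly rate."""
    annual = GRADE_MIDPOINT[grade] * LOCALITY_FACTOR * BENEFITS_MULTIPLIER
    return annual / AVAILABLE_WORK_HOURS

def direct_staff_cost(hours_by_grade):
    """Total direct cost for the hours charged at each grade level."""
    return sum(hours * hourly_rate(grade) for grade, hours in hours_by_grade.items())

# Example: 40 hours of GS-13 time plus 10 hours of GS-11 time on one audit.
cost = direct_staff_cost({13: 40, 11: 10})
print(round(cost, 2))
```

The same per-grade rate would be applied to the direct hours charged in Examination, Appeals, and Chief Counsel to obtain the costs used in the ratios later in this report.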
Table 1 shows these results for audits closed in fiscal year 1992 as well as for fiscal years 1993 through 1997. For audits closed after 1992, the results are less complete because a larger percentage remains unsettled for each succeeding year, reaching 55 percent in 1997. As more settlements occur over time, the rates at which the recommended taxes are assessed and collected should increase. IRS officials said they believe that this assessment rate is higher in more recent years than in 1992 because IRS has been trying to obtain taxpayer agreement with any taxes recommended before the audit is closed. However, it is not yet clear whether the rate at which assessed amounts are collected for 1993 through 1997 audits will exceed the rate for 1992. Because of the incomplete results for audits closed in these more recent years, our analyses focused on 1992 audits. Table 2 shows how much of the recommended additional taxes had been settled, assessed, and collected as of September 27, 1997, for seven types of audits closed in fiscal year 1992. In general, the assessment and collection rates varied by the complexity of the audit. With more complex audits, such as corporation audits, taxpayers were more likely to reduce additional assessments by appealing amounts recommended but also more likely to pay almost all of the amounts assessed. For example: Assessment rates for the simpler audits done at service centers and for the audits of individuals were higher than for corporate audit categories. The rates were 76 percent for service center audits and about 59 percent for audits of individuals, but the rates were 20 percent for CEP audits and 33 percent for other large corporation audits. The rates for collection of assessed amounts were higher for corporation audits. IRS collected about half of the assessed amounts for service center and individual audits compared with 97 percent for CEP and 73 percent for other large corporation audits. 
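The rates in tables 1 and 2 are simple ratios of the three dollar amounts. A minimal check, using the fiscal year 1992 totals cited above, reproduces the reported percentages:

```python
# Reproducing the fiscal year 1992 rate arithmetic from the report
# (dollar amounts in billions, as stated in the text).
recommended = 24.8
assessed = 8.5
collected = 6.1

assessment_rate = assessed / recommended       # share of recommended taxes assessed
collection_rate = collected / assessed         # share of assessed taxes collected
collected_of_recommended = collected / recommended

print(f"{assessment_rate:.0%}, {collection_rate:.0%}, {collected_of_recommended:.0%}")
# prints "34%, 72%, 25%"
```

The same three ratios, computed per audit type, drive the variation discussed in table 2.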
The percentage of recommended amounts that IRS collects is also affected by these assessment and collection rates. For fiscal year 1992 audits, the percentage of recommended taxes that IRS had collected as of September 27, 1997, ranged from 20 percent for CEP audits to 43 percent for service center audits. We asked IRS officials to explain differences in the assessment and collection rates across the types of audits. IRS officials said several factors contributed to these variances. For example, comparing the service center and corporation audits, they said: The rate for assessing the recommended amount is higher for service center audits, because the taxpayers are (1) more likely not to respond to IRS correspondence and notices, which leads IRS to assess the recommended amounts, and (2) less likely to dispute the recommended amounts, which are usually much smaller and involve simpler tax issues than those in corporation audits, for which disputes are much more likely. The rate for collecting the assessed amount is lower for service center audits because the taxpayers tend to have fewer assets to pay the tax assessment. Further, CEP and large corporations are more likely to pay the assessments to avoid large interest charges. Small corporations and individuals with businesses also may have difficulties finding the funds to pay the additional tax assessments. IRS’ performance measures do not fully reflect all audit-based revenues and costs. For example, IRS’ existing measures on taxes recommended do not include indirect revenues resulting from the effect of audits on voluntary compliance. Although IRS measures the staff time directly spent on audits, it does not measure the dollar costs of this direct time or the indirect costs incurred by the Examination Division for such things as training time or office space. IRS also does not measure direct and indirect costs that audits create outside of the Examination Division for IRS as well as taxpayers. 
Compiling complete and reliable data on the indirect revenues and costs from outside IRS can be very difficult because of limitations with the data sources and research. Further, IRS does not use its available data to develop and report measures that would provide a fuller, more balanced picture of audit results. For example, data on taxes recommended could be balanced with data on taxes assessed and collected in reporting audit results and, as we previously recommended for audits of large corporations, in developing additional performance measures. In developing these measures, such revenue data could be related to information on the costs of audits. In addition, IRS has the capacity to track more data beyond the direct staff costs. IRS data on taxes recommended, assessed, and collected do not represent all revenues from audits. Data are not available on the indirect revenue effects from audits. For example, when audits induce both audited and unaudited taxpayers to be more voluntarily compliant, tax collections increase indirectly. Other indirect revenue effects occur if audits adjust how much of a tax deduction can be claimed in one tax year versus other years or reduce that claim for future years. IRS has difficulty measuring the indirect revenue effects because of limitations with the data and research methods. Such measurements require (1) data that reflect the impact of audits and other IRS activities on the compliance of individual taxpayers and (2) a research methodology that reliably distinguishes the effect of audits on voluntary compliance from other influencing factors. IRS has researched the indirect effects of audits but has yet to develop reliable estimates because of these limitations. IRS data also do not include the marginal revenue effects from doing audits. Such marginal effects are the changes in total revenues that result when IRS incrementally changes the number of audits in an audit class. 
As with indirect effects, developing data on marginal effects can be challenging. Without data on the indirect and marginal effects, IRS and Congress cannot know the full impacts of audits. IRS data on both its direct and indirect costs from doing audits are incomplete. IRS associates its direct costs with the time charged by the staff who do the audits, settle the audit disputes, and collect the audit assessments for specific types of audits. IRS has data available for computing the direct staff costs for the audit and settlement activities. However, IRS did not have such data for the collection activity because IRS’ Collection Division did not track the time that staff spent trying to collect the additional assessments arising from specific types of audits. Our analyses of the significance of the direct costs of collection were inconclusive because of missing data. Also, IRS did not have data on its indirect costs for an audit. IRS considers its indirect costs to include management time, training time, space, and other support given to those who do audits, settle audit disputes, and collect audit assessments. Likewise, IRS does not account for indirect costs outside IRS, such as those imposed on the audited taxpayer or on society when a taxpayer evades tax liabilities. However, as with indirect revenue, collecting the data for quantifying these external indirect costs is a difficult task. IRS has been working on ways to measure taxpayers’ costs and is starting to survey taxpayers on their satisfaction with the audit process. This survey does not gather data on taxpayers’ costs but may prove useful to IRS in deciding how to quantify these costs. Although IRS lacks data on all of the revenues and costs associated with audits, it does have data that could be used to measure selected revenues and costs. 
However, IRS does not measure and report all its existing data on audit revenues, such as additional taxes assessed and collected on the basis of taxes recommended in audits. Further, IRS has not developed data and measures on the costs related to each type of revenue. This report discusses broad, IRS-wide dollar measures of audit results to track actions on any additional taxes recommended in audits. Such measures do not account for all aspects of audit performance, such as the proper treatment of taxpayers and the decision to recommend no change or a reduction to the tax liability reported on a return. These measures are not intended to be used to evaluate the performance of individual IRS employees. Over the years, IRS has measured the overall results of audits done in the Examination Division by the amount of additional taxes recommended and time charged directly to an audit by Examination staff. These measures do not employ existing data that could more fully represent the revenues and costs associated with audits. First, the measures do not report how much of the recommended taxes are actually assessed and collected. Second, the measures do not report the dollar costs of the direct time charged to an audit or the indirect costs for the audit, settlement, and collection activities. Although useful in some ways, measuring audit performance by just taxes recommended and direct audit staff time presents an unbalanced picture of the audit results. Tables 1 and 2 show large differences between the amounts recommended, assessed, and collected. Taxpayers dispute most of the additional recommended amounts, and settlement of these disputes results in smaller amounts being assessed and collected. Relying too heavily on additional taxes recommended as a measure of audit results may create undesirable incentives. 
Our previous work on audits of large corporations has raised concerns that relying on recommended taxes as a performance indicator may encourage auditors to recommend taxes that would be unlikely to withstand taxpayer challenge and thus not be assessed and collected. To the extent that this happens, audited taxpayers could be unnecessarily burdened. We recommended that IRS balance its measures of audit performance by adding such measures as taxes ultimately collected. Further, the direct time charged to audits does not measure the dollar costs. In computing the costs of the direct time charges, one must recognize that the pay grade levels of staff assigned to audits vary by type of audit. Further, the direct time charged in the Examination Division excludes direct staff time charged in Appeals, Chief Counsel, and Collection as well as indirect costs for the audit, settlement, and collection activities. To illustrate the importance of developing a more complete set of measures, we compared three ratios, each measuring a type of audit revenue against the related direct cost, for seven types of audits. We calculated direct costs using hours charged by staff in Examination, Appeals, and Chief Counsel. The audit revenues included the amounts recommended, assessed, and collected. Table 3 shows that these ratios differ widely. Many factors affect audit-based revenues and costs. For example, recommended amounts are affected by the number of audits and amount recommended per audit. The number of auditors as well as their time charges and pay grades affect direct audit costs. Service center auditors have the lowest grades and charge the least time per audit, while CEP auditors have the highest grades and charge the most time. Also, the costs of settlement in corporate audits are likely to be higher than in other audits because corporations are more likely to dispute recommended assessments. 
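The three revenue-to-cost ratios compared in table 3 reduce to dividing each revenue amount by the related direct cost. The sketch below uses hypothetical dollar figures, not the report's data, to show the computation for two audit types:

```python
# Sketch of the three revenue-to-direct-cost ratios compared in table 3.
# Figures are hypothetical placeholders; direct cost covers Examination,
# Appeals, and Chief Counsel staff time only, as in the report.
audit_types = {
    # type: (recommended, assessed, collected, direct_cost), $ millions
    "service center":  (900.0, 684.0, 390.0, 60.0),
    "CEP corporation": (9000.0, 1800.0, 1746.0, 700.0),
}

def revenue_cost_ratios(recommended, assessed, collected, cost):
    """Dollars of recommended, assessed, and collected tax per direct-cost dollar."""
    return (recommended / cost, assessed / cost, collected / cost)

for name, (rec, ass, col, cost) in audit_types.items():
    ratios = revenue_cost_ratios(rec, ass, col, cost)
    print(name, [round(x, 1) for x in ratios])
```

As the report notes, such ratios understate total costs because they omit Collection's direct staff costs and all indirect costs.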
As a result, reliance on a single measure gives a less complete picture of audit results than relying on all three measures. The ratios in table 3 should not be used as official measures of results because they do not account for all costs. If costs such as Collection’s direct staff costs and IRS’ indirect costs could be included, the ratios would be smaller, and the differences by type of audit could change significantly. Direct staff time accounts for about half of all time charged by auditors; much of the remaining time produces indirect costs. The allocation of time could vary by type of audit. For example, service center audits may have a higher proportion of indirect costs given IRS’ reliance on automation and nonaudit staff to help the auditors. Further, because individuals tend to pay their additional tax assessments more slowly than corporations (see app. IV), IRS would be likely to incur more costs to collect additional assessments from individuals. IRS has plans to develop a measure that approximates the least complete of the three ratios. In its 1999 budget submission, IRS reported plans to develop a ratio of the taxes recommended to the audit costs. IRS officials said the audit cost side of the ratio will come from an activity-based costing model that IRS is developing. We talked to IRS officials about improving these ratios by capturing more data on the related costs. They said that tracking the direct staff time to collect audit assessments cannot now be done but would be possible if Collection Division staff began to report the time spent on audit-based assessments. These officials said IRS plans to use formulas to allocate indirect costs, such as for administration and rent, of the audit, settlement, and collection activities to the types of audits. They said that the model should, at a minimum, give them a better basis for knowing more about the nature and magnitude of the indirect costs. 
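One simple allocation formula of the kind the officials described would spread a pool of indirect costs across audit types in proportion to direct hours charged. The figures and the allocation basis below are assumptions for illustration, not IRS’ actual costing model:

```python
# Hypothetical sketch of allocating a pool of indirect costs (management,
# training, rent) to audit types in proportion to direct hours charged.
# Both the allocation basis and the figures are assumptions.
direct_hours = {"service center": 120000, "district individual": 300000, "CEP": 180000}
indirect_cost_pool = 30_000_000.0  # assumed total indirect cost, dollars

total_hours = sum(direct_hours.values())
allocated = {t: indirect_cost_pool * h / total_hours for t, h in direct_hours.items()}

for audit_type, cost in allocated.items():
    print(audit_type, round(cost))
```

An activity-based costing model would refine this by using cost drivers other than hours where they better explain how the indirect costs are incurred.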
Further, we asked IRS about opportunities to more fully report existing data on the revenues generated by audits. Our interviews with IRS officials indicated that measuring and reporting information on taxes assessed and collected from audits, in addition to taxes recommended, would not be costly. Rather, the challenge would be to report these impacts in the most understandable and meaningful way. For example, the officials said amounts collected could be reported in a variety of ways, including by fiscal year of the audit closure, fiscal year of the collection, type of audit, and type of auditor. In considering the IRS-wide results of the audit function, analyzing and reporting the assessment and collection of amounts recommended provides a more complete picture of revenue impacts than that offered by looking just at recommended amounts. For example, our analyses of audits closed in fiscal year 1992 showed that IRS assessed and collected a fraction of the recommended additional taxes. By disputing recommended taxes, taxpayers substantially reduced the additional taxes that were ultimately assessed. IRS, however, did collect most of the assessed taxes. A closer analysis showed differences by the types of audit. IRS was more likely to assess taxes recommended in simpler audits of individuals than in complex audits of corporations. However, IRS was more likely to collect assessments from the corporations. We believe that analyses of existing IRS data can offer a fuller, more balanced picture of what happens to additional taxes recommended in audits. If IRS and Congress had access to data and analyses on the assessment and collection of recommended amounts by type of audit, they would have a more informed foundation for discussions and decisions on the audit function. 
For example, if certain types of audits recommend taxes that tend not to be assessed or collected, IRS may decide to analyze the reasons why and then make improvements to those audits or shift audit resources elsewhere. Another broad view of audit impacts would involve ratios that compare the direct tax revenues generated with IRS’ related costs. Such ratios can be developed with existing data on the direct costs incurred to recommend, assess, and collect additional taxes as a result of the audits. However, such ratios are not yet complete measures of audit results. For example, IRS’ data did not include its direct staff costs to collect the additional taxes and its indirect costs for the audit, settlement, and collection activities. Although incomplete as measures, the ratios provide more information on audit impacts compared with solely using data on additional taxes recommended in audits. These analyses could be enriched if IRS had data on its direct collection costs and its indirect costs for the audit, settlement, and collection activities. IRS is developing an activity-based costing model that could help IRS to account for these costs. The analyses could also benefit from IRS having data on the other indirect and marginal effects of audits on tax collections and costs, but compiling such data is difficult. We are making no recommendations on these indirect and marginal effects because we did not attempt to collect and analyze data on these effects. We recommend that the Commissioner of Internal Revenue develop meaningful ways to report the results to Congress from tracking, over a reasonable number of future years, existing IRS data on the assessment and collection of additional amounts recommended in specific types of audits closed for each fiscal year. One option for developing reporting formats could be the tables used in this report. The reports would provide fuller measures of the impacts of audits across IRS than those just on taxes recommended. 
We also recommend that the Commissioner develop a way to track the direct staff costs of collecting tax assessments associated with specific types of audits. Similarly, the Commissioner should determine how to account for IRS’ indirect costs in auditing returns, settling audit disputes, and collecting audit assessments by type of audit. In analyzing how to account for these indirect costs, IRS may find that the activity-based costing model being developed can serve as a helpful tool. On April 27, 1998, we obtained comments on a draft of this report during a meeting with officials representing IRS. They included the Assistant Commissioner for Examination and his staff, the National Director for Financial Analysis and his staff, the National Director for Compliance Specialization, and representatives for the National Director of Appeals, the Assistant Commissioner for Collection, and the National Director, Legislative Affairs Division. The IRS Commissioner also documented the comments in a letter dated May 27, 1998 (see app. V). Both at the meeting and in its letter, IRS agreed to implement our recommendations. For our recommendation on reporting the amounts of recommended taxes that are assessed and collected, IRS said it will annually report to Congress, by fiscal year, the amounts of recommended taxes that are collected. For our recommendation on tracking direct and indirect IRS costs associated with audits, IRS said it will continue to develop the activity-based costing model to track these costs by type of audit. IRS’ letter included an enclosure that provided various technical comments on issues discussed in our report as well as other issues. 
These comments dealt with issues such as (1) the need to carefully analyze and interpret ERIS data; (2) the challenges of allocating IRS’ costs; (3) the value of the activity-based costing model in allocating costs; (4) the ongoing use of ERIS data; (5) IRS’ efforts since 1992 to improve the audit and dispute resolution processes; (6) concerns about misinterpretations of the analyses on how much of the recommended tax amounts were settled, assessed, and collected; and (7) the need to analyze new measures. We have made changes and incorporated those comments that had a direct bearing on the information provided in this report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House Committee on Ways and Means, the Senate Committee on Finance, and other congressional committees with responsibility for IRS oversight; the IRS Commissioner; the Director of the Office of Management and Budget; the Secretary of the Treasury; and other interested parties. We will also make the report available to others upon request. Major contributors to this report are listed in appendix VI. Please contact me on (202) 512-9110 if you or your staff have any questions about this report. The Enforcement Revenue Information System (ERIS) is an automated data repository, outside the Internal Revenue Service (IRS) enforcement process, that contains IRS data on its enforcement results; these data come from several IRS sources. IRS developed ERIS to track information on the resolution of enforcement cases. Prior to the development of ERIS in 1990, IRS did not have a system that tracked the enforcement results from each fiscal year. The purpose of ERIS is to account for revenues collected and costs incurred as a result of IRS enforcement activities. In addition, ERIS provides a link to taxes assessed and collected for different types of cases tracked by enforcement activities. 
ERIS enhances IRS’ ability to provide a more complete reporting of enforcement results and forecast enforcement revenues. ERIS also may help IRS manage its enforcement activities better to the extent that it provides more complete reporting of costs and revenues. ERIS works by integrating data from the various enforcement functions with corresponding Master File data to build a comprehensive enforcement database. It merges data extracted from the Audit Information Management System, Information Reporting Program Case Analysis System, Individual and Business Master Files, Individual Retirement Account Master File, and Non-Master File. Once the data are integrated from various sources, IRS develops a summary database from which comprehensive reports are printed. [Appendix II tables, one per fiscal year, show amounts as percent of amount recommended. Table notes: “Other” includes audits of returns for employment tax, estate tax, excise tax, and gift tax; audits conducted in IRS training; and audits categorized by IRS as other. Amounts do not add to total due to rounding.] 
[Appendix III table legend: Assessed = tax and penalty assessed less related abatements; 1040A = nonbusiness returns filed by individuals; TPI = total positive income (income reported as a positive amount on the tax return); C-TGR = Form 1040 Schedule C (profit or loss from business) total gross receipts; F-TGR = Form 1040 Schedule F (profit or loss from farming) total gross receipts. The subcategories are defined by ranges of TPI, C-TGR, and F-TGR.] In developing ratios of the amount of taxes collected to the direct costs for the audit, assessment, and collection activities, we could not include IRS’ direct staff costs for the collection activity. We tried various analyses to gain insight into the costs of collecting audit-based assessments, but none were conclusive. In sum, IRS did not have data that would indicate the significance of these costs. For example, we found that about 10 percent of the additional taxes collected were collected through the direct involvement of Collection staff. However, IRS’ data did not help us to translate this information into the related direct staff costs. Nor did IRS have enough data to allow us to develop formulas for allocating the Collection Division’s overall staff costs to the direct staff costs of collecting audit-based tax assessments. Although our analyses did not help us compute the direct collection costs, we are reporting our results on how long IRS took to collect the audit-based tax assessments. To determine how much time IRS took to collect the tax assessments associated with audits closed in fiscal year 1992, we analyzed ERIS data on the amount of taxes collected from the various types of collection notices. Using IRS manuals, we determined the number of weeks that was to have elapsed between the assessment and each notice. 
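The timing analysis described above groups each collected dollar by how many weeks elapsed between assessment and collection. A minimal sketch, with illustrative records and the 5- and 15-week thresholds used in appendix IV:

```python
# Sketch of bucketing collections by timing relative to assessment, as in
# the appendix IV analysis. Records and amounts are illustrative.
collections = [
    {"weeks_after_assessment": -2, "amount": 500.0},  # paid before assessment
    {"weeks_after_assessment": 3,  "amount": 250.0},
    {"weeks_after_assessment": 10, "amount": 100.0},
    {"weeks_after_assessment": 20, "amount": 150.0},  # Collection staff involved
]

def timing_shares(records):
    """Percent of dollars collected before/within 5 weeks of assessment,
    6-15 weeks after, and over 15 weeks after."""
    buckets = {"<=5 weeks (incl. pre-assessment)": 0.0, "6-15 weeks": 0.0, ">15 weeks": 0.0}
    for rec in records:
        weeks = rec["weeks_after_assessment"]
        if weeks <= 5:
            buckets["<=5 weeks (incl. pre-assessment)"] += rec["amount"]
        elif weeks <= 15:
            buckets["6-15 weeks"] += rec["amount"]
        else:
            buckets[">15 weeks"] += rec["amount"]
    total = sum(buckets.values())
    return {k: round(100 * v / total) for k, v in buckets.items()}

print(timing_shares(collections))
```

Computed per audit type, these shares produce the kind of comparison shown in tables IV.1 and IV.2.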
These collections also accounted for amounts that taxpayers paid as a result of the audit before IRS made the official assessment. We analyzed the timing of collections by type of audit. For audits closed in fiscal year 1992, IRS had collected $6.1 billion of the $8.5 billion in additional assessments as of September 27, 1997. Although the time taken to do the audits and settle any disputes can be quite lengthy, IRS usually collected any additional taxes prior to or soon after assessment; about 81 percent was collected either prior to assessment or within the first 5 weeks after it. IRS collects taxes prior to assessment to the extent that taxpayers overwithhold their income taxes, overestimate their quarterly tax payments for the audited tax return, carry over excess tax payments from previous tax returns, or make a payment before the additional assessment to prevent the accrual of further interest. Specifically, these analyses showed that IRS collected taxes sooner from corporations than from individuals. IRS collected 95 percent of the taxes from CEP corporations and 92 percent from other large corporations before assessment or within the first 5 weeks after it. For individuals audited at service centers or district offices, IRS collected about 60 percent of the taxes within these time periods. However, the portion of the taxes collected after 15 weeks, when staff from the Collection Division become involved, was much higher for individuals than for corporations. For example, less than 5 percent of the taxes collected from CEP and other large corporations were collected after 15 weeks, compared with about 25 percent of the taxes collected from individuals audited at service centers and district offices. Table IV.1 and table IV.2 show the timing of collections of assessed taxes for the various types of audits closed in fiscal year 1992.
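The timing categories used in these tables can be sketched as a simple classifier. The bucket boundaries (before assessment, within 5 weeks, 5 to 15 weeks, over 15 weeks) come from the discussion above; the function name and sample dates are illustrative, not drawn from IRS systems.

```python
from datetime import date

def timing_bucket(assessed_on, collected_on):
    # Classify a collection into the timing buckets used in tables IV.1
    # and IV.2 (boundaries from the text; names are illustrative).
    weeks = (collected_on - assessed_on).days / 7
    if weeks <= 0:
        return "before assessment"
    if weeks <= 5:
        return "within 5 weeks of assessment"
    if weeks <= 15:
        return "5 to 15 weeks after assessment"
    return "over 15 weeks after assessment"

# A payment made before the assessment date falls in the first bucket.
print(timing_bucket(date(1992, 10, 1), date(1992, 9, 15)))
```

Under this scheme, the roughly 81 percent collected early corresponds to the first two buckets combined.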
[Tables IV.1 and IV.2: percent of assessed taxes collected before assessment, within 5 weeks, 5 to 15 weeks, and over 15 weeks after assessment, by type of audit. Notes: “Other” includes audits of returns for employment tax, estate tax, excise tax, and gift tax; audits conducted in IRS training; and audits categorized by IRS as other. 1040A = nonbusiness returns filed by individuals; TPI = total positive income (income reported as a positive amount on the tax return); C-TGR = Form 1040 Schedule C (profit or loss from business) total gross receipts; F-TGR = Form 1040 Schedule F (profit or loss from farming) total gross receipts. Amounts do not add to totals due to rounding.]

Royce L. Baker, Tax Issue Area Coordinator
James A. Slaterbeck, Evaluator-in-Charge
Bradley L. Terry, Evaluator
Thomas N. Bloom, Computer Specialist

Ordering information: The first copy of each GAO report and testimony is free; additional copies are $2 each. Orders should be sent to U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies mailed to a single address are discounted 25 percent. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), Washington, DC; by calling (202) 512-6000; by fax at (202) 512-6061; or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the Internal Revenue Service's (IRS) measures of the results of its audits of tax returns, focusing on: (1) how much of the additional taxes recommended in all types of audits that were closed in fiscal year (FY) 1992 through FY 1997 had been settled or were still in dispute and, if settled, how much had been assessed and collected as of September 27, 1997; (2) how much of the recommended additional taxes had been assessed and collected for audits closed in FY 1992; and (3) whether broad IRS measures of audit results fully represented audit revenues and costs. GAO noted that: (1) for audits closed in FY 1992 through FY 1997, IRS recommended tens of billions of dollars in additional taxes for each year; (2) however, not all recommended taxes are assessed, and not all assessed taxes are collected; (3) IRS had settled 40 percent of the taxes recommended in FY 1992 audits without assessment, usually because of IRS Office of Appeals' decisions, and had yet to settle the assessment status of another 26 percent; (4) of the $8.5 billion assessed, IRS had collected 72 percent, which means that 25 percent of all recommended taxes for FY 1992 audits had been collected as of September 27, 1997; (5) for audits closed in FY 1993 through FY 1997, assessment and collection results were less complete because less time had elapsed for these actions to occur compared with 1992; (6) the assessment and collection rates varied by the type of audit closed in FY 1992; (7) in general, IRS assessed a higher percentage of the recommended taxes for simpler audits compared with complex audits; (8) however, IRS collected a higher percentage of the recommended taxes from the simpler audits than from complex audits; (9) for simpler service center audits, IRS had assessed 76 percent of the recommended taxes but had collected 56 percent of the assessed taxes as of September 27, 1997; (10) at the other extreme, after audits of complex returns
from Coordinated Examination Program (CEP) corporations, IRS had assessed 20 percent of the recommended taxes but had collected 97 percent of the assessed taxes; (11) as of September 27, 1997, 39 percent of the amounts recommended in CEP audits were still in dispute; (12) IRS' existing performance measures do not cover all audit-based revenues or costs; (13) measuring the taxes recommended does not account for the related assessments and collections, nor does it account for indirect revenue gains; (14) measuring other types of revenues is important because not all recommended taxes are assessed or collected; (15) IRS measures the staff time directly charged to audits but not the dollar costs of this direct time; (16) compiling complete and reliable data on the indirect revenues and taxpayer costs can be very difficult because of limitations in the data sources; (17) beyond these limitations, IRS did not use other existing data to develop and report measures that more fully represented audit results; (18) as an additional measure, audit revenues could be compared with related costs; and (19) to develop such measures, IRS would need more data on both direct and indirect costs.
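The percentages in the summary above can be roughly reconciled with one another. The sketch below assumes the 40 percent and 26 percent figures are shares of total recommended taxes, as the surrounding text implies; the implied share of recommended taxes collected then lands near 24 percent, consistent with the reported 25 percent once rounding in the published percentages is taken into account.

```python
# Rough reconciliation of the FY 1992 figures (illustrative arithmetic only).
assessed = 8.5e9                    # dollars assessed from FY 1992 audits
collected = 0.72 * assessed         # 72 percent of assessed taxes collected
assessed_share = 1.0 - 0.40 - 0.26  # share of recommended taxes assessed

recommended = assessed / assessed_share    # implied total recommended taxes
collected_share = collected / recommended  # share of recommended collected

# Prints "24%", close to the reported 25 percent given rounding.
print(f"{collected_share:.0%}")
```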
In general, SCHIP funds are targeted to uninsured children in families whose incomes are too high to qualify for Medicaid but are at or below 200 percent of FPL. Recognizing the variability in state Medicaid programs, federal SCHIP law allows a state to cover children up to 200 percent of the poverty level or 50 percentage points above its existing Medicaid eligibility standard as of March 31, 1997. Additional flexibility regarding eligibility levels is available, however, as Medicaid and SCHIP provide some flexibility in how a state defines income for purposes of eligibility determinations. Congress appropriated approximately $40 billion over 10 years (from fiscal year 1998 through 2007) for distribution among states with approved SCHIP plans. Allocations to states are based on a formula that takes into account the number of low-income children in a state. In general, states that choose to expand Medicaid to enroll eligible children under SCHIP must follow Medicaid rules, while separate child health programs have additional flexibilities in benefits, cost-sharing, and other program elements. Under certain circumstances, states may also cover adults under SCHIP. SCHIP allotments to states are based on an allocation formula that uses (1) the number of children, which is expressed as a combination of two estimates—the number of low-income children without health insurance and the number of all low-income children—and (2) a factor representing state variation in health care costs. Under federal SCHIP law and subject to certain exceptions, states have 3 years to use each fiscal year’s allocation, after which any remaining funds are redistributed among the states that had used all of that fiscal year’s allocation. Federal law does not specify a redistribution formula but leaves it to the Secretary of Health and Human Services to determine an appropriate procedure for redistribution of unused allocations.
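The two funding computations described in this section can be sketched as follows. The allotment sketch is purely illustrative: the equal weighting of uninsured versus all low-income children and the state figures are assumptions, since the report says only that the formula combines the two child counts with a cost factor. The enhanced matching rate, by contrast, follows a fixed statutory computation (the Medicaid rate plus 30 percent of the gap between that rate and 100 percent), which reproduces the 65-percent-for-50-percent example cited in this report; statutory caps on the enhanced rate are omitted from the sketch.

```python
def enhanced_match(medicaid_rate):
    # SCHIP enhanced federal matching rate: the Medicaid rate plus 30
    # percent of the difference between that rate and 100 percent.
    return medicaid_rate + 0.30 * (100 - medicaid_rate)

def allotments(appropriation, states):
    # Illustrative split: each state's share is proportional to its child
    # count times its cost factor. The equal weighting of uninsured and
    # all low-income children below is an assumption, not the statute.
    weights = {
        name: (0.5 * uninsured + 0.5 * low_income) * cost_factor
        for name, (uninsured, low_income, cost_factor) in states.items()
    }
    total = sum(weights.values())
    return {name: appropriation * w / total for name, w in weights.items()}

# A state with a 50 percent Medicaid match gets a 65 percent SCHIP match.
print(enhanced_match(50))  # 65.0

# Hypothetical two-state split of a $100 million appropriation.
shares = allotments(100e6, {
    "A": (50_000, 200_000, 1.0),  # (uninsured, all low-income, cost factor)
    "B": (25_000, 100_000, 1.2),
})
```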
Absent congressional action, states are generally provided 1 year to spend any redistributed funds, after which time funds may revert to the U.S. Treasury. Each state’s SCHIP allotment is available as a federal match based on state expenditures. SCHIP offers a strong incentive for states to participate by providing an enhanced federal matching rate that is based on the federal matching rate for a state’s Medicaid program—for example, the federal government will reimburse at a 65 percent match under SCHIP for a state receiving a 50 percent match under Medicaid. There are different formulas for allocating funds to states, depending on the fiscal year. For fiscal years 1998 and 1999, the formula used estimates of the number of low-income uninsured children to allocate funds to states. For fiscal year 2000, the formula changed to include estimates of the total number of low-income children as well. SCHIP gives the states the choice of three design approaches: (1) a Medicaid expansion program, (2) a separate child health program with more flexible rules and increased financial control over expenditures, or (3) a combination program, which has both a Medicaid expansion program and a separate child health program. Initially, states had until September 30, 1998, to select a design approach, submit their SCHIP plans, and obtain HHS approval in order to qualify for their fiscal year 1998 allotment. With an approved state child health plan, a state could begin to enroll children and draw down its SCHIP funds. The design approach a state chooses has important financial and programmatic consequences, as shown below. Expenditures. In separate child health programs, federal matching funds cease after a state expends its allotment, and non-benefit-related expenses (for administration, direct services, and outreach) are limited to 10 percent of claims for services delivered to beneficiaries. 
In contrast, Medicaid expansion programs may continue to receive federal funds for benefits and for non-benefit-related expenses at the Medicaid matching rate after states exhaust their SCHIP allotments. Enrollment. Separate child health programs may establish separate eligibility rules and enrollment caps. In addition, a separate child health program may limit its own annual contribution, create waiting lists, or stop enrollment once the funds it budgeted for SCHIP are exhausted. A Medicaid expansion must follow Medicaid eligibility rules regarding income, residency, and disability status, and thus cannot limit enrollment. Benefits. Separate child health programs must meet benefit standards, for example, benchmark standards that use specified private or public insurance plans as the basis for coverage. However, Medicaid—and therefore a Medicaid expansion—must provide coverage of all benefits available to the Medicaid population, including certain services for children. In particular, Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) requires states to cover treatment of conditions diagnosed during routine screenings—regardless of whether the benefit would otherwise be covered under the state’s Medicaid program. A separate child health program is not required to provide EPSDT coverage. Beneficiary cost-sharing. Separate child health programs may impose limited cost-sharing—through premiums, copayments, or enrollment fees—on children in families with incomes above 150 percent of FPL, up to 5 percent of family income annually. Since the Medicaid program did not previously allow cost-sharing for children, a Medicaid expansion program under SCHIP would have followed this rule. In general, states may cover adults under the SCHIP program under two key approaches.
First, federal SCHIP law allows the coverage of adults in families with children eligible for SCHIP if a state can show that doing so is cost-effective and demonstrates that such coverage does not result in “crowd-out.” Crowd-out is a phenomenon in which new public programs, or expansions of existing public programs, designed to extend coverage to the uninsured prompt some privately insured persons to drop their private coverage and take advantage of the expanded public subsidy. The cost-effectiveness test requires the states to demonstrate that covering both adults and children in a family under SCHIP is no more expensive than covering only the children. The states may also elect to cover children whose parents have access to employer-based or private health insurance coverage by using SCHIP funding to subsidize the cost. Second, under section 1115 of the Social Security Act, states may receive approval to waive certain Medicaid or SCHIP requirements. The Secretary of Health and Human Services may approve waivers of statutory requirements in the case of experimental, pilot, or demonstration projects that are likely to promote program objectives. In August 2001, HHS indicated that it would allow states greater latitude in using section 1115 demonstration projects (or waivers) to modify their Medicaid and SCHIP programs and that it would expedite consideration of state proposals. One initiative, the Health Insurance Flexibility and Accountability Initiative (HIFA), focuses on proposals for covering more uninsured people while at the same time not raising program costs. States have received approval of section 1115 waivers that provide coverage of adults using SCHIP funding. SCHIP enrollment increased rapidly over the first years of the program and has stabilized for the past several years.
In 2005, the most recent year for which data are available, 4.0 million individuals were enrolled during the month of June, while the total enrollment count—which represents a cumulative count of individuals enrolled at any time during fiscal year 2005—was 6.1 million. Of these 6.1 million enrollees, 639,000 were adults. Because SCHIP requires that applicants first be screened for Medicaid eligibility, some states have experienced increases in their Medicaid programs as well, further contributing to public health insurance coverage of low-income children during this same period. Based on a 3-year average of 2003 through 2005 CPS data, the percentage of uninsured children varied considerably by state, with a national average of 11.7 percent. SCHIP annual enrollment grew quickly from program inception through 2002 and then stabilized at about 4 million from 2003 through 2005, on the basis of a point-in-time enrollment count. Total enrollment, which counts individuals enrolled at any time during a particular fiscal year, showed a similar pattern of growth and was over 6 million as of June 2005 (see fig. 1). Generally, point-in-time enrollment is a subset of total enrollment, as it represents the number of individuals enrolled during a particular month. In contrast, total enrollment includes an unduplicated count of any individual enrolled at any time during the fiscal year; thus the data are cumulative, with new enrollments occurring monthly. Because states must also screen for Medicaid eligibility before enrolling children into SCHIP, some states have noted increased enrollment in Medicaid as a result of SCHIP. For example, Alabama reported a net increase of approximately 121,000 children in Medicaid since its SCHIP program began in 1998. New York reported that, for fiscal year 2005, approximately 204,000 children were enrolled in Medicaid as a result of outreach activities, compared with 618,973 children enrolled in SCHIP.
In contrast, not all states found that their Medicaid enrollment was significantly affected by SCHIP. For example, Idaho reported that a negligible number of children were found eligible for Medicaid as a result of outreach related to its SCHIP program. Maryland identified an increase of 0.2 percent between June 2004 and June 2005. Based on a 3-year average of 2003 through 2005 CPS data, the percentage of uninsured children varied considerably by state and had a national average of 11.7 percent. The percentage of uninsured children ranged from 5.6 percent in Vermont to 20.4 percent in Texas (see fig. 2). Generally, the proportion of children without insurance tended to be lower in the Midwest or Northeast and higher in the South and the West. States’ SCHIP programs reflect the flexibility allowed in structuring approaches to providing health care coverage, including their choice among three program designs—Medicaid expansions, separate child health programs, and combination programs, which have both a Medicaid expansion and a separate child health program component. As of fiscal year 2005, 41 state SCHIP programs covered children in families whose incomes are up to 200 percent FPL or higher, with 7 of the 41 states covering children in families whose incomes are at 300 percent FPL or higher. States generally imposed some type of cost-sharing in their programs, with 39 states charging some combination of premiums, copayments, or enrollment fees, compared with 11 states that did not charge cost-sharing. Nine states reported operating premium assistance programs that use SCHIP funding to subsidize the cost of premiums for private health insurance coverage. As of February 2007, we identified 14 states with approved section 1115 waivers to cover adults, including parents, pregnant women, and, in some cases, childless adults. 
Of the 50 states currently operating SCHIP programs, as of July 2006, 11 states had Medicaid expansion programs, 18 states had separate child health programs, and 21 states had a combination of both approaches (see fig. 3). When the states initially designed their SCHIP programs, 27 states opted for expansions to their Medicaid programs. Many of these initial Medicaid expansion programs served as “placeholders” for the state—that is, minimal expansions in Medicaid eligibility were used to guarantee the 1998 fiscal year SCHIP allocation while allowing time for the state to plan a separate child health program. Other initial Medicaid expansions—whether placeholders or part of a combination program—also accelerated the expansion of coverage for children aged 14 to 18 up to 100 percent of FPL, which states were already required to phase in under federal Medicaid law. A state’s starting point for SCHIP eligibility depends on the eligibility levels previously established in its Medicaid program. Under federal Medicaid law, all state Medicaid programs must cover children aged 5 and under if their family incomes are at or below 133 percent of FPL and children aged 6 through 18 if their family incomes are at or below 100 percent of FPL. Some states have chosen to cover children in families with higher income levels in their Medicaid programs. Each state’s starting point essentially creates a “corridor”—generally, SCHIP coverage begins where Medicaid ends and then continues upward, depending on each state’s eligibility policy. In fiscal year 2005, 41 states used SCHIP funding to cover children in families with incomes up to 200 percent of FPL or higher, including 7 states that covered children in families with incomes up to 300 percent of FPL or higher. In total, 27 states provided SCHIP coverage for children in families with incomes up to 200 percent of FPL, which was $38,700 for a family of four in 2005.
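The dollar figure above follows directly from the poverty guideline: in 2005, the HHS poverty guideline for a family of four in the 48 contiguous states was $19,350, so an eligibility ceiling expressed as a percent of FPL converts to dollars by simple multiplication. The sketch below is illustrative; the function name is not an official term.

```python
FPL_2005_FAMILY_OF_FOUR = 19_350  # 2005 HHS poverty guideline, family of four

def fpl_threshold(percent, guideline=FPL_2005_FAMILY_OF_FOUR):
    # Convert an eligibility ceiling stated as a percent of the federal
    # poverty level into a dollar income limit.
    return guideline * percent / 100

print(fpl_threshold(200))  # 38700.0, the figure cited in the text
```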
Another 14 states covered children in families with incomes above 200 percent of FPL, with New Jersey reaching as high as 350 percent of FPL in its separate child health program. Finally, 9 states set SCHIP eligibility levels for children in families with incomes below 200 percent of FPL. For example, North Dakota covered children in its separate child health program up to 140 percent of FPL. (See fig. 4.) Under federal SCHIP law, states with separate child health programs have the option of using different bases for establishing their benefit packages. Separate child health programs can choose to base their benefit packages on (1) one of several benchmarks specified in federal SCHIP law, such as the Federal Employees Health Benefits Program (FEHBP) or state employee coverage; (2) a benchmark-equivalent set of services specified in the statute; (3) coverage equivalent to state-funded child health programs in Florida, New York, or Pennsylvania; or (4) a benefit package approved by the Secretary of Health and Human Services (see table 1). In some cases, separate child health programs have changed their benefit packages, adding and removing benefits over time, as follows: In 2003, Texas discontinued dental services, hospice services, skilled nursing facilities coverage, tobacco cessation programs, vision services, and chiropractic services. In 2005, the state added many of these services (chiropractic services, hospice services, skilled nursing facilities, tobacco cessation services, and vision care) back into the SCHIP benefit package and increased coverage of mental health and substance abuse services. In January 2002, Utah changed its benefit structure for dental services, reducing coverage for preventive (cleanings, examinations, and x-rays) and emergency dental services in order to cover as many children as possible with limited funding. 
In September 2002, the dental benefit package was further restructured to include coverage for an accidental dental benefit, fluoride treatments, and sealants. In 2005, most states’ SCHIP programs required families to contribute to the cost of care with some kind of cost-sharing requirement. The two major types of cost-sharing—premiums and copayments—can have different behavioral effects on an individual’s participation in a health plan. Generally, premiums are seen as restricting entry into a program, whereas copayments affect the use of services within the program. There is research indicating that if cost-sharing is too high, or imposed on families whose income is too low, it can impede access to care and create financial burdens for families. In 2005, states’ annual SCHIP reports showed that 39 states had some type of cost-sharing—premiums, copayments, or enrollment fees—while 11 states reported no cost-sharing in their SCHIP programs. Overall, 16 states charged premiums and copayments, 14 states charged premiums only, and 9 states charged copayments only (see fig. 5). Cost-sharing occurred more frequently in the separate child health programs than in Medicaid expansion programs. For example, 8 states with Medicaid expansion programs had cost-sharing requirements, compared with 34 states operating separate child health program components. The amount of premiums charged varied considerably among the states that charged cost-sharing. For example, premiums ranged from $5.00 per family per month for children in families with incomes from 150 to 200 percent of FPL in Michigan to $117 per family per month for children in families with incomes from 300 to 350 percent of FPL in New Jersey. Federal SCHIP law prohibits states from imposing cost-sharing on SCHIP-eligible children that totals more than 5 percent of family income annually. In addition, cost-sharing for children may be imposed on the basis of family income. 
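The statutory ceiling on cost-sharing converts directly into a dollar cap for a given family. A minimal sketch (the function name is illustrative):

```python
def annual_cost_sharing_cap(family_income):
    # Federal SCHIP rule: premiums, copayments, and enrollment fees
    # combined may not exceed 5 percent of family income per year.
    return 0.05 * family_income

# For a family of four at 200 percent of FPL in 2005 ($38,700), the cap
# works out to $1,935 per year.
print(annual_cost_sharing_cap(38_700))  # 1935.0
```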
For example, we earlier reported that in 2003, Virginia SCHIP copayments for children in families with incomes from 133 percent to below 150 percent of FPL were $2 per physician visit or per prescription and $5 for services for children in families with higher incomes. In fiscal year 2005, nine states reported operating premium assistance programs (see table 2), but implementation remains a challenge. Enrollment in these programs varied across the states. For example, Louisiana reported having under 200 enrollees and Oregon reported having nearly 6,000 enrollees. To be eligible for SCHIP, a child must not be covered under any other health coverage program or have private health insurance. However, some uninsured children may live in families with access to employer-sponsored health insurance coverage. Therefore, states may choose to establish premium assistance programs, where the state uses SCHIP funds to contribute to health insurance premium payments. To the extent that such coverage is not equivalent to the states’ Medicaid or SCHIP level of benefits, including limited cost-sharing, states are required to pay for supplemental benefits and cost-sharing to make up this difference. Under certain section 1115 waivers, however, states have not been required to provide this supplemental coverage to participants. Several states reported facing challenges implementing their premium assistance programs. Louisiana, Massachusetts, New Jersey, and Virginia cited administration of the program as labor intensive. For example, Massachusetts noted that it is a challenge to maintain current information on program participants’ employment status, choice of health plan, and employer contributions, but such information is needed to ensure accurate premium payments. Two states—Rhode Island and Wisconsin—noted the challenges of operating premium assistance programs, given changes in employer-sponsored health plans and accompanying costs. 
For example, Rhode Island indicated that increases in premiums are being passed to employees, which makes it more difficult to meet cost-effectiveness tests applicable to the purchase of family coverage. States opting to cover adult populations using SCHIP funding may do so under an approved section 1115 waiver. As of February 2007, we identified 14 states with approved waivers to cover at least one of three categories of adults: parents of eligible Medicaid and SCHIP children, pregnant women, and childless adults. (See table 3.) The Deficit Reduction Act of 2005 (DRA), however, has prohibited the use of SCHIP funds to cover nonpregnant childless adults. Effective October 1, 2005, the Secretary of Health and Human Services may not approve new section 1115 waivers that use SCHIP funds for covering nonpregnant childless adults. However, waivers for covering these adults that were approved prior to this date are allowed to continue until the end of the waiver. Additionally, the Secretary may continue to approve section 1115 waivers that extend SCHIP coverage to pregnant adults, as well as parents and other caretaker relatives of children eligible for Medicaid or SCHIP. SCHIP program spending was low initially, as many states did not implement their programs or report expenditures until 1999 or later, but spending was much higher in the program’s later years and now threatens to exceed available funding. Beginning in fiscal year 2002, states together spent more federal dollars than they were allotted for the year and thus relied on the 3-year availability of SCHIP allotments or on redistributed SCHIP funds to cover additional expenditures. But as spending has grown, the pool of funds available for redistribution has shrunk. Some states consistently spent more than their allotted funds, while other states consistently spent less.
Overall, 18 states were projected to have shortfalls—that is, they were expected to exhaust available funds, including current and prior-year allotments—in at least 1 year from 2005 through 2007. These shortfall states were more likely to have a Medicaid component to their SCHIP program, cover children across a broader range of income groups, and cover adults through section 1115 waivers than were the 32 states that were not projected to have shortfalls. In addition, the shortfall states that covered adults generally began covering them earlier than nonshortfall states. To cover projected shortfalls that several states faced, Congress appropriated an additional $283 million in fiscal year 2006. SCHIP program spending began low, but by fiscal year 2002, states’ aggregate annual spending from their federal allotments exceeded their annual allotments. Spending was low in the program’s first 2 years because many states did not implement their programs or report expenditures until fiscal year 1999 or later. Combined federal and state spending was $180 million in 1998 and $1.3 billion in 1999. However, by the end of the program’s third fiscal year (2000), all 50 states and the District of Columbia had implemented their programs and were drawing down their federal allotments. Since fiscal year 2002, SCHIP spending has grown by an average of about 10 percent per year. (See fig. 6.) From fiscal year 1998 through 2001, annual federal SCHIP expenditures were well below annual allotments, ranging from 3 percent of allotments in fiscal year 1998 to 63 percent in fiscal year 2001. 
In fiscal year 2002, the states together spent more federal dollars than they were allotted for the year, in part because total allotments dropped from $4.25 billion in fiscal year 2001 to $3.12 billion in fiscal year 2002, marking the beginning of the so-called “SCHIP dip.” However, even after annual SCHIP appropriations increased in fiscal year 2005, expenditures continued to exceed allotments (see fig. 7). Generally, states were able to draw on unused funds from prior years’ allotments to cover expenditures incurred in a given year that were in excess of their allotment for that year, because, as discussed earlier, the federal SCHIP law gave states 3 years to spend each annual allotment. In certain circumstances, states also retained a portion of unused allotments. States that have outspent their annual allotments over the 3-year period of availability have also relied on redistributed SCHIP funds to cover excess expenditures. But as overall spending has grown, the pool of funds available for redistribution has shrunk from a high of $2.82 billion in unused funds from fiscal year 1999 to $0.17 billion in unused funds from fiscal year 2003. Meanwhile, the number of states eligible for redistributions has grown from 12 states in fiscal year 2001 to 40 states in fiscal year 2006. (See fig. 8.) Congress has acted on several occasions to change the way SCHIP funds are redistributed. In fiscal years 2000 and 2003, Congress amended statutory provisions for the redistribution and availability of unused SCHIP allotments from fiscal years 1998 through 2001, reducing the amounts available for redistribution and allowing states that had not exhausted their allotments by the end of the 3-year period of availability to retain some of these funds for additional years. Despite these steps, $1.4 billion in unused SCHIP funds reverted to the U.S. Treasury by the end of fiscal year 2005. 
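The redistribution mechanics described above (unused allotments recaptured at the end of the 3-year window and shared among states that exhausted theirs) can be sketched as follows. Because the statute leaves the redistribution procedure to the Secretary of Health and Human Services, the pro-rata split used here is an assumption, and the state figures are hypothetical.

```python
def redistribute(allotments, spent):
    # Recapture each state's unused balance at the end of the 3-year
    # window and split the pool among states that spent their full
    # allotment, pro rata to their allotments (assumed split rule).
    unused = {s: allotments[s] - spent[s] for s in allotments}
    pool = sum(v for v in unused.values() if v > 0)
    exhausted = [s for s in allotments if spent[s] >= allotments[s]]
    total = sum(allotments[s] for s in exhausted)
    return {s: pool * allotments[s] / total for s in exhausted}

# Hypothetical: state A underspends a $100M allotment; B and C spent theirs.
extra = redistribute(
    {"A": 100e6, "B": 80e6, "C": 120e6},
    {"A": 60e6, "B": 80e6, "C": 120e6},
)
```

In this example, A's unused $40 million is split between B and C in proportion to their allotments.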
Congress has also appropriated additional funds to cover states’ projected SCHIP program shortfalls. The DRA included a $283 million appropriation to cover projected shortfalls for fiscal year 2006. The Centers for Medicare & Medicaid Services (CMS) divided these funds among 12 states as well as the territories. In the beginning of fiscal year 2007, Congress acted to redistribute unused SCHIP allotments from fiscal year 2004 to states projected to face shortfalls in fiscal year 2007. The National Institutes of Health Reform Act of 2006 makes these funds available to states in the order in which they experience shortfalls. In January 2007, the Congressional Research Service (CRS) projected that although 14 states will face shortfalls, the $147 million in unused fiscal year 2004 allotments will be redistributed to the five states that are expected to experience shortfalls first. The NIH Reform Act also created a redistribution pool of funds by redirecting fiscal year 2005 allotments from states that at midyear (March 31, 2007) have more than twice the SCHIP funds they are projected to need for the year. Some states consistently spent more than their allotted funds, while other states consistently spent less. From fiscal years 2001 through 2006, 40 states spent their entire allotments at least once, thereby qualifying for redistributions of other states’ unused allotments; 11 states spent their entire allotments in at least 5 of the 6 years that funds were redistributed. Moreover, 18 states were projected to face shortfalls—that is, they were expected to exhaust available funds, including current and prior-year allotments—in at least 1 of the final 3 years of the program. (See fig. 9.)
When we compared the 18 states that were projected to have shortfalls with the 32 states that were not, we found that the shortfall states were more likely to have a Medicaid component to their SCHIP program, to have a SCHIP eligibility corridor broader than the median, and to cover adults in SCHIP under section 1115 waivers (see table 4). Fifteen of the 18 shortfall states (83 percent) had Medicaid expansion programs or combination programs that included Medicaid expansions, which must follow Medicaid rules, such as providing the full Medicaid benefit package and continuing to provide coverage to all eligible individuals even after the states’ SCHIP allotments are exhausted. The shortfall states tended to have a broader eligibility corridor in their SCHIP programs, indicating that, on average, the shortfall states covered children in SCHIP from lower income levels, from higher income levels, or both. For example, 33 percent of the shortfall states covered children in their SCHIP programs above 200 percent of FPL, compared with 25 percent of the nonshortfall states. Finally, 6 of the 18 shortfall states (33 percent) were covering adults in SCHIP under section 1115 waivers by the end of fiscal year 2006, compared with 6 of the 32 nonshortfall states (19 percent). On average, the shortfall states that covered adults began covering them earlier than nonshortfall states and enrolled a higher proportion of adults. At the end of fiscal year 2006, 12 states covered adults under section 1115 waivers using SCHIP funds. Five of these 12 states began covering adults before fiscal year 2003, and all 5 states faced shortfalls in at least 1 of the final 3 years of the program. In contrast, none of the 5 states that began covering adults with SCHIP funds in the period from fiscal year 2004 through 2006 faced shortfalls. On average, the shortfall states covered adults more than twice as long as nonshortfall states (5.1 years compared with 2.3 years by the end of fiscal year 2006). 
Shortfall states also enrolled a higher proportion of adults. Nine states, including six shortfall states, covered adults using SCHIP funds throughout fiscal year 2005. In these nine states, adults accounted for an average of 45 percent of total enrollment. However, in the shortfall states, the average proportion was more than twice as high as in nonshortfall states. Adults accounted for an average of 55 percent of enrollees in the shortfall states, compared with 24 percent in the nonshortfall states. (See table 5.) While analyses of states as a group reveal some broad characteristics of states’ programs, examining the experiences of individual states offers insights into other factors that have influenced states’ program balances. States themselves have offered a variety of reasons for shortfalls and surpluses. These examples, while not exhaustive, highlight a few factors that have shaped states’ financial circumstances under SCHIP, including the following:

Inaccuracies in the CPS-based estimates on which states’ allotments were based. North Carolina, a shortfall state, offers a case in point. In 2004, the state had more low-income children enrolled in the program than CPS estimates indicated were eligible. To curb spending, North Carolina shifted children through age 5 from the state’s separate program to a Medicaid expansion, reduced provider payments, and limited enrollment growth.

Annual funding levels that did not reflect enrollment growth. Iowa, another shortfall state, noted that annual allocations provided too much funding in the early years of the program and too little in the later years. Iowa did not use all its allocations in the first 4 years, and thus the state’s funds were redistributed to other states. Subsequently, however, the state has faced shortfalls as its program matured.

Impact of policies designed to curb or expand program growth. 
Some states have attempted to manage program growth through ongoing adjustments to program parameters and outreach efforts. For example, when Florida’s enrollment exceeded a predetermined target in 2003, the state implemented a waiting list and eliminated outreach funding. When enrollment began to decline, the state reinstituted open enrollment and outreach. Similarly, Texas, commensurate with its budget constraints and projected surpluses, has tightened and loosened eligibility requirements and limited and expanded benefits over time in order to manage enrollment and spending. Children without health insurance are at increased risk of forgoing routine medical and dental care, immunizations, treatment for injuries, and treatment for chronic illnesses. Yet, the states and the federal government face challenges in their efforts to continue to finance health care coverage for children. As health care consumes a growing share of state general fund or operating budgets, slowdowns in economic growth can affect states’ abilities—and efforts—to address the demand for public financing of health services. Moreover, without substantive programmatic or revenue changes, the federal government faces near- and long-term fiscal challenges as the U.S. population ages because spending for retirement and health care programs will grow dramatically. Given these circumstances, we would like to suggest several issues for consideration as Congress addresses the reauthorization of SCHIP. These include the following:

Maintaining flexibility without compromising the goals of SCHIP. The federal-state SCHIP partnership has provided an important opportunity for innovation on the part of states for the overall benefit of children’s health. Providing three design choices for states—Medicaid expansions, separate child health programs, or a combination of both approaches—affords them the opportunity to focus on their own unique and specific priorities. 
For example, expansions of Medicaid offer Medicaid’s comprehensive benefits and administrative structures and ensure children’s coverage if states exhaust their SCHIP allotments. However, this entitlement status also increases financial risk to states. In contrast, SCHIP separate child health programs offer a “block grant” approach to covering children. As long as the states meet statutory requirements, they have the flexibility to structure coverage on an employer-based health plan model and can better control program spending than they can with a Medicaid expansion. However, flexibility within the SCHIP program, such as that available through section 1115 waivers, may also result in consequences that can run counter to SCHIP’s goal—covering children. For example, we identified 14 states that have authority to cover adults with their federal SCHIP funds, with several states covering more adults than children. States’ rationale is that covering low-income parents in public programs such as SCHIP and Medicaid increases the enrollment of eligible children as well, with the result that fewer children go uninsured. Federal SCHIP law provides that families may be covered only if such coverage is cost-effective; that is, covering families costs no more than covering the SCHIP-eligible children. We earlier reported that HHS had approved state proposals for section 1115 waivers to use SCHIP funds to cover parents of SCHIP- and Medicaid-eligible children without regard to cost-effectiveness. We also reported that HHS approved state proposals for section 1115 waivers to use SCHIP funds to cover childless adults, which in our view was inconsistent with federal SCHIP law and allowed SCHIP funds to be diverted from the needs of low-income children. We suggested that Congress consider amending the SCHIP statute to specify that SCHIP funds were not available to provide health insurance coverage for childless adults. 
Under the DRA, Congress prohibited the Secretary of Health and Human Services from approving any new section 1115 waivers to cover nonpregnant childless adults after October 1, 2005, but allowed waivers approved prior to that date to continue. It is important to consider the implications of states’ use of allowable flexibility for other aspects of their programs. For example, what assurances exist that SCHIP funds are being spent in the most cost- effective manner, as required under federal law? In view of current federal fiscal constraints, to what extent should SCHIP funds be available for adult coverage? How has states’ use of available flexibility to establish expanded financial eligibility categories and covered populations affected their ability to operate their SCHIP programs within the original allotments provided to them? Considering the federal financing strategy, including the financial sustainability of public commitments. As SCHIP programs have matured, states’ spending experience can help inform future federal financing decisions. CRS testified in July 2006 that 40 states were now spending more annually than they received in their annual original SCHIP allotments. While many of them did not face shortfalls in 2006 because of available prior-year balances, redistributed funds, and the supplemental DRA appropriation, 14 states are currently projected to face shortfalls in 2007. With the pool of funds available for redistribution virtually exhausted, the continued potential for funding shortfalls for many states raises some fundamental questions about SCHIP financing. If SCHIP is indeed a capped grant program, to what extent does the federal government have a responsibility to address shortfalls in individual states, especially those that have chosen to expand their programs beyond certain parameters? 
In contrast, if the policy goal is to ensure that states do not exhaust their federal SCHIP allotments, by providing for the continuing redistribution of funds or additional federal appropriations, does the program begin to take on the characteristics of an entitlement similar to Medicaid? What overall implications does this have for the federal budget? Assessing issues associated with equity. The 10 years of SCHIP experience that states now have could help inform any policy decisions with respect to equity as part of the SCHIP reauthorization process. Although SCHIP generally targets children in families with incomes at or below 200 percent of FPL, 9 states are relatively more restrictive with their eligibility levels, while 14 states are more expansive, ranging as high as 350 percent of FPL. Given the policy goal of reducing the rate of uninsured among the nation’s children, to what extent should SCHIP funds be targeted to those states that have not yet achieved certain minimum coverage levels? Given current and future federal fiscal constraints, to what extent should the federal government provide federal financial participation above certain thresholds? What broader implications might this have for flexibility, choice, and equity across state programs? Another consideration is whether the formulas used in SCHIP—both the formula to determine the federal matching rate and the formula to allocate funds to states—could be refined to better target funding to certain states for the benefit of covering uninsured children. Because the SCHIP formula is based on the Medicaid formula for federal matching funds, it has some inherent shortcomings that are likely beyond the scope of consideration for SCHIP reauthorization. For the allocation formula that determines the amount of funds a state will receive each year, several analysts, including CRS, have noted alternatives that could be considered. 
These include altering the methods for estimating the number of children at the state level, adjusting the extent to which the SCHIP formula for allocating funds to states includes the number of uninsured versus low-income children, and incorporating states’ actual spending experiences to date into the formula. Considering the effects of any one or combination of these—or other—policy options would likely entail iterative analysis and thoughtful consideration of relevant trade-offs. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other members of the Committee may have. For future contacts regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Carolyn L. Yocom, Assistant Director; Nancy Fasciano; Kaycee M. Glavich; Paul B. Gold; JoAnn Martinez-Shriver; and Elizabeth T. Morrison made key contributions to this statement. Children’s Health Insurance: Recent HHS-OIG Reviews Inform the Congress on Improper Enrollment and Reductions in Low-Income, Uninsured Children. GAO-06-457R. Washington, D.C.: March 9, 2006. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Medicaid and SCHIP: States’ Premium and Cost Sharing Requirements for Beneficiaries. GAO-04-491. Washington, D.C.: March 31, 2004. SCHIP: HHS Continues to Approve Waivers That Are Inconsistent with Program Goals. GAO-04-166R. Washington, D.C.: January 5, 2004. Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003. Medicaid and SCHIP: States Use Varying Approaches to Monitor Children’s Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003. Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. 
Washington, D.C.: July 12, 2002. Children’s Health Insurance: Inspector General Reviews Should Be Expanded to Further Inform the Congress. GAO-02-512. Washington, D.C.: March 29, 2002. Long-Term Care: Aging Baby Boom Generation Will Increase Demand and Burden on Federal and State Budgets. GAO-02-544T. Washington, D.C.: March 21, 2002. Children’s Health Insurance: SCHIP Enrollment and Expenditure Information. GAO-01-993R. Washington, D.C.: July 25, 2001. Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001. Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and Benefits. GAO/HEHS-00-86. Washington, D.C.: April 14, 2000. Children’s Health Insurance Program: State Implementation Approaches Are Evolving. GAO/HEHS-99-65. Washington, D.C.: May 14, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In August 1997, Congress created the State Children's Health Insurance Program (SCHIP) with the goal of significantly reducing the number of low-income uninsured children, especially those who lived in families with incomes exceeding Medicaid eligibility requirements. Unlike Medicaid, SCHIP is not an entitlement to services for beneficiaries but a capped allotment to states. Congress provided a fixed amount--$40 billion from 1998 through 2007--to states with approved SCHIP plans. Funds are allocated to states annually. States have 3 years to use each year's allocation, after which unspent funds may be redistributed to states that have already spent all of that year's allocation. GAO's testimony addresses trends in SCHIP enrollment and the current composition of SCHIP programs across the states, states' spending experiences under SCHIP, and considerations GAO has identified for SCHIP reauthorization. GAO's testimony is based on its prior work; analysis of the Current Population Survey, a monthly survey conducted by the U.S. Census Bureau (2003-2005); information from states' annual SCHIP reports (2002-2005); and SCHIP enrollment and expenditure data from the Centers for Medicare & Medicaid Services (1998-2005). SCHIP enrollment increased rapidly during the program's early years but has stabilized over the past several years. As of fiscal year 2005, the latest year for which data were available, SCHIP covered approximately 6 million enrollees, including about 639,000 adults, with about 4.0 million enrolled as of June of that year. States' SCHIP programs reflect the flexibility the statute allows in structuring approaches to providing health care coverage. As of July 2006, states had chosen from among the allowed program structures as follows: a separate child health program (18 states), an expansion of a state's Medicaid program (11), or a combination of the two (21). 
In addition, 41 states opted to cover children in families with incomes at 200 percent of the federal poverty level (FPL) or higher, with 7 of these states covering children in families with incomes at 300 percent of FPL or higher. Thirty-nine states required families to contribute to the cost of their children's care in SCHIP programs through a cost-sharing requirement, such as a premium or copayment; 11 states charged no cost-sharing. As of January 2007, GAO identified 15 states that had waivers in place to cover adults in their programs; these included parents of eligible Medicaid and SCHIP children, pregnant women, and childless adults. SCHIP spending was initially low, but now threatens to exceed available funding. Since 1998, some states have consistently spent more than their allotments, while others spent consistently less. States that earlier overspent their annual allotments over the 3-year period of availability could rely on other states' unspent SCHIP funds, which were redistributed to cover other states' excess expenditures. By fiscal year 2002, however, states' aggregate annual spending began to exceed annual allotments. As spending has grown, the pool of funds available for redistribution has shrunk. As a result, 18 states were projected to have "shortfalls" of SCHIP funds--meaning they had exhausted all available funds--in at least one of the final 3 years of the program. These 18 states were more likely than the 32 states without shortfalls to have a Medicaid component to their SCHIP programs, cover children across a broader range of income groups, and cover adults in their programs. To cover projected shortfalls faced by several states, Congress appropriated an additional $283 million for fiscal year 2006. 
SCHIP reauthorization occurs in the context of debate on broader national health care reform and competing budgetary priorities, highlighting the tension between the desire to provide affordable health insurance coverage to uninsured individuals, including low-income children, and the recognition of the growing strain of health care coverage on federal and state budgets. As Congress addresses reauthorization, issues to consider include (1) maintaining flexibility within the program without compromising the primary goal to cover children, (2) considering the program's financing strategy, including the financial sustainability of public commitments, and (3) assessing issues associated with equity, including better targeting SCHIP funds to achieve certain policy goals more consistently nationwide.
Metabolife 356, which claims to raise the body’s metabolism and help dieters lose weight while maintaining high energy levels, contains 32 ingredients, including ephedra, guarana (an herbal source of caffeine), bee pollen, and caffeine. The product label recommends that adults take one to two caplets two to three times per day or every 4 hours, not to exceed eight caplets per day. Warnings on the product label suggest that a health care professional be consulted by individuals who are using any other dietary supplement, prescription drug, or over-the-counter drug containing ephedrine alkaloids or who have, or have a family history of, any of 11 health conditions, including heart disease, high blood pressure, diabetes, recurrent headaches, and depression. The label also recommends that persons should not use the product for more than 12 weeks and that exceeding the recommended amount may cause serious adverse health effects including heart attack or stroke. Other possible side effects mentioned on the label include rapid heartbeat, dizziness, severe headache, and shortness of breath. The complete product label is in appendix II. The Dietary Supplement Health and Education Act of 1994 created a framework for FDA’s regulation of dietary supplements as part of its oversight of food safety. Dietary supplements are generally marketed without prior FDA review of their safety and effectiveness. Manufacturers of dietary supplements are responsible for ensuring the safety of the dietary supplements they sell. Therefore, FDA relies on voluntary reports of adverse events from consumers, health professionals, and others in its effort to oversee the safety of marketed dietary supplements. Although there are no adverse event reporting requirements for manufacturers of dietary supplements, there are such requirements for many other products regulated by FDA. 
Various types of adverse events associated with the use of human drugs and biologics, animal drugs, animal feeds containing animal drugs, medical devices, infant formulas, and radiation-emitting devices must be reported to FDA. In addition to dietary supplements, other products regulated by FDA that do not require adverse event reporting are foods, cosmetics, and color additives. (See app. III for details about adverse event reporting requirements.) Voluntary adverse event reporting systems can be valuable tools for identifying potentially serious health issues that may be associated with the use of a product and for maintaining ongoing surveillance. FDA has used adverse event reports to identify issues for further investigation and, as we previously reported, to help identify dietary supplements for which evidence of harm existed and to issue warnings and alerts for dietary supplements. However, by themselves, adverse event reporting systems generally are not sufficient to establish that a product caused the reported health problem. As we noted in 1999, all voluntary surveillance systems, including FDA’s adverse event reporting system, have certain weaknesses. These include underreporting, reporting biases, difficulties estimating population exposure, and poor report quality. For example, the Department of Health and Human Services (HHS) Inspector General reported that a study commissioned by FDA estimated that FDA receives reports for less than 1 percent of adverse events associated with dietary supplements. In addition, it is often difficult to rule out other possible explanations for the event; for example, the event may have been caused by preexisting medical conditions, or by the concurrent use of prescription drugs, over-the-counter drugs, or other supplements. For these reasons, data from adverse event reports alone cannot be used to determine if the occurrence of a symptom among product users is unusually high. 
Between August and December 2002, Metabolife International released copies of 15,948 pages of documents that it said contained call records that reported adverse events associated with Metabolife 356 that the company had received from May 1997 through July 2002. Some pages of call records contained information about more than one call, while others did not contain reports of adverse events. Some pages were photocopies or duplicates of other pages. The information about reported adverse events in the 14,684 health-related call records we examined was limited. Most of the call records we reviewed did not completely record demographic or medical history information about the consumer. Information about age, sex, weight, height, the amount of product used, and the duration of use was frequently not recorded. Handwritten call records were difficult to read and interpret. Information was often inconsistent across different versions of the same call record. The call records contained limited information about reported adverse events and consumers. In some cases the evidence for a report of an adverse event was a single health-related word on the call record, such as “seizure” or “stroke.” In addition, demographic and medical history information was not consistently recorded in the call records. Most of the call records we reviewed did not record information about the consumers’ sex, age, weight, or height. In 88 percent of the call records, at least one of these variables was missing. In addition, information about the amount of Metabolife 356 used and the duration of use was not recorded in 27 and 33 percent of the call records, respectively. (See table 1.) The absence of this information makes it difficult to assess whether the call records represent a signal of health concerns related to the consumption of Metabolife 356. Both the amount of product used and duration of use were recorded for 60 percent of the calls reporting adverse events. 
Relatively few of these records involved consumers who reported taking too much Metabolife 356 or using it for too long a period. Specifically, among call records containing information on the amount of product used or duration of use, 99 and 91 percent of consumers, respectively, reported using the product within the guidelines recommended on the label. The format of the call records varied from brief handwritten notes to typed notations to printed versions of a form used by Metabolife International. In general, less information was recorded for the one-third of call records that were handwritten than for all other types of records. For example, calls recorded on a typed form more frequently recorded additional information such as recommendations by Metabolife International to discontinue Metabolife 356 (62 percent) or to contact a doctor (54 percent) than did those on handwritten forms (13 percent and 8 percent, respectively). Further, it was often difficult to read handwritten call records. We could not always determine how many calls were reported on a single page since there was rarely a clear delineation of events. Because handwritten call records did not follow a template, we were unable to determine if some information was medical history or symptom information, or if a number was a weight, heart rate, or blood pressure. Information in call records was sometimes inconsistent. Where duplicate call records were available, information about consumers and their usage of Metabolife 356 was sometimes presented differently in the different records of the same consumer call. In addition, Metabolife International officials told us that its nurses sometimes used several different terms to document the same type of adverse event. We found that 14,684 of the Metabolife International call records reported at least one adverse event. 
Ninety-two of these were for the serious adverse events identified in the proposed label warning for dietary supplements containing ephedra that FDA announced on February 28, 2003. Other adverse events reported included significant elevation of blood pressure, abnormal heart rhythm, loss of consciousness, and systemic rash. We cannot establish that any of the reported adverse events were caused by the use of Metabolife 356. We counted 92 reports of heart attack, seizure, stroke, or death—the serious adverse events identified in FDA’s proposed label warning for dietary supplements containing ephedra (see table 2). In its 1997 proposed rule on dietary supplements, FDA also identified other types of adverse events as serious or potentially serious. Table 3 shows our counts for almost all such events. The serious and potentially serious types of adverse events described in FDA’s June 4, 1997, proposed rule were reported to the agency prior to June 7, 1996. FDA officials report that some other types of adverse events not included in the table may be considered serious or potentially serious but had not been reported to FDA during the time period considered by its proposed rule. In addition, the 14,684 call records with health-related reports presented a broad range of types of adverse events. Many of the call records contained reports of jitters, insomnia, hair loss, bruising, menstrual irregularities, and sexual dysfunction, as well as vague references to events such as “side effect” or “felt sick.” Some reported blood in stool, blood in urine, or blood clots. There were also some reports of visits to emergency departments and hospital admissions. Some call records contained reports of diseases such as pulmonary embolus (a blockage of an artery in the lungs), multiple myeloma, and inflammation of heart tissue. We cannot establish that any of the adverse events reported in the Metabolife International call records were caused by the use of Metabolife 356. 
As we noted earlier, adverse event reports by themselves are generally not sufficient to establish that a health problem was caused by the use of a particular product. For example, for many adverse event reports it is difficult to rule out other possible explanations for the event— the event may have been caused by preexisting medical conditions, or by the concurrent use of prescription drugs, over-the-counter drugs, or other dietary supplements. In addition, the limited information available in the Metabolife International call records means that we cannot confirm that a particular adverse event occurred, much less identify a specific cause for it. All the reviews of the Metabolife International call records, including ours, counted reports of serious adverse events. None of the reviews reported identical tabulations of these events. For the set of adverse events that Metabolife International counted—heart attack, stroke, seizure, death, and cardiac arrest—our counts are similar to those of the other reviews (see table 4). In total, we counted 96 such events, Metabolife International counted 78, and the counts of the other reviews ranged from 65 to 107. There are several possible reasons for the slightly different counts of serious adverse events in the different reviews. First, the call records themselves are often difficult to understand and interpret. Second, not all of the reviews included the same set of call records, both because some were completed before all of the Metabolife International call records were released and because the reviews adopted different procedures for identifying and discarding duplicate records. Third, the reviews used different definitions of particular events or established different thresholds for categorizing a particular event. For example, we included reports of “convulsions” in our count of seizures, while some other reviews may not have. 
Specifically, the counts we report in table 4 for our review and the reviews by Metabolife International and Karch include reports of convulsions, while it is not clear if the other reviewers’ counts did. Similarly, we did not count as a report of a heart attack a call record that reported “heart attack?”, while at least one other review did. We provided a draft of this report to FDA and Metabolife International for their review. FDA asked us to clarify that it has not conducted its own review of the Metabolife International call records, that we only reviewed reports of adverse events contained in the Metabolife International call records, and that we did not review other reports of adverse events among users of Metabolife 356 that have been received by FDA. In addition, FDA pointed out that, when combined with other information, adverse event reports can help establish that an adverse event was caused by a particular health product. FDA’s comments are included as appendix IV. FDA also provided technical comments, which we incorporated as appropriate. In its comments, Metabolife International was primarily concerned about our use of the term “adverse events” to describe the health-related complaints that were reported in the call records we reviewed. We believe that our use of the term is accurate and consistent with its use by FDA and others in the field. Metabolife International also wanted us to clarify that, while it did identify some call records as containing references to types of specific adverse events that have been categorized as serious by others, it has not identified any call records as reporting “serious adverse events.” We have made revisions so as not to imply that Metabolife International labeled these events as serious adverse events. Metabolife International also made other comments, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to the Secretary of HHS, the Commissioner of FDA, and others who are interested. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7119. Another contact and major contributors to this report are listed in appendix V.

We reviewed call records and supplementary information voluntarily provided to us by Metabolife International to (1) determine the extent to which information was comprehensive, interpretable, and consistently recorded in the call records, and (2) count the number of call records reporting health-related problems associated with Metabolife 356 and how many of them were serious. During our review we removed duplicate call records and records that did not report health-related events. For each record, we recorded demographic information about the individual consumer and other details about the call record and the consumer, and we categorized the reported events. From August 2002 through December 2002, Metabolife International voluntarily provided to us 15,948 pages of documentation relating to reports of adverse events among consumers of Metabolife 356. Most of these records were from calls made to the company’s consumer health information phone line from May 1997 through July 2, 2002. Other records included e-mail messages and letters that had been sent to the company. Nurses on the staff of Metabolife International documented the calls to the consumer HealthLine in a variety of formats. 
The records included handwritten notes on a page, typed and handwritten letters, forms with handwritten entries, e-mails, and printed versions of records that had been entered into a database developed by Metabolife International. Many kinds of forms were used to record calls, ranging from simple forms with few spaces or check boxes to full-page forms with multiple boxes for consumer and event-related information. Metabolife International officials told us that health complaints that were noted on product return forms that it received were not in the call records provided to us. Metabolife International also provided to us copies of 46 redacted medical records and a list of corresponding call records. After reviewing these records we found 8 that were not associated with other call records. Five of these records contained enough information to determine the nature of the adverse event and were coded in the same way as other call records. The other medical records were used as additional sources of information for documenting the events and consumer information reported in their corresponding records. While most pages of call records contained information about a single call, some included information about multiple calls on the same page, other calls spanned multiple pages, and some did not include any report of adverse events. Records that spanned multiple pages were often letters to the company, some of which were sent with additional information (such as medical bills). Records that did not report an adverse event were either incomplete printouts of other records from the database, product questions, complaints about not losing weight, or reports of consumer satisfaction. As a result, the number of pages of call records that we received from Metabolife International does not correspond to the number of reports of adverse events. 
The call records and medical records we received were redacted by Metabolife International to remove personal identifying information such as name, phone number, address, fax number, and e-mail address to protect consumer privacy. Metabolife International officials told us that in the process of redacting the records, some relevant adverse event information was also inadvertently removed. Metabolife International officials told us that there were duplicate call records in the set of call records they provided to us. Some duplicate reports were photocopies of the same call record. In other cases, there were multiple versions of the same call record in different formats. Metabolife International officials reported these multiple versions were the result of nurses taking handwritten notes and later entering the same information directly into a database established in September 1999. Metabolife International gave us lists of those call records it believed to be duplicates. Over the course of our review, it identified more than 2,200 records for which there were at least one duplicate. Metabolife International officials reported that they identified the duplicates on the basis of the name of the consumer. Duplicates may have included subsequent calls about different events from the same individual. We examined the duplicate call records identified in the lists provided throughout our review by Metabolife International. Because identifying information was removed, we examined the date of the call record, demographic information about the consumer (such as age, height, weight, the amount of the product used, and duration of use), and event details to determine if they were duplicate records. Where this information was the same or similar, we considered the records to be duplicates and excluded the extra records from our review. We did, however, include in our analysis any additional information that appeared on the duplicate records. 
For example, if one version included height and another weight, we recorded both of these. We agreed with Metabolife International that most of the more than 2,200 records it identified as duplicates were, in fact, duplicates. However, we did not exclude records that represented multiple calls from the same consumer for different events if the dates on the call records differed by more than a few days or the symptoms were clearly different. During the course of our review, we also identified duplicates not previously identified by Metabolife International, including photocopied records and records that used identical language in event descriptions. We do not know whether all duplicate call records were identified. We also excluded from our analysis records in which there was no health complaint or the health complaint could not be clearly determined. Similarly, we excluded call records that reported third-hand knowledge of adverse events (such as a friend of a friend who experienced an adverse event). In addition, we did not count call records that clearly referred to nutrition bars or other ephedra-free products manufactured by Metabolife International. In total, we determined that the 15,948 pages of documentation provided by Metabolife International contained 14,684 separate health-related call records. We classified the adverse events reported in each call record and entered the appropriate codes into a database. We classified the reported adverse events as either one of the events FDA identified as serious in its February 28, 2003, announcement regarding a proposed label warning for dietary supplements containing ephedra (heart attack, stroke, seizure, or death) or as an other adverse event. All serious events reported within a particular call record were counted. Therefore, an individual could have reported multiple serious adverse events, though this happened in only a few records. 
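The duplicate-matching approach described above (comparing call dates, the available demographic details, and event descriptions on redacted records) can be sketched in a few lines. This is only an illustrative reconstruction under stated assumptions, not GAO's actual procedure: the field names and the few-day date tolerance are assumptions.

```python
from datetime import date

# Illustrative sketch of the duplicate-matching heuristic described above.
# Field names and the few-day tolerance are assumptions, not GAO's code.
def likely_duplicates(rec_a, rec_b, max_day_gap=3):
    """Treat two redacted call records as duplicates when their dates fall
    within a few days and the available demographic and event details match."""
    if abs((rec_a["date"] - rec_b["date"]).days) > max_day_gap:
        # Calls far apart may be separate events reported by the same consumer.
        return False
    demo_fields = ("age", "height", "weight", "amount_used", "duration_of_use")
    # Compare only fields present on both records; redaction removed some data.
    demographics_match = all(
        rec_a.get(f) == rec_b.get(f)
        for f in demo_fields
        if rec_a.get(f) is not None and rec_b.get(f) is not None
    )
    return demographics_match and rec_a["event"] == rec_b["event"]

a = {"date": date(1999, 9, 1), "age": 34, "weight": 160, "event": "palpitations"}
b = {"date": date(1999, 9, 2), "age": 34, "weight": 160, "event": "palpitations"}
print(likely_duplicates(a, b))  # → True
```

As the report notes, such a heuristic can both miss duplicates and merge distinct calls, which is one reason the different reviews reached slightly different counts.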
For other adverse events, we documented whether the call record reported one or more adverse events. We did not count the number of reports for every type of event reported in the record. We did, however, count the number of all but 1 of the 24 other types of adverse events that were described as serious or potentially serious in FDA’s June 4, 1997, proposed rule on dietary supplements containing ephedrine alkaloids. The set of events identified by FDA in the proposed rule is not an exhaustive list of the adverse events that may be associated with the use of dietary supplements containing ephedrine alkaloids. FDA officials told us that some other types of adverse events may be considered serious or potentially serious but had not yet been reported to FDA during the time period considered by its proposed rule. We did not apply medical judgment in the process of identifying and classifying events. Our classification of events in the call records was based solely on the words and phrases therein; we did not diagnose a consumer’s condition or otherwise interpret the information presented. For example, if a report said “poss. heart attack,” “heart attack symptoms,” or “heart attack?”, we did not classify it as a heart attack since it was not clear that a heart attack was reported. Also, while we counted “blood pressure 210/120” as an instance of significantly elevated blood pressure because it reported measurements greater than 160 systolic or 100 diastolic, we did not place in the same category call records that reported only “high blood pressure” because they did not contain the specific measurements needed for that determination. We used MEDLINE Plus Medical Encyclopedia definitions to further clarify individual symptoms related to these categories. We also did not attempt to determine whether Metabolife 356 caused the reported adverse events. Adverse events about many types of products regulated by FDA are required to be reported to the agency. 
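The literal, no-interpretation classification rules described above (not counting hedged reports such as "heart attack?", counting convulsions as seizures, and requiring explicit readings above 160 systolic or 100 diastolic for significantly elevated blood pressure) can be sketched as follows. The keyword handling is an illustrative assumption, not GAO's actual coding scheme.

```python
import re

# Illustrative sketch of the literal classification rules described above;
# the specific keywords are assumptions, not GAO's actual coding scheme.
def classify(text):
    t = text.lower()
    # Hedged reports ("poss. heart attack", "heart attack?", "heart attack
    # symptoms") are not counted as heart attacks.
    if "heart attack" in t and not ("poss" in t or "?" in t or "symptom" in t):
        return "heart attack"
    if "seizure" in t or "convulsion" in t:  # convulsions counted as seizures
        return "seizure"
    # "High blood pressure" alone is not enough; explicit measurements
    # greater than 160 systolic or 100 diastolic are required.
    m = re.search(r"(\d{2,3})\s*/\s*(\d{2,3})", t)
    if m and (int(m.group(1)) > 160 or int(m.group(2)) > 100):
        return "significantly elevated blood pressure"
    return "other adverse event"

print(classify("blood pressure 210/120"))  # → significantly elevated blood pressure
print(classify("heart attack?"))           # → other adverse event
```

A word-matching rule like this mirrors the report's point that classification was based solely on the words and phrases in the records, with no medical judgment applied.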
Such products include human drugs, biologics, animal drugs, animal feeds containing animal drugs, medical devices, infant formulas, and radiation-emitting devices. There are, however, no reporting requirements for adverse events associated with other products regulated by FDA, including food and food additives, dietary supplements, cosmetics, or color additives. (See table 5.) Carolyn Feis Korman, Chad Davenport, Julian Klazkin, and Roseanne Price also made major contributions to this report.
Dietary supplements containing ephedra, such as Metabolife 356, have been associated with serious adverse health-related events. In a February 28, 2003, announcement, the Food and Drug Administration (FDA) proposed that dietary supplements containing ephedra include a statement on their label warning that "Heart attack, stroke, seizure, and death have been reported after consumption of ephedrine alkaloids." GAO was asked to review health-related call records that Metabolife International--the manufacturer of Metabolife 356--collected from consumers from May 1997 through July 2002. Most of the records were from calls to a consumer phone line the company maintained. Metabolife International voluntarily provided the call records to GAO. Specifically, GAO (1) examined the extent to which consumer information in the call records was comprehensive, interpretable, and consistently recorded, (2) counted the number of call records reporting types of adverse events that FDA had identified in 1997 as serious or potentially serious, and (3) compared GAO's findings to those of six other reviews of the call records, including one by Metabolife International. Adverse event reports generally are not sufficient on their own to establish that reported problems are caused by the use of a particular product, but can signal potential health problems that deserve investigation. The information in the Metabolife International call records was limited. Call records were sometimes difficult to understand, and consumer information was not consistently recorded. In some cases, the evidence for a report of an adverse event was limited to a single word on the record. Most call records also did not record complete information about potentially relevant items such as the consumer's age, sex, weight, and height. Information about both the amount of product used and the duration of use was recorded for 60 percent of the call records. Handwritten call records were difficult to read and understand. 
By GAO's categorization, 14,684 of the call records contained reports of at least one adverse event. GAO found that there were 92 reports of the serious adverse events identified in FDA's proposed label warning--18 reported heart attacks, 26 reported strokes, 43 reported seizures, and 5 reported deaths. Other types of adverse events identified as serious or potentially serious by FDA in 1997 that were reported in the call records included significant elevation in blood pressure, abnormal heart rhythm, loss of consciousness, and systemic rash. Because of the inherent limitations of adverse event reports and the incomplete nature of these call records, it cannot be established from the information available to GAO that the adverse events reported were caused by Metabolife 356. All of the reviews of Metabolife International call records--one by Metabolife International; three by consultants commissioned by Metabolife International; one by the minority staff of the Committee on Government Reform, House of Representatives; one by the RAND Corporation; and one by GAO--found reports of serious adverse events, although none reported identical results. For the set of adverse events counted by Metabolife International--heart attack, stroke, seizure, death, and cardiac arrest--GAO's counts were similar to those of the other reviews. GAO counted 96 such reports, and the counts of the other reviews ranged from 65 to 107. In commenting on a draft of this report, FDA discussed the value of reports of adverse events in helping to understand the causes of such events.
Recently, a body of research has shown that teacher quality contributes significantly to improving student performance. For example, a 1996 study by Sanders and Rivers examined the effect of teacher quality on academic achievement and found that children assigned to effective teachers scored significantly higher in math than children assigned to ineffective teachers. Research has also shown that many teachers, especially those in high-poverty and rural districts, were not certified and lacked knowledge of the subjects they taught. For example, a report from The Education Trust found that in every subject area, students in high-poverty schools were more likely than other students to be taught by teachers without even a minor in the subjects they teach. States are responsible for developing and administering their education systems and most have delegated authority for operating schools to local governments. States and local governments provide most of the money for public elementary and secondary education. In 2002, Education reported that 49 percent of the revenue for education was from state sources, 44 percent from local sources, and 7 percent from federal sources. State and local funds therefore cover most of the major expenses, such as teacher salaries, school buildings, and transportation. Although the autonomy of districts varies, states are responsible for monitoring and assisting their districts that, in turn, monitor and assist their schools. The federal government plays a limited but important role in education. The Department of Education’s mission is to ensure equal access to education and promote educational excellence throughout the nation by, among other things, supporting state and local educational improvement efforts, gathering statistics and conducting research, and helping to make education a national priority. 
Education provides assistance to help states understand the provisions or requirements of applicable laws, as well as overseeing and monitoring how states implement them. With the passage of the No Child Left Behind Act, on January 8, 2002, the federal government intensified its focus on teacher quality by establishing a requirement in the act for teachers across the nation to be “highly qualified” in every core subject they teach by the end of the 2005-06 school year. While the act contains specific criteria for highly qualified teachers by grade and experience levels, in general, the act requires that teachers: (1) have a bachelor’s degree, (2) have state certification, and (3) demonstrate subject area knowledge for each core subject they teach. Table 1 lists the specific criteria by grade and experience levels as defined in the act. For Title II, Part A of the act, Congress appropriated $2.85 billion to the Teacher and Principal Training and Recruiting Fund in fiscal year 2002—about $740 million more than states received in fiscal year 2001 under the previous two programs that it replaced—the Eisenhower Professional Development and Class Size Reduction programs. The purpose of the fund is to increase student academic achievement by providing support for states and districts to implement authorized activities cited in Title II to help them meet the requirement for highly qualified teachers. (See apps. II and III for state and district authorized activities.) States had to complete an application in order to receive funds. All applications were due by June 2002, and states received the funds by August 2002. The funds were to be distributed according to the formula defined in the act. Specifically, states and districts received an amount equal to what they received for fiscal year 2001 under the two previous programs. 
The additional $740 million was distributed to states and districts based on the number of families with children ages 5 to 17 who had incomes below the poverty threshold and the relative population of children ages 5 to 17. The act requires states to ensure that districts target funds to those schools that have the highest number of teachers who are not highly qualified, the largest class sizes, or have been identified as in need of improvement. To help states understand and implement the new law, Education took a number of actions. The department established a Web site, developed an application package for the formula grant program, issued draft guidance, and held informational conferences for states and districts. Figure 1 summarizes Education’s assistance to states. In June 2002, Education issued draft guidance entitled “Improving Teacher Quality State Grants,” which has served as Education’s principal form of assistance to states. In December of 2002, Education expanded and modified the draft guidance and issued final regulations on NCLBA that included some criteria related to the requirement for highly qualified teachers. Education does not plan to issue a final version of its draft guidance; instead, the draft includes the statement that it “should be viewed as a living document” that will be updated (1) as new questions arise, (2) if there is a change in the program statute that requires modification, or (3) when Education determines that more information would be helpful. In-depth discussions with officials in 8 states revealed that they could not determine the number of highly qualified teachers with accuracy because of one or more factors. All state officials said they did not know the criteria for some of their teachers because Education’s draft guidance changed and was not complete. Officials also did not have all the information they needed to develop methods to evaluate subject area knowledge for their current teachers. 
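The distribution described earlier, in which each state is held harmless at its fiscal year 2001 level and the additional appropriation is allocated by shares of children in poverty and of the overall child population, can be sketched as a small calculation. The 50/50 weighting between the two factors and all dollar and population figures here are assumptions for illustration only; the act specifies the actual formula.

```python
# Illustrative sketch of the hold-harmless-plus-formula distribution described
# above. The 50/50 weighting and all figures are assumptions, not the act's
# actual formula parameters.
def allocate(fy2001_base, poverty_counts, child_pops, new_money):
    """Each state keeps its fiscal year 2001 amount; the new appropriation is
    split by shares of children in poverty and of the child population."""
    total_pov = sum(poverty_counts.values())
    total_pop = sum(child_pops.values())
    grants = {}
    for state in fy2001_base:
        share = (0.5 * poverty_counts[state] / total_pov
                 + 0.5 * child_pops[state] / total_pop)
        grants[state] = fy2001_base[state] + new_money * share
    return grants

base = {"A": 40.0, "B": 60.0}        # fiscal year 2001 amounts ($ millions)
pov = {"A": 120_000, "B": 80_000}    # children ages 5 to 17 below poverty
pop = {"A": 900_000, "B": 1_100_000} # children ages 5 to 17
print(allocate(base, pov, pop, 740.0))
```

The hold-harmless term is why no state received less than it had under the two predecessor programs, while the formula term directs the new money toward states with more children, and more children in poverty.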
Accordingly, officials in all of the states interviewed and nearly all surveyed said they needed complete and clear guidance before they could comply with the law. Most of the states we visited also did not have data systems that could track teacher qualifications by core subject taught, which they would have to do to ensure that teachers were teaching only those subjects for which they had demonstrated subject area knowledge. Finally, many state officials we visited were reluctant to say that their certified teachers might not be highly qualified. During our review, Education changed its criteria for teachers who were in alternative certification programs and it reissued the draft guidance to qualify only teachers in certain programs. The revised draft guidance stated that only those teachers enrolled in alternative certification programs with specific elements, such as teacher mentors, would be considered highly qualified. As a result, state officials had to recount this group of teachers by determining which alternative certification programs met the standard and then which teachers participated in those programs. In one state we visited, there were about 9,000 teachers in alternative certification programs and all were considered highly qualified until the revised draft guidance was issued. As of May 2003, an official said she was still trying to determine the number of teachers who were highly qualified. Also during our review, state officials were uncertain about the criteria for special education teachers. The draft guidance that was available during most of our visits did not address special education teachers. As a result, state officials could not know, for example, whether a special education teacher teaching math and reading would have to demonstrate subject area knowledge in both or neither of the subjects. 
For school year 1999-2000, special education teachers represented about 11 percent of the national teacher population, so that, on average, state officials were unable to determine whether at least a tenth of their teachers met the highly qualified criteria. In some districts, special education teachers represented a larger portion of the workforce. For example, in one high- poverty urban district that we visited, special education teachers were 21 percent of their teachers. Education issued final Title I regulations on December 2, 2002, with an appendix that discussed the highly qualified requirements for special education teachers, among other things. However, the requirements are not discussed in the federal regulations nor are they discussed in the Title II draft guidance that was issued December 19, 2002. In addition, as of March 2003 some officials still had questions about the requirements. Perhaps because the guidance was issued in an appendix, it was not given the prominence needed to ensure that all officials would be aware of the information. Furthermore, neither Education’s draft guidance nor its regulations provided more information than the law to help state officials develop methods other than tests to evaluate their current teachers’ subject area knowledge. The law allows states to use a “high, objective uniform state standard of evaluation” instead of a test. Education’s draft guidance repeated the language of the law, but provided no further interpretation. In addition, Education officials said they would review states’ implementation of this provision when they conduct compliance reviews and then determine if the state evaluation is in compliance with the law. State officials said they needed more information, such as examples, to be confident of what Education would consider adequate for compliance with the law. 
State officials prefer evaluations to tests, according to an official at the Council of Chief State School Officers (CCSSO), because they expect evaluations to be less expensive, more flexible, and more acceptable to teachers and unions. Such evaluations might be done through classroom observations, examination of portfolios, and peer reviews. In March 2003, CCSSO held a conference attended by about 25 state officials and several Education officials to discuss the implementation of state evaluations. At that conference, state officials said Education’s lack of specificity was a particular problem for evaluating middle and high school teachers who had not demonstrated subject area knowledge. According to our survey data, 23 of 37 state officials said they would have difficulty fulfilling the highly qualified requirement for middle school teachers and 14 anticipated difficulty for high school teachers. According to district survey results, 20 percent anticipated difficulties in meeting the federal criteria for middle school teachers and 24 percent for high school teachers. Furthermore, as table 2 shows, a significantly higher percentage of high-poverty districts reported they would have greater difficulty fulfilling the requirement for teachers, especially at the middle and high school levels, than would low-poverty districts. State officials from the 8 states we visited said they could not determine the number of highly qualified teachers because the draft guidance was changing, not clear, or incomplete. Most (32 of 37) state officials responding to our survey said they needed clear and timely guidance to help them meet the law. Officials from 7 of the 8 states we visited told us they did not have data systems that would allow them to track teachers’ qualifications according to the federal criteria by every subject taught. 
Officials in one state projected that it would take at least 2 years before the state could develop and implement a system to track teachers by the federal criteria. State officials we visited said that, since their state certifications had not required some teachers to demonstrate subject area knowledge as required in the federal criteria, their information systems did not track such information. In written comments to our survey, for example, one official said, “Questions are impossible to answer at this point because we not have finished the identification of those who need to be tested or evaluated.” Another respondent wrote that the data system “was designed years ago for state certification purposes… has not yet been updated to include all NCLBA criteria for teachers.” Other state officials also told us during our visits and through survey comments that their state certifications did not always require teachers to demonstrate subject area knowledge, so they did not have information on many teachers’ qualifications for this criterion. Another state official wrote, “[We] do not have data on teachers who were grand fathered in before 1991 or from out of state… who do not have subject matter competency.” Given the cost and time they thought it would take, some state officials expressed reservations about changing their data systems before Education provided complete guidance. Officials in 6 of the 8 states visited were reluctant to report that their certified teachers might not be highly qualified. Three of these officials equated their state certification with the federal criteria for a highly qualified teacher even though they differed. They expressed a reluctance to say that their state certification requirements did not produce a highly qualified teacher even though the requirements did not match all the federal criteria, such as demonstration of subject area knowledge. 
Additionally, state officials expressed concern about the morale of teachers who are state certified but who would not meet the federal criteria. They were also concerned about how teachers and unions would react to testing already certified teachers. For example, in 5 states we visited officials told us that the unions in these states objected to the testing of certified teachers. Many state officials responding to our survey reported that teacher salary issues and teacher shortages were hindrances. State officials also identified other conditions such as few programs to support new teachers, lack of principal leadership, teacher training, and union agreements. District officials also cited teacher salary and teacher development issues as conditions that hindered them. Our district survey also shows that significantly more high-poverty districts reported some conditions as hindrances than low-poverty districts, and rural district officials we visited cited hindrances specific to their small size and isolated locations. In our state survey, officials indicated that they needed more information from Education on professional development programs, best practices, and developing incentives for teachers to teach in high-poverty schools. Many state officials responding to our survey reported that pay issues hindered their ability to meet the requirement to have all highly qualified teachers. These issues included low salaries, lack of incentive pay programs, and a lack of career ladders for teachers. For example, 32 of 37 state respondents said that low teacher salaries compared with other occupations were a hindrance. Officials we visited said that because of the low salaries it has been more difficult to recruit and retain some highly qualified teachers, especially math and science teachers. A bachelor’s degree in math or science opens the door to several other occupations, such as computer scientist and geologist. 
During the late 1990s, there was an increase in demand for workers with math and science backgrounds, especially in information technology occupations. Between 1994 and 2001, the number of workers employed in the mathematical and computer sciences increased by about 77 percent, while the number of teachers increased by about 28 percent and total employment increased by about 14 percent. Furthermore, the math and science occupations have generally paid higher salaries than teaching positions. The U.S. Department of Labor’s Bureau of Labor Statistics data indicate that in 2001 average weekly earnings were $1,074 for mathematical and computer scientist positions and $730 for teachers. Some research shows that teacher salary is only one of many factors that influence teacher recruitment and retention. For example, the American Association of School Administrators explained the relationship between pay and working conditions in a report on higher pay in hard-to-staff schools. The report stated, “How money matters becomes much clearer if salary is viewed as just one of many factors that employees weigh when assessing the relative attractiveness of any particular job, such as opportunities for advancement, difficulty of the job, physical working conditions, length of commute, flexibility of working hours, and demands on personal time. Adjusting the salaries upward can compensate for less appealing aspects of a job; conversely, improving the relative attractiveness of jobs can compensate for lower salaries.” Many state survey respondents also cited teacher shortages as a hindrance. Specifically, 23 of the 37 state officials reported teacher shortages in high-need subject areas—such as math, science, and special education. Additionally, 12 state officials reported a shortage in the number of new highly qualified teachers in subject areas that are not high need, and 12 reported that having few alternative certification programs hindered their efforts. 
Education experts have debated the causes and effects of teacher shortages. Some experts argue that the problem is not in the number of teachers in the pool of applicants but in their distribution across the country. Others argue that poor retention is the real cause of teacher shortages. As for alternative certification programs, they were established to help overcome teacher shortages by offering other avenues for people to enter the teaching profession. However, in 1 state we visited officials said the success of these programs had been mixed because the content and length of the programs varied and some alternative certification teachers were better prepared than others. Although states have been facing teacher shortages in some subject areas for years, the new requirement for highly qualified teachers could make it even more difficult to fulfill the demand for teachers. The new law requires states to ensure that teachers only teach subjects for which they have taken a rigorous state test or evaluation, completed an academic major or graduate degree, finished course work equivalent to such degrees, or obtained advanced certification or credentialing in the subjects. Previously, states allowed teachers to teach subjects without such course work or credentials. From its Schools and Staffing Survey, the National Center for Education Statistics, within the Department of Education, reported that in 1999-2000, 14 to 22 percent of students in middle grades and 5 to 10 percent of high school students taking English, math, and science were in classes taught by teachers without a major, minor, or certification in the subjects they taught. Also, the report indicated that in the high school grades, 17 percent of students enrolled in physics and 36 percent enrolled in geology/earth/space science classes were taught by out-of-field teachers. Some states also cited several other conditions that might hinder their ability to meet the requirement for highly qualified teachers. 
For example, 13 of the 37 state respondents reported few programs to support new teachers, and 9 reported large classes as hindrances. State respondents also cited work environment factors such as teacher performance assessments, a lack of principal leadership, and lack of school supplies and equipment as hindrances. See table 3 for more information on hindrances reported by state officials. Additionally, 7 state officials who responded to our survey cited union agreements as a hindrance. Officials in 5 states that we visited said that the teachers’ unions objected to testing currently certified teachers for subject area knowledge, and officials in 2 of these states also said that current teachers might leave rather than take a test. An official representing the American Federation of Teachers (AFT), an organization that represents teachers, school support staff, higher education faculty and staff, among others, said that AFT supports the federal definition for highly qualified teachers and incentive pay for teachers in high-need subject areas and that certified teachers should have a choice between taking a test and having a state evaluation to determine subject area knowledge. The National Education Association, an organization with members who work at every level of education, issued an analysis of the NCLBA that identified several changes it believes should be made in the law, including clarifying the requirement for highly qualified teachers. The union officials we spoke with from 2 states we visited said they also support the requirement for highly qualified teachers but expressed concerns about how their states would implement the legislation. One state union official said the current state process for certification requires multiple tests—more than is required in the legislation—and the union is concerned that the state will collapse the testing and streamline the teacher preparation process as part of its changes to meet the requirement. 
The union official from the other state said that his union was concerned because the state’s approach for implementing the requirement for highly qualified teachers had become a moving target, causing frustration for teachers. School district estimates from our survey show that, as state respondents also reported, salary issues hinder districts’ efforts to meet the requirement for highly qualified teachers. Almost 60 percent of district officials cited low teacher salaries compared to other occupations as a hindrance, with a significantly higher number of high-poverty than low-poverty district officials reporting this as a hindrance. During our site visits to 4 rural districts, officials said that their salaries could not compete with salaries offered in other occupations and locations. One official said that pay in the rural districts was low compared to teacher salaries in surrounding states. Both state and district officials also said that these salary conditions affect the recruitment and retention of highly qualified teachers. Our survey estimates also show that conditions related to teacher development were hindering districts’ ability to meet the highly qualified teacher requirement. The conditions reported by districts included (1) weak training for teachers in the use of technology (28 percent), (2) few alternative certification programs (18 percent), and (3) professional development programs that are not of sufficient duration to improve teacher quality (23 percent). Weak training programs can leave teachers unprepared to deal with all the challenges of teaching and lead to job dissatisfaction. Table 4 provides estimates of the percentages of districts reporting conditions that hinder their ability to meet the requirement for highly qualified teachers. 
While the rankings of most of the hindrances reported by districts and states were similar, three conditions were reported among the top third of hindrances for districts but among the bottom third for states. Specifically, these conditions were (1) alternative certification programs do not provide teachers with adequate teaching skills, (2) teacher preparation programs do not provide teachers with adequate subject matter expertise, and (3) training for teachers in the use of technology is weak. The first two of these conditions relate to programs that are usually responsibilities of the state departments of education. States or districts can address the third condition, technology training. These conditions indicate areas in which states and districts can work together to improve programs and help meet the requirement for highly qualified teachers. A significantly higher number of high-poverty districts than low-poverty districts identified some conditions as hindrances. As table 5 shows, in addition to teacher shortages and pay issues, a larger percentage of high-poverty districts cited few programs to support new teachers and few alternative certification programs, among others, as hindrances to meeting the requirement. During our site visits, officials from high-poverty districts told us they had great difficulty retaining teachers. For example, officials in one district said that although the district provided training for new teachers in the skills they needed, these teachers became more marketable after they completed the training and often left for higher-paying teaching positions. According to these officials, the schools in this district did not always benefit from the district’s training programs. High-poverty district officials also said they could not compete with surrounding, wealthier districts in teacher pay. 
Officials in these districts and at the American Association of School Administrators also said that some unions do not support the use of incentive pay for high-poverty schools because they believe that salary scales should be equal for all schools within a district. Rural district officials we visited, as well as those who provided survey comments, said they faced unusual hindrances because some of their districts were very small, were isolated, or had only one or two teachers in total at some schools. During our site visits, some officials from rural districts also said that they were facing teacher shortages because not enough teachers were willing to teach in rural districts. For example, one official in a large, rural state said that the state had only one university, which makes it difficult for teachers to obtain further course work to meet the federal criteria for subject area knowledge. Since many teachers in this state’s rural districts had to teach more than one core subject, with limited access to subject area training, they might not meet the highly qualified criteria for all subjects they teach. One survey respondent also wrote, “Rural schools have to assign teachers to several subject areas at secondary level. We do not have large numbers of students, and teachers have to wear more than one hat. Rural schools are also a long way from colleges and to require licensure in every subject they teach is ludicrous.” In a 2001 report to Congress, Education estimated that 84 percent of 4-year institutions would offer distance education courses in 2002. Such courses may help address this hindrance. As districts work to address the conditions that affect their ability to meet the new federal requirement, they look to their state officials for guidance and technical assistance. In turn, states look to Education for help. 
Many of the hindrances that state and district officials reported related to conditions that they could address, such as teachers’ salaries, the number of alternative certification programs, and certification requirements. However, states indicated they needed some additional information and assistance from Education. At least half of the 37 state respondents reported needing (1) information or other assistance to meet the requirement that professional development programs be based on recent scientific research and be of sufficient duration to have an effect on teacher quality, (2) information on best practices in the area of teacher quality, and (3) assistance in developing incentives for teachers to teach in high-poverty schools. Education’s 2002-07 strategic plan identifies several steps it will take to work with states. Specifically, the strategies listed under the plan’s goal for improving teacher and principal quality include supporting professional development in research-based instruction and encouraging innovative teacher compensation and accountability systems. Additionally, in December 2002, Education reorganized and established a new office to administer the Title II program. To help meet the requirement for highly qualified teachers, state officials planned to spend most of their Title II funds on professional development activities, and district officials planned to spend a majority of their Title II funds on recruitment and retention activities. State and district officials planned to spend much larger amounts of other federal, state, and local funds than Title II funds on the activities authorized in the act. Generally, state and district officials told us they were continuing activities from previous years. The survey data also indicated high-poverty districts relied more on Title II funds for recruitment and retention activities than low-poverty districts. 
In addition, while the act requires districts to target their Title II funds to schools that meet certain criteria, until district officials know the number of highly qualified teachers and where they are located, they cannot fully comply with this requirement. Generally, state educational agencies could use up to 2.5 percent of the state’s Title II funds for authorized state activities. Twenty-four state officials responding to our survey planned to spend about 65 percent of their Title II funds on professional development activities to develop and support highly qualified teachers and principals. For example, professional development activities could help teachers enhance their subject area knowledge and complete state licensing requirements to meet the criteria for highly qualified teachers. During our site visits, state officials described their professional development activities as seminars, conferences, and various instructional initiatives. For example, in one state we visited, officials planned to hold a workshop to provide middle and high school math teachers with technology training so that they could incorporate interactive Web sites in their instruction. Generally, state officials said they planned to use Title II funds to continue activities that were begun in previous years. While professional development activities were to receive the largest share of funds, survey results show state officials planned to also spend Title II funds on other activities cited in the act. Officials in 28 states planned to spend about 18 percent on technical assistance activities, such as providing information about the requirement for highly qualified teachers to districts via the state Web site. Certification activities received the smallest percentage of Title II funds, at 2 percent. These activities include efforts to promote certification reciprocity with other states and efforts to establish, expand, or improve alternative routes for certification. (See fig. 2.) 
State officials reported they planned to spend much larger amounts of other federal and state funds than Title II funds on nearly all of the authorized Title II activities. For example, states reported that 85 percent of the total funds they planned to spend on professional development activities would come from other federal and state funds. The one exception was technical assistance activities, where Title II funds accounted for 77 percent of the total. (See fig. 3.) Providing technical assistance to districts is an important role for states. In our visits to districts, several officials said they needed more information and technical assistance from their state to understand and implement the law. Districts received about 95 percent of their state’s Title II funds for authorized activities. Based on our survey, district officials planned to spend an estimated 66 percent of their Title II funds on recruitment and retention activities and 34 percent on activities related to professional development. Class size reduction activities were the largest funded recruitment and retention activity and accounted for 56 percent of total Title II funds. In a majority of our site visits we learned that district officials used these funds to hire additional highly qualified teachers to continue activities developed under the previous Class Size Reduction Program. Class size reduction activities may help improve teacher retention because, according to an Education report, teachers in small classes spend less time on classroom management and more time providing instruction, thus raising the teacher’s level of job satisfaction. While class size reduction activities can be seen as a retention tool, they may also increase the number of highly qualified teachers that need to be hired. This may be a problem for some districts and states. 
In fact, officials in one large state we visited said class size reduction activities presented a challenge by increasing the number of classes not being taught by a highly qualified teacher. Additionally, district officials in our site visits said that they implemented or planned to implement a broad range of professional development activities. For example, one district had a teacher-coach program for its math and science teachers. This program used senior teachers as full-time coaches to assist less experienced teachers with instructional strategies and curriculum preparation. Other programs focused on math and reading, varied instructional strategies for different types of students, and use of technology. District officials in our site visits said most activities were in place prior to the act. While all districts spent more on recruitment and retention activities than professional development, there were differences between high- and low-poverty districts. From our survey, we estimate that high-poverty districts planned to spend a significantly larger percentage of Title II funds on recruitment and retention and a smaller percentage on professional development activities than low-poverty districts. (See table 6.) From our survey, we estimated all districts planned to spend much larger percentages of other federal, state, and local funds than Title II funds on authorized activities but in high-poverty districts the share of the funds was lower. Overall, 80 percent of the total funds districts planned to spend on professional development activities came from other federal, state, and local funds. Title II funds represented a larger percentage of total funds spent on authorized activities for high-poverty districts than low-poverty districts. For example, in high-poverty districts Title II funds were 48 percent of the funds they planned to spend for recruitment and retention activities compared to 15 percent in low-poverty districts. 
There may be several reasons for these differences. For example, Title II allocated more funds to those districts with more high-poverty families, and low-poverty districts may have had more local funds to contribute to the total. Figure 4 shows the Title II percentage of total funds for professional development activities and recruitment and retention activities, for all, high-poverty, and low-poverty districts. A majority of district officials said they planned to fund activities that were begun in previous years. We estimated about one-third of all districts (34 percent) were targeting their Title II funds as required by the act. The act requires districts to target funds to those schools (1) with the highest number of teachers who are not highly qualified, (2) with the largest class sizes, or (3) in need of improvement. There was little difference between the percentages of high- and low-poverty districts that targeted their funds or between urban and rural districts. For example, 29 percent of high-poverty districts and 30 percent of low-poverty districts reported targeting some of their Title II funds. Additionally, some district officials we visited said they did not target funds according to the criteria listed in the act but that they targeted funds in other ways such as to support math and science programs for teachers and for administrative leadership programs. It may be too early for district officials to fully implement this targeting requirement. Until they know the true number of teachers who are highly qualified, they cannot target the schools with the highest numbers of teachers who are not highly qualified. Education officials have had to interpret and help states implement many new requirements established by the NCLBA, including the highly qualified teacher requirement. 
During this first year of implementation, state officials were still determining how they could assess whether their teachers met all the criteria and identifying steps they needed to take to meet the new requirement. Generally, state and district officials continued to be challenged by many longstanding hindrances and they continued to fund activities from previous years. Education issued regulations and draft guidance to help states begin to implement the requirement for highly qualified teachers and has plans to help states with some of their challenges. However, state officials need more assistance from Education, especially about methods to evaluate current teachers’ subject area knowledge. Without this information state officials are unsure how to assess whether their current teachers meet the highly qualified requirement. This would also help them accurately determine the number of teachers who are highly qualified and take appropriate steps, such as deciding on which activities to spend Title II funds and targeting Title II funds to schools with the highest numbers of teachers who are not highly qualified. It is important that states have the information they need as soon as possible in order to take all necessary actions to ensure that all teachers are highly qualified by the 2005-06 deadline. In order to assist states’ efforts to determine the number of highly qualified teachers they have and the actions they need to take to meet the requirement for highly qualified teachers by the end of the 2005-06 school year, we recommend that the Secretary of Education provide more information to states. Specifically, information is needed about methods to evaluate subject area knowledge of current teachers. We received written comments on a draft of this report from Education. These comments are reprinted in appendix IV. 
In response to our recommendation related to requirements for special education teachers, Education stated that the appendix of the Title I Final Regulations clarifies how the highly qualified requirements apply to special education teachers. Consequently, we modified the report to reflect this information and we withdrew this recommendation. Education indicated it plans to take steps to address our recommendation on the need for information about methods to evaluate subject area knowledge of current teachers. Education stated that it will continue to work with state officials and will actively share promising strategies and models for “high objective uniform State standard of evaluation” with states to help them develop ways for teachers to demonstrate subject area competency. Also, Education commented that it views a “one-size-fits-all” approach to addressing many of the issues raised in the report as undesirable because states and districts will have to meet the requirement for highly qualified teachers in a manner that is compatible with their teacher certification, assessment, and data collection processes. Education stated that it will provide assistance wherever possible to help states meet the requirement. We generally agree that this is an appropriate approach. Additionally, Education provided technical comments and we made changes as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. Copies will be made available to other interested parties upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please call me at (202) 512-7215. Key contributors are listed in appendix V. 
In conducting our work, we administered a Web survey to the 50 states and the District of Columbia, and a separate Web survey to a nationally representative sample of 830 school districts that included strata for high-poverty, low-poverty, rural, and urban districts. The response rate for the state survey was 71 percent and for the district survey 62 percent. The surveys were conducted between December 4, 2002, and April 4, 2003. We analyzed the survey data and identified significant results. See figure 5 for a geographic display of responding and nonresponding states. The study population for the district survey consisted of public school districts contained in the Department of Education’s Common Core of Data (CCD) Local Education Agency (LEA) file for the 2000-2001 school year. From this, we identified a population of 14,503 school districts in the 50 states and the District of Columbia. Sample Design. The sample design for this survey was a stratified sample of 830 LEAs in the study population. This sample included the 100 largest districts and a stratified sample of the remaining districts with strata defined by community type (city, urban, and rural) and by the district’s poverty level. Table 7 summarizes the population, sample sizes, and response rates by stratum. Estimates. All estimates produced from the district sample in this report are for a target population defined as all public school districts in the 50 states and the District of Columbia for the 2002-03 school year. Estimates to this target population were formed by weighting the survey data to account for both the sample design and the response rates for each stratum. For our estimates of high- and low-poverty districts, we defined high-poverty districts as those with participation rates in the free and reduced meals program of 70 percent or above. Low-poverty districts were defined as those with free and reduced meals program rates at 30 percent and below. 
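The weighting described above can be sketched in a few lines of code. This is a minimal illustration of design weighting adjusted for response; the stratum names and all counts below are hypothetical placeholders, not values from the study (only the 14,503-district population total matches the report).

```python
# Illustrative sketch of stratified survey weighting: each responding
# district is weighted by (stratum population / stratum respondents),
# so respondent totals generalize to the full population of districts.
# All stratum names and counts are hypothetical.
strata = {
    # name: (population_size, respondents, respondents_with_attribute)
    "largest_100": (100, 80, 30),
    "urban_high":  (1200, 90, 40),
    "urban_low":   (1500, 95, 25),
    "rural_high":  (4000, 120, 55),
    "rural_low":   (7703, 126, 35),
}

def weighted_proportion(strata):
    """Estimate the population proportion of districts with the attribute."""
    weighted_yes = 0.0
    total_pop = 0
    for pop, resp, yes in strata.values():
        weight = pop / resp            # design weight adjusted for response
        weighted_yes += yes * weight   # estimated count in this stratum
        total_pop += pop
    return weighted_yes / total_pop

print(f"Estimated proportion: {weighted_proportion(strata):.3f}")
```

Note how the two rural strata dominate the estimate despite modest respondent counts, because their weights are large; this is why unweighted respondent tallies would misstate the population figure.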
One of the advantages of this approach was that it allowed for a sufficient number of cases in each category to conduct statistical analyses. Sampling Error. Because we surveyed a sample of school districts, our results are estimates of a population of school districts and thus are subject to sampling errors that are associated with samples of this size and type. Our confidence in the precision of the results from this sample is expressed in 95 percent confidence intervals. The 95 percent confidence intervals are expected to include the actual results for 95 percent of the samples of this type. We calculated confidence intervals for our study results using methods that are appropriate for a stratified, probability sample. For the percentages presented in this report, we are 95 percent confident that the results we would have obtained if we had studied the entire study population are within plus or minus 10 percentage points of our results, unless otherwise noted. For example, we estimate that 34 percent of the districts target at least some funds to specific types of schools. The 95 percent confidence interval for this estimate would be no wider than plus or minus 10 percentage points, or from 24 percent to 44 percent. Nonsampling Error. In addition to these sampling errors, the practical difficulties in conducting surveys of this type may introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, the respondents’ answers may differ from those of districts that did not respond, or errors could be made in keying questionnaire data. We took several steps to reduce these errors. To minimize some of these errors, the state and district questionnaires were each pretested three times to ensure that respondents would understand the questions and that answers could be provided. To increase the response rate, sampled districts received two calls encouraging them to complete and return the questionnaire. 
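The interval arithmetic above can be reproduced for a simple case. The sketch below uses the normal approximation for a simple random sample; the report’s actual intervals account for the stratified design, and the effective sample size of 86 used here is an assumed figure chosen only so the half-width comes out near 10 percentage points.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95 percent normal-approximation confidence interval for a
    proportion. Simple-random-sample formula, for illustration only;
    a stratified design would combine stratum-level variances."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)

# With the report's example point estimate of 34 percent and an
# assumed effective sample size of 86, the half-width is roughly
# 10 percentage points, giving an interval near 24 to 44 percent.
low, high = proportion_ci(0.34, 86)
print(f"{low:.2f} to {high:.2f}")
```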
We also performed an analysis to determine whether some sample-based estimates compared favorably with known population values. We performed this analysis for 12 estimates providing information on students, teachers, number of schools, and administrators that covered major segments of those groups. For example, we did an analysis on all full-time equivalent classroom teachers but not on teachers of ungraded students, which is a very small proportion of all teachers. We used these values for the 511 sample respondents to produce sample estimates to the total population of all 14,503 districts. These estimated values, their associated 95 percent confidence intervals, and their true population values are presented in table 8. For 11 out of the 12 estimates we examined, the population value falls within the 95 percent confidence interval for the estimate, thus providing some indication that respondents to this survey reflect the 12 characteristics we examined in the population. Although these characteristics were selected because they might be related to other characteristics of district teachers and district administration, we do not know the extent to which the survey respondents would reflect the population characteristics for the specific questions asked on our survey. For example, we are not certain whether districts responding to the survey were further along in the implementation of Title II requirements than the districts that did not respond. Our sample was not designed to produce geographical area estimates, and we did not explicitly stratify our sample by state or region. However, our sample was selected nationally and all regions are represented in our sample. The following table summarizes sample size and responses for 10 regions. 
On the basis of the national distribution of our sample and on the result of our comparison of a set of survey estimates to known population values from the CCD file, we chose to include the survey results in our report and to produce sample-based estimates to the total population of school districts in our study population. We chose not to report the survey responses to questions asking about the number of highly qualified teachers because other information from the survey and our in-depth discussions with officials during our site visits indicated that the respondents could not accurately answer the question. For example, three of five officials who completed the survey but did not answer this question commented in the survey that they could not answer because they could not count the number of teachers. Additionally, one official who reported that 100 percent of the teachers were highly qualified and another who reported 94 percent also commented that they were unable to count their teachers. During our site visits we learned that officials did not know the criteria for some groups of teachers and did not have data systems that allowed them to track teachers by class; therefore, they could not accurately determine how many teachers were highly qualified. We also visited 8 states with a range of characteristics that might affect their ability to meet the Title II requirement for highly qualified teachers. Those states were California, Connecticut, Illinois, Iowa, Maryland, North Carolina, Delaware, and Wyoming. We visited and interviewed officials in 2 districts in each state, one of which was a high-poverty district, and one school in each district. We interviewed Department of Education officials, and officials and representatives from several professional organizations. We also reviewed the legislation, the regulations, and guidance as well as related reports and other relevant documents. 
We conducted our work between July 2002 and May 2003 in accordance with generally accepted government auditing standards. Table 10 lists our summaries of the authorized activities on which states can spend Title II funds and shows the five categories we used to group them. Table 11 lists our summaries of the authorized activities on which districts can spend Title II funds and shows the two categories we used to group them. In addition to those named above, the following individuals made important contributions to this report: Susan Higgins, Anjali Tekchandani, David Garten, Joel Grossman, Richard Kelley, Mark Ramage, Minnette Richardson, Susan Bernstein, and Jeff Edmondson.
In December 2001, Congress passed the No Child Left Behind Act (NCLBA). The act required that all teachers of core subjects be highly qualified by the end of the 2005-06 school year and provided funding to help states and districts meet the requirement. In general, the act requires that teachers have a bachelor's degree, meet full state certification, and demonstrate subject area knowledge for every core subject they teach. This report focuses on the (1) number of teachers who met the highly qualified criteria during the 2002-03 school year, (2) conditions that hinder states' and districts' ability to meet the requirement, and (3) activities on which states and districts were planning to spend their Title II funds. GAO surveyed 50 states and the District of Columbia and a nationally representative sample of districts about their plans to implement the requirement. GAO also visited and interviewed officials in 8 states and 16 districts to discuss their efforts to implement the law. GAO could not develop reliable data on the number of highly qualified teachers because states did not have the information needed to determine whether all teachers met the criteria. Officials from 8 states visited said they did not have the information they needed to develop methods to evaluate current teachers' subject area knowledge and the criteria for some teachers were not issued until December 2002. Officials from 7 of 8 states visited said they did not have data systems that could track teacher qualifications for each core subject they teach. Both state and district officials cited many conditions in the GAO survey that hinder their ability to have all highly qualified teachers. State and district officials reported teacher pay issues, such as low salaries and lack of incentive pay, teacher shortages, and other issues as hindrances. 
GAO's survey estimates show that significantly more high-poverty than low-poverty districts reported hindrances, such as little support for new teachers. Rural district officials cited hindrances related to their size and isolated locations. State officials reported they needed assistance or information from Education, such as in developing incentives to teach in high-poverty schools, and Education's strategic plan addresses some of these needs. To help meet the requirement for highly qualified teachers, state survey respondents reported they planned to spend about 65 percent of their Title II funds on professional development activities authorized under Title II, and districts planned to spend an estimated 66 percent on recruitment and retention. Both state and district officials planned to spend much larger amounts of funds from sources other than Title II funds on such activities. High-poverty districts planned to spend more Title II funds on recruitment and retention than low-poverty districts. State and district officials visited said that most activities were a continuation of those begun previously.
Depot-level maintenance and repair of military weapons and equipment involve extensive shop facilities, specialized equipment, and highly skilled technical and engineering personnel. In recent years, the distinction between depot maintenance and lower levels of maintenance has become less pronounced. Public sector depot maintenance work is currently conducted in 22 major government-owned and government-operated maintenance depots and a number of other government-owned facilities, including post-production software support activities, laboratories, and Army arsenals. According to DOD officials, private sector depot maintenance work is conducted by commercial contractors at about 1,100 contractor-owned and -operated facilities at various geographic locations. The allocation of depot maintenance workload between the public and private sectors is governed by 10 U.S.C. 2466. According to the statute, at the time of our review, not more than 40 percent of funds made available to a military department or defense agency for depot-level maintenance and repair was to be used to contract for performance by nonfederal government personnel—also referred to as the 60/40 rule. The fiscal year 1998 Defense Authorization Act increased the percentage of depot-level maintenance and repair work that can be contracted to nonfederal government personnel to not more than 50 percent, from the previous 40-percent maximum. Other statutes that affect the extent to which depot-level workloads can be converted to private sector performance include 10 U.S.C. 2469, which provides that DOD-performed depot maintenance and repair workloads valued at not less than $3 million cannot be changed to contractor performance without a public-private competition, and 10 U.S.C. 
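The statutory cap described above reduces to a simple share test on depot maintenance funding. The sketch below is an illustration of that arithmetic only; the function name and dollar figures are hypothetical and do not come from the report.

```python
def private_share_within_cap(private_funds, total_funds, cap=0.40):
    """Check whether the private-sector share of depot maintenance
    funding stays within the statutory cap: 40 percent under the
    60/40 rule, raised to 50 percent by the fiscal year 1998
    Defense Authorization Act."""
    return (private_funds / total_funds) <= cap

# Hypothetical figures: $450 million contracted out of a $1 billion
# depot maintenance program, a 45 percent private-sector share.
print(private_share_within_cap(450, 1000))            # exceeds the 40% cap
print(private_share_within_cap(450, 1000, cap=0.50))  # within the 50% cap
```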
2464, which, at the time of our review, provided that DOD activities should maintain a logistics capability sufficient to ensure technical competence and resources necessary for an effective and timely response to a national defense emergency. In April 1996, we testified before the Subcommittee on Readiness, Senate Committee on Armed Services, and the Subcommittee on Military Readiness, House Committee on National Security, on DOD’s revised depot maintenance policy and its report on public-private depot workload allocations. We noted that DOD’s policy clearly intended to shift additional workload to the private sector when readiness, sustainability, and technology risks can be overcome. In May 1996, we reported on DOD’s reported public-private depot workload allocations. We noted that: with few exceptions, the 60/40 rule had not affected past public-private workload allocations; without repeal of the 60/40 rule, the military departments would not be able to follow through on large-scale plans to compete depot maintenance workloads between public and private sector activities; DOD’s report did not provide a complete, consistent, and accurate picture of depot maintenance workloads because it did not include (1) interim contractor support and contractor logistics support costs, (2) labor costs to install modification and conversion kits, and (3) software maintenance support, most of which was obtained from private sector firms using procurement funding; and DOD’s reported public sector workload allocation included costs for parts and services the public depots purchased from private sector contractors, some of which were costs for government-furnished material provided to private contractors. 
In our report, we suggested that Congress may wish to require that (1) all depot maintenance workload categories be included in future 60/40 reports, regardless of funding source, and (2) outlays by public depots for purchases of repair parts and services be included in the private sector’s workload share. For fiscal years 1994 through 1996, DOD was required by law to report to Congress on the public and private sector workload mix for each military department and agency. Although this requirement was not effective for fiscal year 1997, the Deputy Under Secretary of Defense (Logistics), in January 1997, as part of his oversight responsibilities to verify current and projected compliance with 60/40 statutory requirements, asked the military departments to quantify planned funding for depot maintenance workloads assigned to the public and private sectors for fiscal years 1996 through 2002. The military departments were requested to follow an approach similar to the one used in responding to previously mandated congressional reporting requirements. The military departments and the Defense Logistics Agency developed summary workload distribution reports for the Office of the Secretary of Defense (OSD) based on financial information contained in readily available budget data. In May 1997, OSD prepared a briefing for the Defense Depot Maintenance Council that showed the percentage of public and private sector depot maintenance workload distribution for each military department. Our review does not address subsequent changes in the law impacting public-private depot workload allocation requirements contained in the fiscal year 1998 Defense Authorization Act, approved November 18, 1997. This act contains an amendment to 10 U.S.C. 2466 mandating annual reports of public and private sector workload allocations. It also contains a new section 2460 of title 10, which specifies the kinds of work DOD is to include within the definition of depot-level maintenance. 
This will impact DOD’s future quantifications of public and private sector workload allocations. These changes address several workload reporting issues raised in this report. DOD’s first report to be submitted to Congress by February 1, 1998, is to include information on public and private sector depot-level maintenance spending for fiscal year 1997. DOD’s analysis of depot maintenance workload distribution showed that it provided funding of about $10.5 billion for depot maintenance requirements in fiscal year 1996, of which workload valued at $7.1 billion, or 68 percent, was assigned to public sector facilities and about $3.4 billion, or 32 percent, was assigned to the private sector. In addition, DOD’s data showed that it provided an additional $706 million for work acquired from the private sector through interim contractor support (ICS) and contractor logistics support (CLS) contracts. At the time of our review, the law did not specifically state whether such contractor-provided maintenance should be considered in 60/40 calculations. The recently passed provisions at 10 U.S.C. 2460 would establish a statutory definition of depot-level maintenance and repair. Among other things, it specifies that both ICS and CLS are to be included within the definition. As a result both ICS and CLS must be included in private sector workload calculations required under the newly amended provision of 10 U.S.C. 2466. According to the data in DOD’s analysis, Army and Navy depot maintenance funding provided to the private sector will not exceed 40 percent in any year from fiscal year 1996 to 2002, whether or not ICS and CLS are included. The percentage of Air Force depot maintenance funding provided to the private sector will vary considerably depending on the outcome of planned public and private sector workload competitions. 
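The share calculations behind these figures can be sketched as simple arithmetic. The following is an illustrative sketch only, not part of any DOD reporting system; the dollar amounts are the fiscal year 1996 totals cited above, in billions, and the helper function is invented for illustration.

```python
# Illustrative arithmetic only -- the helper function is not a DOD tool.
def private_share(public, private, ics_cls=0.0):
    """Private-sector percentage of total depot maintenance funding.

    If ics_cls is supplied, interim contractor support / contractor
    logistics support funding is counted in the private-sector share.
    """
    total = public + private + ics_cls
    return 100.0 * (private + ics_cls) / total

PUBLIC_FY96 = 7.1     # $ billions assigned to public-sector facilities
PRIVATE_FY96 = 3.4    # $ billions assigned to private-sector contractors
ICS_CLS_FY96 = 0.706  # $ billions of ICS/CLS contractor support

print(f"excluding ICS/CLS: {private_share(PUBLIC_FY96, PRIVATE_FY96):.1f}%")
print(f"including ICS/CLS: "
      f"{private_share(PUBLIC_FY96, PRIVATE_FY96, ICS_CLS_FY96):.1f}%")
```

Run on the FY 1996 totals, the sketch yields roughly 32.4 percent excluding ICS/CLS and roughly 36.6 percent including it, consistent with the report's observation that, DOD-wide, including ICS/CLS did not by itself push the private-sector share over the 40-percent ceiling.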
In September 1997, the Air Force announced that Warner Robins Air Logistics Center won a public-private competition for the C-5 aircraft depot-level workload. Should the private sector win the remaining competitions, DOD data show that its share will exceed 40 percent only when ICS and CLS costs are included. The percentage of public and private sector depot maintenance work reported by the military departments for fiscal years 1996 through 2002 and the potential impact of including ICS and CLS funding in the private sector workload distribution are shown in table 1. For the Air Force, the table provides percentage allocations for two scenarios. The first scenario assumes that public depots win all ongoing public-private workload competitions, and the second reflects that the public sector wins the C-5 workload and that the private sector wins all others. The Deputy Under Secretary’s January 1997 request for data on depot maintenance funding required that the military departments develop supplemental information for certain maintenance-related funding obtained through ICS contracts, CLS contracts, and other innovative logistics support arrangements. OSD program officials told us they asked the military departments to report these data separately because without collection of these data, DOD would have no vehicle for determining the impact on the public and private sector’s workload allocation. DOD does not have accurate data on public and private sector workload distributions. OSD’s January 1997 guidance to the military departments for identifying and reporting public-private depot maintenance activities and workload distribution was vague and subject to interpretation. Consequently, the military departments used what OSD officials described as an ad hoc data collection process. As a result, workload distribution data reported by the services were inconsistent and incomplete. 
DOD directives, regulations, and publications provide a broad working definition for depot maintenance workloads, including the repairing, rebuilding, and major overhaul of major end items, parts, assemblies, and subassemblies; limited manufacture of parts; technical support; modifications; testing; reclamation; and computer software maintenance. We found that the military departments’ efforts to accurately define and quantify their depot maintenance workloads were complicated by vague and conflicting supplemental OSD guidance. For example, the Deputy Under Secretary’s January 1997 request for public and private sector depot maintenance funding information states that 60/40 reporting (1) should consider all depot maintenance work, irrespective of funding source, and (2) should be based on only “maintenance and repair work,” while modification work was to be considered “non-maintenance work.” Our discussions with officials from the services and the defense agencies showed that officials responsible for public-private workload data collection and quantification differed on which defense activities and components should be reporting and which types of workloads should be included. For example, service and defense agency officials stated that the guidance is unclear as to whether repair and maintenance funding for items not normally repaired in a traditional depot environment is to be included. These workloads include repair and maintenance funding of space systems, medical equipment, computer hardware, and classified programs. OSD has not established a uniform and consistent approach for collecting and quantifying current and planned public and private sector depot maintenance funding. As a result, the military departments adopted an ad hoc data collection process, relying on what they considered to be the best available information and their interpretation of DOD reporting guidance. 
Consequently, we found the data reported by the services and agencies to be inaccurate, inconsistent, and incomplete. Our review of pertinent DOD and military department regulations and directives indicates that teardown, overhaul, and repair work accomplished by public and private sector activities concurrent with modification, conversion, and upgrade programs is included under DOD’s broadly defined list of depot maintenance workload categories. However, some military department officials responsible for workload allocation data collection told us that OSD had advised them to include such work in quantifying the portion of depot funds provided to the public sector but suggested excluding similar funding from the private sector workload quantification. The military departments spend over a billion dollars annually to install modification kits to upgrade and modernize existing weapon systems in the private sector. Due to the ad hoc nature of the data collection process, each of the military departments treated modification, conversion, and upgrade projects differently. For example, the Air Force and the Naval Air Systems Command included funding for installing modification kits provided through procurement appropriations in their quantifications of both the public and private sector workload allocations, while the Naval Sea Systems Command, for fleet modernization programs, included such funding in public sector expenditures but excluded it for programs performed by private sector activities. The Army excluded such funding for modification programs accomplished by both public and private sector activities. Adding the funding for teardown, overhaul, repair, and installation of modification and conversion kits in the Army’s workload mix calculations could show as much as 60 percent of the available fiscal year 1996 depot maintenance funding going to the private sector, rather than the 32 percent the Army reported. 
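The swing from a 32-percent to a roughly 60-percent private share can be illustrated with a hypothetical calculation. In the sketch below, only the 32-percent starting share comes from the report; the assumed Army base-funding total, the size of the added modification work, and the assumption that it is entirely private-sector work are invented for illustration.

```python
# Hypothetical sketch: how counting excluded modification/conversion work
# shifts a reported private-sector share. ASSUMED_ARMY_BASE and
# ADDED_MOD_WORK are assumptions, not figures from the report.
def share_after_adding(base_total, private_pct, added, added_private_frac=1.0):
    """Recompute the private share after adding previously excluded work."""
    private = base_total * private_pct / 100.0 + added * added_private_frac
    return 100.0 * private / (base_total + added)

ASSUMED_ARMY_BASE = 1.0  # $ billions of reported depot maintenance funding (assumed)
ADDED_MOD_WORK = 0.7     # $ billions of modification work, treated as private (assumed)

print(f"{share_after_adding(ASSUMED_ARMY_BASE, 32.0, ADDED_MOD_WORK):.0f}%")
```

Under these assumptions the recomputed share is 60 percent, showing how sensitive the reported mix is to which workload categories are counted.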
For example, audit work being conducted by the Army Audit Agency shows that funding for installing modification and upgrade hardware on two major modification efforts—the M1 Abrams upgrade and Apache Longbow conversion programs—could total more than $700 million in fiscal year 1996, with most work being done by private sector activities. Army program officials told us they excluded funding for teardown, overhaul, and repair work involved with modification and conversion programs from 60/40 reporting because (1) they interpreted the 60/40 statute to only address work funded by the operations and maintenance appropriation and (2) OSD data collection guidelines specifically state that modification and conversion work was to be considered “non-maintenance” work for purposes of 60/40 reporting. In commenting on a draft of this report, Army officials stated that procurement appropriation funded modification and conversion work was not considered for 60/40 reporting because the data was not readily available. However, they indicated that this reporting deficiency would be corrected for future quantifications of public and private sector workload data. DOD directives and regulations specify that depot maintenance includes all aspects of software maintenance; however, DOD has not clearly defined the kinds of software maintenance work that should be quantified and considered in its 60/40 reports. Our work shows that the military departments reported some software maintenance support funding when they were readily identifiable but excluded others. The value of the excluded software maintenance workloads could exceed $1 billion. For example: The Air Force’s analysis included software maintenance support funding for workloads funded by the Air Force Materiel Command but excluded most software maintenance funding for the Centralized Integration Support Facility in Colorado Springs, Colorado, a facility funded by the Air Force Space Command. 
Air Force officials acknowledged that funds for this activity should have been included in its workload analysis and stated that this deficiency will be corrected in future reports. The Navy’s analysis included software maintenance support costs for workloads funded through traditional depot facilities, including naval aviation depots, naval shipyards, or Marine Corps logistics centers, using operations and maintenance appropriation funding. However, software support obtained with procurement funding was not included. For example, the Marine Corps did not report funding for software maintenance work performed by an approximately 300-person support center located at Camp Pendleton, California, which reports to the Marine Corps Systems Command, an organization not traditionally recognized as being a provider of depot maintenance support. We also found that when software work was included in the 60/40 report, the public sector workload quantification included funding for work being accomplished by private sector personnel assigned to work on government-owned and -operated installations. For example, our work showed that the Army’s analysis included $37.7 million under the public sector for software support workloads assigned to the Communications and Electronics Command, Fort Monmouth, New Jersey; the Tank Automotive Command, Warren, Michigan; and the Aviation and Missile Command, Redstone Arsenal, Alabama, even though 734 of the 1,150 software specialists at these locations are private sector employees. In addition, the Army’s analysis included $97.4 million under the private sector share for software maintenance workloads assigned directly to private sector contractors. In discussing a draft of this report, DOD stated that only depot-level software maintenance was to be included in public-private workload allocation reports. As stated previously, the distinction between the depot-level and lower levels of maintenance has become less pronounced. 
Subsequent to DOD’s response, in December 1997 the Deputy Under Secretary of Defense (Logistics) issued a memorandum to more clearly define depot-level software maintenance. The U.S. Transportation Command, which has principal components including the Military Traffic Management Command, the Air Mobility Command, and the Military Sealift Command, was not specifically tasked by OSD to develop public and private sector depot maintenance funding information. We found that the Air Force’s analysis of fiscal year 1996 depot maintenance funding included $295 million for support of the Air Mobility Command aircraft. Military Sealift Command officials told us they provided funding of about $83.5 million in fiscal year 1996 for maintenance-related activities, but the portion of the funding attributable to depot-level maintenance was not included in the Navy’s workload allocation analysis. Military Sealift Command officials told us they could not readily determine the public and private sector distribution of maintenance workloads but indicated that most services were obtained from private contractors. Our work also shows that a substantial portion of the funds provided to public depots is ultimately contracted out to the private sector for parts, materials, and labor. OSD’s guidance mentions that public sector depots typically obtain support directly from the private sector for items such as raw materials, replacement parts, and personnel services, but it provides no direction as to how these items should be treated in computing the public and private sector workload mix. This results in inconsistent reporting that overstates the public sector share and understates the private sector share. For example, parts purchased from the private sector and furnished to private sector contractors as government-furnished material are sometimes counted as a public sector cost. 
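The distortion from counting pass-through purchases as public-sector cost can be pictured with a small reclassification sketch. This is illustrative only: the dollar totals are the DOD-wide FY 1996 amounts in billions, but the pass-through fraction used here is an assumption for demonstration, not a DOD-wide figure from the report.

```python
# Illustrative only: reclassify the portion of public depot funding that is
# ultimately spent with private vendors. PASSTHROUGH_FRAC is an assumption.
def reclassify_passthrough(public, private, passthrough_frac):
    """Move the pass-through share of public funding to the private column."""
    moved = public * passthrough_frac
    return public - moved, private + moved

PASSTHROUGH_FRAC = 0.40  # assumed fraction of public funding spent with vendors

pub, priv = reclassify_passthrough(7.1, 3.4, PASSTHROUGH_FRAC)
print(f"adjusted public: ${pub:.2f}B, adjusted private: ${priv:.2f}B")
print(f"adjusted private share: {100 * priv / (pub + priv):.1f}%")
```

Under this assumed fraction, $2.84 billion moves from the public to the private column and the private share rises from about 32 percent to about 59 percent, which is why the treatment of purchased parts, materials, and services matters so much to 60/40 reporting.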
An Army official told us that about 40 percent of the total fiscal year 1996 funding provided to the Army’s five public sector depots will be used to purchase materials, supplies, and services from private sector contractors. In addition, the Army’s major depots currently have 181 contractor-employed artisans, working with government employees, and these costs were included for reporting purposes as public sector funding. In commenting on a draft of this report, DOD officials noted that there is nothing in the legislative history indicating how parts, material, and labor costs are to be counted for the 60/40 requirement. The 60/40 statute applies to the military departments and defense agencies receiving depot maintenance funds; however, OSD asked only the three military departments and the Defense Logistics Agency to report on their current and planned public and private sector depot maintenance funding. Our limited review showed that several defense agencies received funding for depot-level maintenance and that some received a substantial amount of their depot maintenance support from private sector contractors. For example: Officials from the National Security Agency told us they received funding of about $15 million per year to maintain equipment and about $83 million per year to maintain computer software. The agency employs a full-time staff of federal and nonfederal employees to repair equipment on-site, and if equipment cannot be fixed in-house, it is discarded. Officials also told us the agency employs a substantial number of computer software specialists, who develop and maintain computer software programs. Officials from the National Imagery and Mapping Agency told us their depot maintenance budgets for fiscal years 1996 and 1997 averaged about $85 million for each year, of which about 90 percent was attributable to private sector support. Officials from the Defense Intelligence Agency did not report any depot-level maintenance. 
It reported that the Air Force is the executive agent and maintains the Imagery Exploitation Support System, which involves complex computer programs. However, neither the Air Force nor the Defense Intelligence Agency reported the funding for maintaining this software. In discussing a draft of this report, DOD stated that some of the previously described maintenance funding for the defense agencies may have been for other than depot-level maintenance support. However, they stated that they would clarify these uncertainties before the next reporting cycle. Subsequently, in December 1997 the Deputy Under Secretary of Defense (Logistics) asked each of the aforementioned defense agencies to provide public and private sector depot maintenance spending data for the fiscal year 1997 reporting cycle. The Navy collected planned depot maintenance funding for the newly privatized Louisville depot, but due to the ad hoc nature of the data collection process, funding totaling $70 million for fiscal years 1997, 1998, and 1999 was excluded from the Navy’s quantification of private sector depot maintenance data. Navy officials told us the funding projections for Louisville were not readily available when the current 60/40 report was developed for OSD. They stated that future reporting will include funding information for the Louisville facility under the private sector share. DOD’s current approach for collecting information on the allocation of depot maintenance workload between the public and private sectors results in incomplete and inconsistent reporting. This is because the guidance provided to the military departments is imprecise, leaving room for varying interpretations on the data to be reported. Further, it appears that a number of defense components that perform depot maintenance were not included in the data collection effort. 
Given these conditions, DOD, while reporting that about 68 percent of its depot maintenance is performed by the public sector, does not have complete, consistent, and accurate information on this public/private sector workload distribution. If DOD’s analysis of its compliance with 10 U.S.C. 2466 is to be meaningful, improvements are needed in the data collection and reporting process. To improve the accuracy of reporting on the amount of funding for depot maintenance in the public and private sectors, we recommend that the Secretary of Defense develop a standardized methodology for annually collecting depot maintenance funding data for the public and private sectors. This should include (1) a specific definition of the types of activities to be reported, (2) the defense components that should be reporting, and (3) specific data collection processes and procedures the military departments are to follow to ensure complete, accurate, and consistent reporting of the amount of funding provided for public and private sector depot maintenance workloads. DOD officials commented on a draft of this report. They concurred with our findings and recommendations. We made technical corrections in several areas to address their comments. DOD’s response is included in appendix I. Subsequent to DOD’s response, the Deputy Under Secretary of Defense (Logistics), in a memorandum dated December 5, 1997, established an annual process for reporting public and private sector maintenance costs as required by the fiscal year 1998 Defense Authorization Act, which amended 10 U.S.C. 2466. The Secretary’s memorandum also provided new guidance to more clearly define the types of workloads that are to be included in future workload allocation reports and the defense components that should be reporting. This should lead to more accurate and consistent reporting of public and private sector workload allocations. 
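One way to picture a standardized methodology of the kind recommended is a reporting record validated against explicit lists of components, workload categories, and sectors, so that nothing is reported under an undefined category or silently omitted. The sketch below is conceptual; the category and component lists are hypothetical stand-ins, not DOD's actual definitions.

```python
# Conceptual sketch of standardized 60/40 reporting: every record must use
# enumerated components, categories, and sectors. Lists are hypothetical.
from dataclasses import dataclass

WORKLOAD_CATEGORIES = {"overhaul", "modification_install",
                       "software_maintenance", "ics_cls"}
COMPONENTS = {"Army", "Navy", "Air Force", "Marine Corps", "DLA"}
SECTORS = {"public", "private"}

@dataclass
class FundingRecord:
    component: str
    category: str
    sector: str
    amount_millions: float

def validate(rec: FundingRecord) -> FundingRecord:
    """Reject records that fall outside the enumerated reporting definitions."""
    if rec.component not in COMPONENTS:
        raise ValueError(f"unknown component: {rec.component}")
    if rec.category not in WORKLOAD_CATEGORIES:
        raise ValueError(f"unknown workload category: {rec.category}")
    if rec.sector not in SECTORS:
        raise ValueError(f"sector must be public or private: {rec.sector}")
    if rec.amount_millions < 0:
        raise ValueError("amount must be nonnegative")
    return rec
```

For example, `validate(FundingRecord("Army", "software_maintenance", "private", 97.4))` passes, while a record for a component or category outside the enumerated lists is rejected up front rather than dropped from the totals, which is the failure mode the report describes.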
We reviewed OSD’s analysis of public and private sector depot maintenance workload distribution and accompanying reports prepared by the Army, the Navy, the Marine Corps, the Air Force, and the Defense Logistics Agency. We also reviewed pertinent DOD, OSD, and military department directives, regulations, and publications to determine how DOD, OSD, and the military departments define depot maintenance work. We drew extensively from our prior work concerning the public and private sector workload mix. We also reviewed preliminary results of ongoing audit work being conducted by the U.S. Army Audit Agency and a study of depot maintenance software activities being conducted by the Logistics Management Institute. From each of the military departments and OSD, we obtained and reviewed pertinent correspondence and back-up documentation supporting OSD’s public and private sector workload report. Back-up documentation included budget exhibits, computerized worksheets, and summary reports. We did not independently assess the accuracy of the data contained in back-up documentation. We interviewed officials and examined documents at OSD, Army, Navy, Marine Corps, and Air Force headquarters, Washington, D.C.; the Army Materiel Command, Alexandria, Virginia; the Air Force Materiel Command, Dayton, Ohio; the Naval Sea and Air Systems Commands, Arlington, Virginia; and the National Security Agency, Fort Meade, Maryland. To determine if defense agencies and organizations received depot maintenance funds that were not included in OSD’s analysis of public and private sector depot maintenance workload distribution, we selected several defense agencies and nontraditional depot maintenance commands. At the selected agencies and commands, we interviewed officials to determine the extent of maintenance funding received and the distribution between the public and private sectors. These activities included the U.S. 
Transportation Command, the Military Traffic Management Command, the Air Mobility Command, the Military Sealift Command, the National Security Agency, the Defense Intelligence Agency, and the National Imagery and Mapping Agency. We conducted our review from May to August 1997, and except where noted, in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Directors of the Defense Logistics Agency, National Security Agency, Defense Special Weapons Agency, National Imagery and Mapping Agency, and Defense Intelligence Agency; and the Commander, U. S. Transportation Command. Copies will be made available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Julia Denman, Glenn Knoepfle, and David Epstein from the National Security and International Affairs Division and John Brosnan from the Office of General Counsel.
|
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) guidelines and procedures for identifying depot maintenance workloads and quantifying the public- and private-sector share of depot maintenance funding, focusing on: (1) public and private workload distributions as reported by the military departments and defense agencies for fiscal years (FY) 1996 through 2002; and (2) the procedures DOD uses to define and quantify depot workload distribution. GAO noted that: (1) DOD's May 1997 report of public- and private-sector depot maintenance workload distribution for FY 1996 through 2002 did not provide a complete, consistent, and accurate assessment of DOD's public- and private-sector funding; (2) vague Office of the Secretary of Defense guidance and incomplete and inconsistent reporting of data by the military departments and defense agencies contributed to this condition; (3) the workload distribution analysis showed that in FY 1996 DOD spent $7.1 billion for work assigned to public-sector facilities and about $3.4 billion for work assigned to the private sector; (4) in addition, DOD's analysis shows that DOD provided an additional $706 million for depot maintenance-related work acquired from the private sector through interim contractor support and contractor logistics support arrangements; and (5) DOD's depot maintenance workload distribution and supporting data show that: (a) in some cases modification and conversion work obtained from private-sector contractors was not reported but similar work in public depots was included; (b) reporting of computer software maintenance work was inconsistent and perhaps incomplete; (c) public-sector depot maintenance funding included substantial expenditures for goods and services purchased from private-sector contractors, and resulted in inconsistent reporting of the allocation between the public and private sector; and (d) depot maintenance expenditures for equipment and software owned by various 
defense agencies were not reported.
|
Beginning with the National Defense Authorization Act for Fiscal Year 2006, Congress required the Secretary of Defense to issue guidelines requiring DOD to consider using federal employees to perform work that was currently being performed or would otherwise be performed under DOD contracts. Under the guidelines, special consideration was given to contracts that had been performed by federal government employees on or after October 1, 1980, were associated with the performance of inherently governmental functions, had not been awarded on a competitive basis, or were determined to be poorly performed due to excessive costs or inferior quality. The National Defense Authorization Act for Fiscal Year 2008 codified at section 2463 of title 10 of the United States Code (U.S. Code) revised the guidelines and procedures for use of civilian employees to perform DOD functions. This section directed the Under Secretary of Defense for Personnel and Readiness (P&R) to devise and implement guidelines and procedures to ensure that consideration was given to using, on a regular basis, DOD civilian employees to perform new functions. In addition, the guidelines and procedures were to ensure that functions that were performed by contractors and could be performed by DOD civilian employees were given the same consideration. Congress also directed that the guidelines and procedures may not include any specific limitation or restriction on the number of functions or activities that may be converted to performance by DOD civilian employees. The act further provided that DOD may not conduct a public-private competition prior to in-sourcing such functions. The act also added a new section describing the functions that were to receive special consideration from DOD when considering the use of DOD civilian employees. 
Additionally, the act required that special consideration be given to a new requirement that is similar to a function previously performed by DOD civilian employees or is a function closely associated with the performance of an inherently governmental function. Pub. L. No. 110-181, § 807 (2008). The National Defense Authorization Act for Fiscal Year 2011 again amended section 2330a of title 10 of the U.S. Code. Among other things, the act now requires DOD to report the number of contractor employees, expressed as full-time equivalents for direct labor, using direct labor hours and associated cost data collected from contractors (except that estimates may be used where such data is not available and cannot reasonably be made available in a timely manner for the purpose of the inventory). Section 2330a(e) of title 10 of the U.S. Code requires each Secretary of a military department or head of a defense agency to review this annual inventory for several purposes, one of which is to identify activities that should be considered for conversion to performance by DOD civilian employees pursuant to section 2463 of title 10 of the U.S. Code. In turn, section 2463 requires the Secretary of Defense to make use of the 2330a inventory for the purpose of identifying functions that should be considered for performance by DOD civilian employees. Under DOD’s policy for determining the appropriate mix of military and DOD civilians and contractor support, risk mitigation shall take precedence over cost savings when necessary to maintain appropriate control of government operations and missions. This policy provides manpower mix criteria for assessing which functions warrant performance by military or civilian personnel due to their associated risks, and which functions will therefore be considered exempt from performance by contractor support. 
DOD issued in-sourcing guidance in April 2008 and again in May 2009 to assist components in implementing these legislative requirements. According to the May 2009 guidance, DOD components should first confirm that a particular mission requirement is still valid and enduring; that is, that DOD will have a continued need for the service being performed. If the requirement is still valid, the component should consider in-sourcing the function. If the component determined that the function under review was inherently governmental or exempt from private sector performance, no cost analysis was required. According to the May 2009 in-sourcing guidance, possible rationales to in-source include the following:

- The function is inherently governmental; that is, the function is so closely related to the public interest as to require performance by government employees.

- The function is exempt from private sector performance to support the readiness or workforce management needs of DOD. According to DOD’s policy for determining the appropriate mix of military, DOD civilians, and contractor support, a function could be exempt from private sector performance for a variety of reasons, including career progression, continuity of infrastructure operations, and mitigation of operational risk.

- The contract is for unauthorized personal services. Special authorization is required for DOD to engage in personal services contracts, which create a direct employer/employee relationship between the government and the contractor’s personnel.

- There are problems with contract administration due to a lack of sufficiently trained and experienced officials available to manage and oversee the contract.

Other than in-sourcing, OUSD (P&R) officials told us that DOD may be able to address the above circumstances by, among other approaches, restructuring the contract or changing the way the contract is overseen. 
DOD’s guidance does not require components to prepare cost estimates when they cite one of the above reasons as the basis for their in-sourcing decision. In situations in which none of the factors cited above are applicable, DOD’s guidance instructs components to provide “special consideration” as discussed above, and if DOD civilians could perform the work, conduct a cost analysis to determine whether DOD civilians were the lowest-cost provider. According to a December 2009 in-sourcing plan submitted to Congress, DOD based this requirement on section 129a of title 10 of the U.S. Code, which requires DOD to determine the least costly personnel consistent with military requirements and other needs of the department. Thus, DOD components may also in-source for cost reasons when the work could otherwise be performed by a private contractor. DOD stated in its fiscal year 2010 budget submission to Congress that it expected to save $900 million in fiscal year 2010 from in-sourcing. To support the in-sourcing initiative, in April 2009 the Office of the Under Secretary of Defense (Comptroller) issued a budget decision which decreased funding for support service contracts and increased funding for new civilian authorizations across DOD components. In December 2009, DOD issued a report to Congress on its planned fiscal year 2010 in-sourcing efforts, stating that after component reviews, the department planned to create as many as 17,000 new civilian authorizations as a result of in-sourcing in fiscal year 2010. In August 2010, the Secretary of Defense stated that he was not satisfied with the department’s progress in reducing over-reliance on contractors. 
Representatives of OUSD (P&R) and the Office of the Under Secretary of Defense (Comptroller) told us that although DOD avoided $900 million in costs for contracted support services in fiscal year 2010 due to the budget decision to reduce funds associated with in-sourcing, total spending across all categories of service contracts increased in fiscal year 2010 by about $4.1 billion. To accelerate the process and achieve additional savings, the Secretary directed a 3-year reduction in funding for service support contracts categorized by DOD as contracted support services. He also directed a 3-year freeze on the level of DOD civilian authorizations at OSD, the defense agencies, and the Combatant Commands, and stated that with regard to in-sourcing, no more DOD civilian authorizations would be created after the then-current fiscal year to replace contractors. He also noted that some exceptions could be made for critical areas such as the acquisition workforce. Further, the statutory requirement to regularly consider in-sourcing contracted services remains in effect, and DOD officials told us that, accordingly, in-sourcing continues in the department, though on a more limited basis. See figure 1 for a timeline of key events related to DOD in-sourcing. Additionally, section 115b of title 10 of the U.S. Code requires DOD to annually submit to the defense committees a strategic workforce plan to shape and improve its civilian workforce. Among other requirements, the plan is to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. OUSD (P&R) is responsible for developing and implementing the strategic plan in consultation with the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Since 2001, we have listed federal human capital management, of which strategic workforce planning is a key part, as a governmentwide high-risk area. 
Similarly, we have identified challenges with having a sufficient number of adequately trained acquisition and contract oversight personnel as a factor in continuing to identify DOD contract management as a DOD-specific high-risk area. DOD’s September 2011 in-sourcing report addressed the legislative requirements to report the service or agency involved with each of its fiscal year 2010 in-sourcing actions and the rationale for each action, but did not report the number of contractor employees whose functions were in-sourced, as specified in the act. DOD stated that it could not report the number of contractor employees because it contracts for services, rather than hiring contractor employees directly. An OUSD (P&R) official noted that one of the data elements Congress has required DOD to include in its annual inventories of contracted services is the number of contractor employees, expressed as full-time equivalents, that performed each activity, and DOD is in the process of implementing a revised approach to collect these data directly from contractors. DOD’s report identified nearly 17,000 newly created civilian authorizations as a result of in-sourcing actions in fiscal year 2010, and for each of these new authorizations, the department identified the DOD component involved with the decision. For example, DOD reported that 42 percent of the new authorizations were established in the Army; 28 percent in the Air Force; 16 percent in the Department of the Navy (including the Marine Corps); and 14 percent in other DOD agencies. The report also in many cases identified the major command, suborganization, or directorate of each DOD component that made the in-sourcing decision. For example, the Air Force identified whether Air Combat Command, U.S. Air Forces Europe, or another agency within the Air Force made the decision. See figure 2 for the overall distribution of DOD’s fiscal year 2010 in-sourcing actions across its components. 
The report also provided information on the rationale for each in-sourcing action across DOD. According to DOD, half of the actions were based on a determination that the function would be more cost effective if performed by DOD civilian employees. While section 323 of the FY11 NDAA did not require DOD to report cost data on in-sourcing, DOD issued guidance in January 2010 on cost estimating methodology for cost-based in-sourcing decisions and the military departments collected and reported some cost estimate data to OUSD (P&R). See appendix II for information on DOD’s guidance on estimating in-sourcing costs and collection of cost estimate data. Additionally, DOD indicated in its September 2011 in-sourcing report to Congress that about 41 percent of the new authorizations would perform functions DOD determined to be exempt from private sector performance, such as those necessary for career progression reasons, continuity of infrastructure operations, or risk mitigation (which included oversight and control of functions that are closely associated with inherently governmental functions). Lastly, DOD reported that about 9 percent of the new authorizations were created to perform work that was determined to be inherently governmental (see fig. 3). Our analysis of the data contained in the DOD in-sourcing report showed that the military services differed in the rationales they cited as the basis for their in-sourcing actions. For example, 86 percent of the Army’s new authorizations (5,969 of 6,953) resulting from in-sourcing were deemed exempt from private sector performance in order to reduce the risks associated with contractors performing particular functions that were closely associated with inherently governmental functions. In contrast, 95 percent of the Air Force’s new in-sourcing authorizations (4,495 of 4,732) were cost-based and 100 percent of the new Marine Corps authorizations (all 1,042) were cost-based. 
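The service-level shares cited above follow directly from the reported authorization counts; as a quick arithmetic check:

```python
# Recomputing reported rationale shares from the authorization counts
# cited in DOD's September 2011 in-sourcing report.
def share(part, total):
    """Percentage of total, rounded to the nearest whole percent."""
    return round(100 * part / total)

print(share(5969, 6953))  # Army authorizations deemed exempt: 86
print(share(4495, 4732))  # Air Force cost-based authorizations: 95
print(share(1042, 1042))  # Marine Corps cost-based authorizations: 100
```

Each recomputed share matches the percentage reported in the text.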
While the Navy reported that the basis for its fiscal year 2010 in-sourcing actions varied (26 percent or 441 cost-based, 31 percent or 529 inherently governmental, and 43 percent or 716 exempt from private sector performance, out of a total of 1,686), the Navy’s largest major commands by volume of in-sourcing actions each cited primarily one basis for their in-sourcing actions, which differed among commands. For example, Naval Sea Systems Command reported that its in-sourcing actions largely involved functions considered exempt from private sector performance for career progression reasons, while Pacific Fleet Command in-sourced 223 out of 224 positions for cost reasons. See figure 4 for the distribution of the reasons cited for in-sourcing for each military service. OUSD (P&R) and military service officials told us these differences reflect the specific missions and functions of commands and differences in the emphases of military services in the in-sourcing process. For example, Army officials told us that the Army chose to in-source a large number of functions that were closely associated with inherently governmental functions to reduce the risk associated with having contractors perform that work. By contrast, Air Force officials told us that they gave “special consideration” to in-sourcing functions closely associated with inherently governmental functions; however, because the Air Force had sufficient in-house capability in place to oversee the contracted work and could continue contracting for those functions, it performed cost estimates and in-sourced these functions for cost reasons. Under DOD’s implementation of section 2463 of title 10 of the U.S. Code, even though a function is identified as closely associated with an inherently governmental function, unless that function meets DOD’s exempt criteria, the function may only be in-sourced if a cost savings will result. 
Furthermore, our work found that differences in the reasons cited for the in-sourcing actions were, in part, due to actions by the military services and major commands to focus their efforts on different objectives when identifying contracts for possible in-sourcing. For example, Air Force and Marine Corps command officials we met with indicated that their objective was to realize cost savings from in-sourcing in order to live within the budget reductions associated with the DOD Comptroller’s April 2009 budget decision, which cut funds from contracted services and placed a portion of those funds in civilian authorization accounts. By contrast, officials of Naval Sea Systems Command told us they pursued an in-sourcing process based on an analysis the command had performed of weaknesses in its internal capabilities and over-reliance on contractors, and this resulted in categorizing the command’s in-sourcing actions as exempt from private sector performance for career progression reasons. Similarly, officials we met with at one Army command told us that it in-sourced mainly due to a statutory requirement that security guards on military bases be government civilians. DOD’s in-sourcing report further noted that in-sourcing has been an effective tool for the department to rebalance its workforce, realign inherently governmental and other critical work to government performance, and in many cases, generate resource efficiencies for higher priority goals. DOD’s in-sourcing report did not provide the number of contractor employees whose functions were in-sourced as required, stating that the department did not report this information because the department does not directly employ or hire individual contractor employees. DOD further stated that the department contracts for services to be performed, so the number of employees used to perform these services is not a decision of the department but is at the discretion of the contractor. 
The report also stated that the department’s in-sourcing actions are focused on services and not individual contractor positions or employees. OUSD (P&R) officials told us that DOD focuses on contracting for services rather than the number of contractor employees providing these services. OUSD (P&R) officials further noted that the department does not currently have complete information on the number of full-time equivalents of contractor employees providing services to the department. We recognize that the manner in which the service will be performed under the contract is often a decision of the contractor. However, the level of contractor personnel required to perform each activity is a key component of total workforce management. As previously noted, section 2330a of title 10 of the U.S. Code requires DOD to submit to Congress an annual inventory of all activities performed pursuant to contracts for services and data associated with each activity to include the number of contractor employees, expressed as full-time equivalents, based on the number of direct labor hours and associated cost data collected from contractors, paid for performance of the contracted services. Our prior work has found that DOD faces limitations in obtaining or estimating this information. For example, we found that the federal government’s primary data system for tracking information on contracting actions does not provide all the data elements required for the inventory of contracted services. Though DOD has submitted four annual inventories to Congress, as noted in our prior work, with the exception of the Army’s inventory data, the information in the DOD inventories is largely derived from databases that do not collect the information required by section 2330a of title 10 of the U.S. Code. 
In its September 2011 in-sourcing report to Congress, DOD noted that ongoing efforts to collect the information required by section 2330a may in the future help inform the number of contractor full-time equivalents in-sourced. In November 2011, DOD submitted to Congress a plan to collect personnel data directly from contractors. According to this plan, DOD will institute a phased-in approach to do so by fiscal year 2016. To produce the report on fiscal year 2010 in-sourcing actions, OUSD (P&R) requested that DOD components provide certain information about their fiscal year 2010 in-sourcing actions, and DOD and the military departments took varying, and in some instances limited, approaches to ensuring the reliability of the reported data. For example, the Air Force required major commands to certify the accuracy of the data they reported to Air Force headquarters, while the Navy also delegated responsibility for ensuring data reliability to its major commands but did not establish a policy requiring data certifications. GAO’s Standards for Internal Control in the Federal Government states that internal controls, which include verifications and edit checks, help provide management with reasonable assurance that agencies have achieved their objectives, including compliance with applicable laws and regulations and the reliability of financial and other internal and external reports. To obtain data for the report, OUSD (P&R) sent a reporting template to DOD components which requested the following information: the name of the component, major command/suborganization/directorate, location, in-sourcing rationale, estimated annual savings, DOD function code, occupational series, whether the position was filled, whether it was part of the defense acquisition workforce, and whether the action had a small business impact. 
OUSD (P&R) included a subset of this information in the September 2011 in-sourcing report to Congress, including the component, major command/suborganization/directorate, location, rationale, and function code. To provide the data, both the Air Force and the Department of the Navy obtained data from their respective major commands, while the Army compiled its in-sourcing data at the headquarters level using several data sources originally populated by major commands. The major commands we met with in the Air Force and the Department of the Navy—like the Army headquarters—used various information systems and other sources in compiling their in-sourcing data, since no one data source could provide all the information required. These data sources included personnel databases such as the Defense Civilian Personnel Data System as well as service-specific personnel systems, and the results of reviews of contracts and inventories of contracted services, among other sources. The Air Force required major commands to certify the accuracy of the data they reported to Air Force headquarters on each in-sourcing action. More specifically, the guidance required reviews and certifications by key personnel—including reviews by personnel, contracting, finance, and manpower officials. The guidance included a worksheet which required certifications of all the data contained in the business case analyses which were required for each in-sourcing action. Air Force officials told us that the data contained in the business case analyses were used by major commands to generate the reports on in-sourcing actions submitted by the major commands to Air Force headquarters. 
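GAO's internal control standards cite "edit checks" as one basic control. Applied to the OUSD (P&R) reporting template described above, a field-level completeness check might look like the following sketch; the field names abbreviate the template items listed in the text, and the check itself is illustrative rather than any actual DOD procedure:

```python
# Illustrative edit check: confirm each in-sourcing record supplies every
# field requested by the OUSD (P&R) reporting template before submission.
# Field names are abbreviations of the template items; this is a sketch,
# not DOD's actual certification process.
REQUIRED_FIELDS = {
    "component", "major_command", "location", "rationale",
    "estimated_annual_savings", "function_code", "occupational_series",
    "position_filled", "acquisition_workforce", "small_business_impact",
}

def incomplete_records(records):
    """Return (index, missing-field list) for records with gaps."""
    flagged = []
    for i, rec in enumerate(records):
        missing = [f for f in REQUIRED_FIELDS if rec.get(f) in (None, "")]
        if missing:
            flagged.append((i, sorted(missing)))
    return flagged

records = [
    {f: "x" for f in REQUIRED_FIELDS},       # complete record
    {"component": "Navy", "rationale": ""},  # mostly missing
]
print(incomplete_records(records))
```

A check of this kind catches omissions before data move up the reporting chain; it cannot, by itself, catch a miscategorized rationale, which is why certification by knowledgeable officials matters as well.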
The Department of the Navy also delegated responsibility for ensuring data reliability to its major commands, though it did not establish a certification requirement or issue other guidance to help ensure the reliability of the in-sourcing data it collected and reported to OUSD (P&R) for the in-sourcing report to Congress. Army headquarters officials told us that they had established a general level of accuracy in the in-sourcing data by cross-checking three databases in order to produce the data reported to OUSD (P&R), and by sending the personnel data to major commands to cross-check with reviews of contracted services. However, Army headquarters officials told us only a limited number of commands responded to this data request in time to include their checks in the submission to OUSD (P&R). Army officials told us the department did not establish a formal mechanism or issue guidance to ensure the reliability of the in-sourcing data it reported to OUSD (P&R); Army headquarters officials added that although the data they reported were not of auditable accuracy, they generally reflected commands’ in-sourcing actions. At the OSD level, OUSD (P&R) officials told us that due to time and resource constraints, they did not verify or validate the in-sourcing data they collected beyond checking for obvious errors such as omissions, and performing cross-checks with data from the department’s inventory of inherently governmental and commercial activities. Where disconnects were identified, an OUSD (P&R) official told us, they went back to the DOD components for correction of inconsistencies. However, the official told us that there is no mechanism at the OSD level to verify the accuracy of components’ data, and that this limitation on data verification exists for all activities in the department, not just in-sourcing. 
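The Army's described three-database cross-check amounts to a consistency test: an authorization's reported attributes count as corroborated only when every source agrees. A hedged sketch, with hypothetical source names and records:

```python
# Hypothetical cross-check of in-sourcing authorizations across several
# data sources, in the spirit of the Army's described reconciliation.
# An authorization is flagged when sources disagree on its rationale or
# when any source lacks the record entirely.
def cross_check(sources):
    """sources: dict of source_name -> {auth_id: rationale}."""
    all_ids = set().union(*(s.keys() for s in sources.values()))
    flagged = {}
    for auth_id in sorted(all_ids):
        values = {name: s.get(auth_id) for name, s in sources.items()}
        if len(set(values.values())) > 1:  # disagreement or missing record
            flagged[auth_id] = values
    return flagged

sources = {
    "personnel_db": {"A1": "cost", "A2": "exempt"},
    "manpower_db":  {"A1": "cost", "A2": "inherently_governmental"},
    "contracts_db": {"A1": "cost"},
}
print(cross_check(sources))  # only A2 is flagged
```

A reconciliation like this establishes only a "general level of accuracy," as the Army officials put it: agreement among sources does not prove the shared value is correct, but disagreement reliably identifies records needing review.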
OUSD (P&R) officials told us that DOD intentionally pursued a decentralized in-sourcing process to reduce bureaucratic procedures that would have limited commands’ abilities to make timely in-sourcing decisions. Our work identified either an inaccuracy in the information reported to OUSD (P&R) for the in-sourcing report or concerns about the accuracy of the data included in the report to Congress at four of the nine major commands we met with, as the following examples illustrate:

- The Navy’s Fleet Forces Command acknowledged that while it reported establishing 348 authorizations (out of a total of 354 fiscal year 2010 in-sourcing authorizations) to perform information technology functions that were inherently governmental, these authorizations should have been categorized as exempt from private sector performance for continuity of infrastructure operations.

- Similarly, Space and Naval Warfare Systems Command officials told us that 130 of their reported 131 total fiscal year 2010 in-sourcing authorizations that were identified as inherently governmental were actually in-sourced for career progression reasons.

- Army Medical Command officials told us they did not believe that the data submitted by the Army for DOD’s in-sourcing report accurately indicated the correct number of new authorizations resulting from in-sourcing by Army Medical Command in fiscal year 2010. Command officials told us that because command staff did not have a consistent understanding of when a new authorization fit the definition of in-sourcing, in some cases new authorizations were coded as in-sourcing when they should not have been, and in other cases new in-sourcing authorizations were not coded as such. The officials said that as a result, the data Army headquarters drew on to compile the in-sourcing data contained both under- and over-reporting of in-sourcing actions. 
Nevertheless, they said they believed the data, though not precisely accurate, reflected the scale of in-sourcing activity at the command in fiscal year 2010. The need for accurate data is not unique to in-sourcing decisions. As noted above, GAO’s Standards for Internal Control in the Federal Government identifies verifications and edit checks among the controls that help provide reasonable assurance of the reliability of internal and external reports. Without access to accurate data, decision makers in DOD and Congress may not have reliable information to help manage and oversee DOD’s in-sourcing actions. While section 323 of the FY11 NDAA did not require the in-sourcing report to address whether DOD’s fiscal year 2010 in-sourcing actions aligned with the department’s strategic workforce plans, DOD officials told us that the department had taken some initial steps to align these efforts. Further, DOD officials indicated that DOD’s fiscal year 2010 in-sourcing efforts were generally consistent with the department’s strategic workforce objectives. DOD’s in-sourcing implementation guidance required components to identify contracted services for possible in-sourcing as part of a total force approach to strategic human capital planning, and we and the Office of Personnel Management have identified aligning an organization’s human capital program with its current and emerging mission and programmatic goals as a critical need of strategic workforce planning. However, differences in the types of data used in the in-sourcing report and the workforce plans hinder an accurate assessment of the degree to which DOD’s use of in-sourcing achieved the department’s strategic workforce objectives.

With respect to the steps DOD took to align in-sourcing with its strategic workforce plans, the department identified a goal for the in-sourcing initiative in its March 2010 civilian strategic workforce plan. The plan stated that the goal was to optimize the department’s workforce mix to maintain readiness and operational capability, ensure inherently governmental positions were performed by government employees, and construct the workforce in an effective, cost-efficient manner. In addition, OUSD (P&R) officials noted that they had convened an in-sourcing “community of interest” in 2009 to prepare DOD’s functional communities for the fiscal year 2010 in-sourcing efforts, and briefed DOD component functional community managers on the in-sourcing process. OUSD (P&R) officials responsible for strategic workforce planning and the report on fiscal year 2010 in-sourcing actions told us, however, that they had not established metrics to measure progress toward the stated goal of the in-sourcing effort, and acknowledged that it would be difficult to measure such progress from the available data. Further, DOD officials indicated that because DOD uses different identifiers for workforce planning efforts than it does to track in-sourcing actions, DOD does not have the ability to correlate the underlying data. For example, DOD’s most recent strategic workforce plans used occupational series codes—representing occupations such as budget analyst (0560) or civil engineer (0810)—while the in-sourcing report used function codes, which describe a broad area of work such as logistics or intelligence. DOD officials told us there is no crosswalk between occupational series and function codes, and one occupational series can be found in many different function codes—for example, a budget analyst could work in logistics or professional military education, among other functions. 
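The missing crosswalk is a structural problem, not just a bookkeeping gap: because one occupational series can appear under many function codes (and vice versa), totals tallied in one scheme cannot be recovered from the other. A toy illustration, using the series and function labels mentioned in the text with invented assignments:

```python
# Toy data showing why occupational-series totals cannot be derived from
# function-code totals: the relationship is many-to-many. The assignments
# below are invented for illustration.
from collections import defaultdict

authorizations = [
    {"series": "0560", "function": "logistics"},           # budget analyst
    {"series": "0560", "function": "military_education"},  # budget analyst
    {"series": "0810", "function": "logistics"},           # civil engineer
]

functions_by_series = defaultdict(set)
for auth in authorizations:
    functions_by_series[auth["series"]].add(auth["function"])

# Series 0560 appears under two different function codes at once, so a
# per-series workforce target cannot be checked against per-function data.
print(sorted(functions_by_series["0560"]))  # ['logistics', 'military_education']
```

Without either a record-level link between the two coding schemes or a shared identifier on each authorization, aggregates in one scheme cannot be decomposed into the other.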
Though they were not published in the report to Congress, the data military departments reported to OUSD (P&R) included occupational series, but those data are limited in the extent to which they can be used to measure progress against the strategic workforce plans. For example, the non-acquisition workforce plans did not contain specific workforce targets for in-sourcing. Similarly, the acquisition workforce plan did not contain workforce targets by occupational series, but instead outlined targets for increasing acquisition career fields, which consist of many overlapping occupational series. For example, four different career fields—including the “test and evaluation” and “production, quality & manufacturing” career fields—contain the general engineer (0801) occupation. Thus, the data components provided to OUSD (P&R) for the in-sourcing report also could not be compared with the in-sourcing targets contained in the acquisition community workforce plan. DOD officials stated that they believe the department’s fiscal year 2010 in-sourcing actions were consistent with the broad goals outlined in their 2010 workforce plans, and had the effect of freeing up funds for higher-priority areas because of cost efficiencies, and of reducing risks associated with contractors performing inherently governmental or closely associated with inherently governmental functions. However, without greater alignment between the in-sourcing data and strategic workforce plans, decision makers in DOD and Congress have limited information about the extent to which in-sourcing actions furthered the department’s strategic workforce goals. In-sourcing is one tool DOD can use to balance its workforce mix among DOD civilians, military personnel, and contractors to help ensure it has the right balance of in-house capabilities to perform its mission and reduce the risk of over-reliance on its contractor workforce. 
DOD stated in its September 2011 report to Congress that its fiscal year 2010 in-sourcing decisions helped the department achieve these objectives. DOD reported on its fiscal year 2010 in-sourcing actions as Congress required, listing the creation of nearly 17,000 new civilian authorizations as a result of in-sourcing by DOD components. The report also listed the DOD component taking the in-sourcing action and the basis and rationale for each action. However, DOD and the military departments took only limited steps to ensure that the report data, such as the number of new in-sourcing authorizations in each command and the stated rationale for the actions, were reliable. In some instances, we found the data submitted by the major commands to be inaccurate due to insufficient mechanisms for validating the reliability of the data. Without greater assurance of data reliability, the report itself, as well as any data DOD may continue to collect on its ongoing in-sourcing actions in the future, may have limited utility as a tool to facilitate oversight by decision makers in both DOD and Congress. Likewise, the data collected on in-sourcing could not be used to measure progress toward the department’s overall goal for its in-sourcing initiative according to its strategic workforce plans. The lack of alignment between strategic-level workforce plans and the fiscal year 2010 in-sourcing data and the lack of metrics to measure progress against strategic workforce objectives limit decision makers’ insight into the extent to which in-sourcing in fiscal year 2010 strengthened the DOD workforce in key areas. 
To address these issues, we recommend that the Secretary of Defense take the following two actions:

- To enhance insights into and facilitate oversight of the department’s in-sourcing efforts, direct the Under Secretary of Defense for Personnel and Readiness to issue guidance to DOD components requiring that the components establish a process to help ensure the accuracy of any data collected on future in-sourcing decisions.

- To improve DOD’s strategic workforce planning, direct the Under Secretary of Defense for Personnel and Readiness to better align the data collected on in-sourcing with the department’s strategic workforce plans and establish metrics with which to measure progress in meeting any in-sourcing goals.

In commenting on a draft of this report, DOD partially concurred with our two recommendations. DOD’s comments are reprinted in appendix III. In written comments, DOD stated that there was nothing technically incorrect with our statements and findings. DOD noted that in-sourcing is one of many tools managers can use to shape the department’s workforce, and has enabled managers throughout the department to enhance internal capabilities, regain control and oversight of mission-critical functions, mitigate risks associated with over-reliance on contracted services, and generate efficiencies through resource realignment. DOD also stated, however, that the department was concerned that the challenges and problems identified in our report were not solely unique or attributable to in-sourcing, and that a lack of clarification on this point might unfairly cast unwarranted criticism on the use of in-sourcing as a tool available to government managers. We agree, and have noted in our report that the need for reliable data is not unique to in-sourcing decisions. 
However, while the challenges identified in our report regarding data reliability and alignment of reported data with strategic workforce plans may not be unique to in-sourcing, they can pose problems for evaluating the effects of in-sourcing as a tool for workforce management. DOD partially concurred with our first recommendation, to require components to establish a process to ensure the accuracy of in-sourcing data collected going forward. DOD stated that the challenges to data accuracy identified in our report are not unique to manpower requirements and billets established as a result of in-sourcing contracted services, adding that because the challenges are not unique to in-sourcing, they should not call into question the fundamental value and efficacy of in-sourcing. Our report does not call the value of in-sourcing into question. However, we believe that despite challenges to the accuracy of DOD data in other areas, reliable data on in-sourcing are necessary for oversight by decision makers in DOD and Congress. The department also noted that because time-sensitive in-sourcing decisions must often be made at the command or installation level, any certification and validation process should occur at that level. We agree and, as we stated in our recommendation, believe that the department should require that components establish a process to help ensure the accuracy of in-sourcing data, which does not preclude certification and validation by commands or installations. DOD also partially concurred with our second recommendation, to better align the data collected on in-sourcing with the department’s strategic workforce plans and establish metrics with which to measure progress in meeting any in-sourcing goals. The department stated that it has worked to align in-sourcing and strategic workforce planning efforts and that in-sourcing is one of many tools available to help close competency gaps and meet strategic workforce planning goals. 
However, the department further stated that in-sourcing should not be limited to areas identified in strategic workforce plans. We do not suggest in our report that in-sourcing should be limited to areas identified in strategic workforce plans, but believe that the effect that in-sourcing has in helping to achieve strategic workforce goals should be identified and reported as part of the oversight of the department’s strategic workforce management. DOD further stated that objectively measuring in-sourcing outcomes with traditional workload or personnel metrics is challenging because of unique, location-specific conditions related to missions, functions, and operating environments. In that regard, as we state in our report, DOD officials acknowledged that they had not established metrics to measure progress against the in-sourcing goal in the department’s most recent strategic workforce plan and that it would be difficult to use the available data to assess such progress. However, as our prior work has noted, a key principle of strategic workforce planning is monitoring and evaluating progress toward human capital goals. We note that without any metrics and measurements indicating the extent to which in-sourcing helped the department make progress toward strategic workforce goals, decision makers in DOD and Congress will be unable to assess the effect of the department’s in-sourcing actions in comparison with other actions it may take to manage the size and composition of the total workforce. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact us at (202) 512-3604 or [email protected], or (202) 512-4841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To evaluate the extent to which the Department of Defense (DOD) reported on the items required by section 323 of the National Defense Authorization Act (NDAA) for Fiscal Year 2011, we reviewed DOD’s report on its fiscal year 2010 in-sourcing actions and compared it with the items specifically required by the legislation. Specifically, we ascertained the extent to which DOD reported on: (1) the agency or service of the department involved in the decision, (2) the basis and rationale for the decision, and (3) the number of contractor employees whose functions were converted to performance by DOD civilians. To better understand the data DOD reported, we reviewed DOD guidance on the in-sourcing decision-making process as well as statutes and regulations relating to in-sourcing, and met with officials of the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD (P&R)) responsible for preparing the report, as well as officials in the departments of the Army, Navy, and Air Force responsible for submitting data for the report to OUSD (P&R). We focused our work on the military departments because together they constituted the majority of in-sourcing actions in fiscal year 2010. We analyzed the data contained in the report to identify patterns in the in-sourcing actions of the military departments, and met with representatives of each military department and the selected major commands to identify the reasons for those patterns. We used these data to portray the distribution of in-sourcing actions across the military departments and other DOD agencies, as well as the distribution of in-sourcing rationales in the military services and within certain major commands. 
For the purposes of this review, we selected a non-probability sample of commands from each military service, which included at a minimum the two largest commands in each service by volume of in-sourcing actions in fiscal year 2010. The sample of commands is not generalizable to all military department major commands. To determine the process DOD used to prepare the report and the extent to which the department assured itself of the reliability of the data, we reviewed our prior work on standards for internal control in the federal government. We also reviewed DOD guidance on the in-sourcing decision process. We analyzed the data contained in DOD’s report to identify patterns in the in-sourcing actions of the military departments, and met with officials of OUSD (P&R) in charge of preparing the report, as well as officials in the three military departments responsible for submitting in-sourcing data to OUSD (P&R), to identify the reasons for those patterns. As previously noted, we focused our work on the military departments because together they constituted the majority of in-sourcing actions in fiscal year 2010. We obtained and reviewed the in-sourcing data submitted by the military departments, and compared these data to the data in the report submitted to Congress. We also met with selected major commands to determine their processes for assuring the reliability of the data they generated on in-sourcing actions, as well as with certain other major commands that had significant in-sourcing actions. We did not independently verify the data submitted for use in the report. We used these data to portray the distribution of in-sourcing actions across the military departments and other DOD agencies, as well as the distribution of in-sourcing rationales in the military services and within certain major commands. 
Although we found problems with some of the command-level data and are making a recommendation to this effect, we found the data to be sufficiently reliable for the purposes of providing broad percentages about in-sourcing actions. To determine the extent to which DOD’s fiscal year 2010 in-sourcing actions were aligned with the department’s recent strategic workforce plans, we reviewed our and the Office of Personnel Management’s prior work on strategic workforce planning. We compared the data in the report on fiscal year 2010 in-sourcing actions and in-sourcing data submitted by the three military departments with the department’s most recent strategic workforce plans (specifically, the 2009 update to the 2006-2010 strategic workforce plans). We also interviewed officials in OUSD (P&R) responsible for preparing both the in-sourcing report and the strategic workforce plans, and officials in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics’ Office of Human Capital Initiatives responsible for the acquisition community’s strategic workforce plans. 
DOD organizations we contacted during audit work included the following:

In the Office of the Secretary of Defense:
Office of the Under Secretary of Defense (Personnel & Readiness)
Office of the Under Secretary of Defense (Comptroller)
Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics)

In the Department of the Air Force:
Office of the Assistant Secretary of the Air Force (Manpower & Reserve Affairs)
Headquarters Air Force
Air Force Materiel Command

In the Department of the Army:
Office of the Assistant Secretary of the Army (Manpower & Reserve Affairs)
Army Installation Management Command
Army Medical Command

In the Department of the Navy:
Office of the Assistant Secretary of the Navy (Manpower & Reserve Affairs)
Office of the Chief of Naval Operations
Headquarters Marine Corps
Navy Fleet Forces Command
Naval Sea Systems Command
Space and Naval Warfare Systems Command
Marine Corps Systems Command

We conducted this performance audit from May 2011 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. While section 323 of the FY11 NDAA did not require DOD to report cost data on in-sourcing and DOD’s September 2011 report to Congress did not include any cost-related information, DOD issued guidance to components on the methodology to use when making cost-based in-sourcing decisions, and the military departments collected and reported estimated cost information on their respective in-sourcing actions to varying degrees. 
DOD’s May 2009 in-sourcing guidance requires DOD components, in the case of work which is not determined to be inherently governmental or exempt from private sector performance and which can be performed by DOD civilians, to conduct a cost analysis to determine whether DOD civilian employees or the private sector would be the most cost-effective provider. In January 2010, DOD issued guidance on the methodology components should use to estimate the costs of in-sourcing actions when making cost-based in-sourcing decisions. Officials in the military departments told us that although the guidance was officially released in January 2010, the costing rules were available previously and so were used throughout fiscal year 2010. We found that the military departments took different approaches to collecting and reporting cost-related data associated with their fiscal year 2010 in-sourcing decisions. Specifically, the Air Force collected and reported cost estimate data for each in-sourcing action—including cost-based, inherently governmental, and exempt functions—to OUSD (P&R). The Department of the Navy collected and reported cost estimate data to OUSD (P&R) for most cost-based in-sourcing actions and some actions that were not cost-based. Specifically, the Navy reported cost estimate data on some, but not all, in-sourcing actions for functions that were deemed inherently governmental or exempt from private sector performance. The Army did not report any estimated cost data for in-sourcing decisions to OUSD (P&R). Brenda S. Farrell, (202) 512-3604 or [email protected]. Belva M. Martin, (202) 512-4841 or [email protected]. In addition to the contacts named above, key contributors to this report were Marion Gatling, Assistant Director; Randy DeLeon; Tim DiNapoli; Simon Hirschfeld; John Krump; Ramzi Nemo; Terry Richardson; and Erik Wilkins-McKee.
The Department of Defense (DOD) relies on contractors for varied functions, and obligated about $200 billion in fiscal year 2010 for contracted services. In-sourcing, moving contracted work to performance by DOD employees, has been one tool through which DOD managed its workforce. The National Defense Authorization Act for Fiscal Year 2011 required DOD to report on its fiscal year 2010 in-sourcing decisions and required GAO to assess DOD’s report. The act required DOD to report, for each decision, the agency or service involved, the basis and rationale for the decision, and the number of contractor employees in-sourced. GAO assessed the report against these requirements and examined how DOD prepared the report and assured itself of the data’s reliability, and the extent to which the in-sourcing actions were aligned with DOD’s strategic workforce plans. GAO reviewed the in-sourcing report, examined in-sourcing guidance, reviewed DOD’s recent strategic workforce plans, and interviewed appropriate department officials. DOD reported on two of three issues required by law: the component involved with each of its fiscal year 2010 in-sourcing actions and the rationale for each action. However, DOD did not report the number of contractor employees whose functions were in-sourced because, DOD officials said, the department does not have these data. Specifically, the department noted, in its report to Congress, that it contracts for services and does not hire individual contractor employees. Instead, DOD reported the number of new civilian authorizations created due to in-sourcing. Congress has separately required DOD to report the number of contractor employees performing services for DOD, expressed as full-time equivalents, as part of its inventory of activities performed under contracts for services. In its in-sourcing report, DOD said that efforts to comply with this additional requirement may in the future help inform the number of contractor full-time equivalents in-sourced. 
The Office of the Under Secretary of Defense for Personnel and Readiness (OUSD (P&R)) requested information from DOD components on fiscal year 2010 in-sourcing actions to produce its report, and the military departments and OUSD (P&R) took varying, and in some instances limited, approaches to ensuring the data’s reliability. Additionally, some of the commands GAO contacted made errors in reporting in-sourcing data. For example, 348 of 354 new in-sourcing authorizations by the Navy’s Fleet Forces Command were categorized as inherently governmental when they should have been categorized as exempt from private sector performance for continuity of infrastructure operations. Federal internal control standards state that data verification helps provide management with reasonable assurance of achieving agency objectives, including compliance with laws. Without accurate data, decision-makers in DOD and Congress may not have reliable information to help manage and oversee DOD in-sourcing. While the mandate did not require the in-sourcing report to align with DOD’s strategic workforce plans, it was unclear to what extent the in-sourcing actions aligned with DOD’s plan due to differences in the types of data used in the in-sourcing report and the most recent workforce plan, and the absence of metrics to measure the in-sourcing goal established in the plan. DOD took some steps toward aligning these efforts, such as establishing a goal for in-sourcing in its most recent strategic workforce plan, which was issued in March 2010. Additionally, OUSD (P&R) officials said that the in-sourcing actions furthered DOD’s strategic workforce objectives, but acknowledged they had not established metrics to measure against the in-sourcing goal, which was to, among other things, optimize the department’s workforce mix to maintain readiness and operational capability and ensure inherently governmental positions were performed by government employees. 
Additionally, the strategic workforce plans coded jobs by occupational series, such as budget analyst, while the in-sourcing report used function codes indicating broad areas of work, such as logistics. DOD officials told GAO there is no crosswalk between the two. GAO has previously reported that strategic workforce planning includes aligning human capital programs with programmatic goals. Without metrics and due to the differences in the data used, DOD and Congress may have limited insight on the extent to which in-sourcing actions met strategic workforce goals. GAO recommends that, for future in-sourcing actions, DOD (1) issue guidance to components on verifying in-sourcing data, and (2) better align in-sourcing data with strategic workforce plans and establish metrics to measure progress against in-sourcing goals. DOD partially concurred with the recommendations, but noted that the challenges identified in GAO’s report are not unique to in-sourcing. GAO agrees, but believes actions are necessary to improve oversight of DOD’s in-sourcing.
Under Title II of the Food for Peace Act, the United States provides agricultural commodities to address famine and food crises in foreign countries. Between fiscal years 2007 and 2012, the United States spent about $9.2 billion to provide emergency food aid to 57 countries through its cooperating sponsors, such as the World Food Program, Catholic Relief Services, and Save the Children Federation. The United States annually spent between $1.1 billion and $2.0 billion to purchase and deliver about 1 to 2 million metric tons of emergency food aid to recipients in foreign countries. U.S. agencies spend the largest percentage of food aid funds to procure and transport commodities within the United States, and the second largest on the transport, storage, and handling of commodities in recipient countries. Figure 1 illustrates this distribution of spending for fiscal year 2012. USAID and cooperating sponsors spent the third largest percentage of funds on ocean freight contracts to transport commodities across the ocean. USAID directed the remaining funds to cooperating sponsors to develop and manage programs and deliver commodities from discharge ports to landlocked recipient countries. Of the emergency food aid funds directed to cooperating sponsors, the World Food Program administered over 80 percent, while other cooperating sponsors administered the rest. USAID and cooperating sponsors contracted with seven ocean freight forwarders to act on their behalf in managing ocean transportation logistics throughout the shipping process. USDA and USAID jointly manage Food for Peace emergency food aid procurement. USDA procures commodities for the program, and USAID or its cooperating sponsors procure ocean transportation for the commodities. 
The agencies use a three-phase procurement process: (1) acquisition planning (reviewing and responding to food requests), (2) contract formation (procuring commodities and transportation), and (3) contract administration (delivering commodities and overseeing the process). See figure 2 for an overview of the process to procure packaged and bulk commodities and ocean freight. USAID’s standard procurement process for emergency food aid is to procure and ship commodities in response to a request from cooperating sponsors for a specific food emergency. However, USAID may also preposition commodities in domestic and overseas warehouses located near regions of the world with historically high emergency food aid needs. Using this approach, USAID requests commodities from USDA and ships them to a warehouse before a need for the commodities is identified. For the commodities shipped to warehouses, ocean freight vendors submit invoices directly to USAID for payment, and USDA retains ownership of the commodities until cooperating sponsors pick up the commodities from the warehouses, according to USDA. As a result, commodities in these warehouses remain on USDA’s accounting ledger as inventory assets until cooperating sponsors make requests and take possession of the commodities for their programs. Since USAID contracts for the management of its domestic and overseas warehouses, USDA requires USAID to provide it with information on the type, amount, and value of commodities in each warehouse’s inventory. In addition, USAID can divert commodities originally destined for a prepositioned warehouse to respond to a cooperating sponsor’s emergency food aid request. When diversion occurs, instead of delivering commodities to a prepositioned warehouse as planned, ocean freight vendors deliver to an alternate port that is specified on the ocean freight contract or to a cooperating sponsor at the same foreign port as the warehouse. 
As a result, the ship, foreign port, ocean bill of lading, and cost of ocean freight can change from what was originally agreed to under the initial ocean freight contract. USDA’s Agricultural Marketing Service, which procures commodities for USDA’s domestic food aid programs, funded most of WBSCM’s design and currently manages the system. WBSCM replaced USDA’s ordering, procurement, and inventory systems, some of which USDA considered costly and outdated. These systems did not electronically manage the entire food aid procurement process, provide accurate inventory accounting, or track commodity shipments in real time. USDA also could not use the systems to electronically process invoices and payments. USDA expected WBSCM to integrate supply chain activities of up to 40,000 users, process requests for 4.5 million metric tons of domestic and international food aid each year, manage electronic contracting for commodities and freight, track inventory, pay vendors, and process claims. USDA intended that all stakeholders, including those participating in international food assistance, would use WBSCM as a supply chain management system. To do so, WBSCM uses data entered during earlier steps of the process to complete some of the later steps. Therefore, information needs to be entered in sequential order for the later functions of the system to work correctly. USDA developed the business case for WBSCM in 2003 and awarded the contract for developing the software in October 2006. To identify the international food aid procurement functions, USDA established an interagency project team that included representatives from USDA’s Farm Service Agency (FSA) and Foreign Agricultural Service and USAID’s Food for Peace and Transportation Offices. The interagency team started to develop the system’s technical requirements for the food aid programs in 2007. 
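The sequential-entry dependency described above, in which later WBSCM functions reuse data captured in earlier steps, can be illustrated with a minimal sketch. This is hypothetical Python, not WBSCM code; the step names and record layout are assumptions chosen only to show why entering data out of order, or in an outside system, breaks downstream functions.

```python
# Hypothetical sketch (not WBSCM code): a record whose later steps read
# fields written by earlier steps, so each entry checks its prerequisites.
class MissingPrerequisite(Exception):
    pass

class SupplyChainRecord:
    # Illustrative step order; each step depends on all earlier ones.
    STEPS = ["commodity_order", "freight_contract", "shipment", "invoice"]

    def __init__(self):
        self.data = {}

    def enter(self, step, value):
        idx = self.STEPS.index(step)
        for prereq in self.STEPS[:idx]:
            if prereq not in self.data:
                raise MissingPrerequisite(
                    f"{step!r} requires {prereq!r} to be entered first")
        self.data[step] = value

rec = SupplyChainRecord()
rec.enter("commodity_order", {"commodity": "wheat", "tons": 500})
try:
    # Skipping the freight contract and shipment steps fails, just as
    # invoicing in WBSCM depends on shipment data entered earlier.
    rec.enter("invoice", {"amount": 1_000_000})
except MissingPrerequisite as e:
    print(e)
```

In this sketch, data kept outside the system (as USAID later did with freight awards) would leave the prerequisite fields empty, and the later functions could not complete.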
Agencies began the initial testing phase of WBSCM in November 2009, and USDA began a phased rollout of WBSCM in June 2010, focusing primarily on domestic food aid procurement. USDA and USAID disagree about the usefulness of WBSCM to manage the entire emergency food aid procurement process. Although USDA created WBSCM with input from USAID, the system had deficiencies when it was implemented in April 2011. Since August 2011, USAID has not been using WBSCM as intended to procure ocean freight for bulk commodities, manage commodity inventory in prepositioned warehouses, or track food aid shipments. USAID manages its part of the procurement process using other systems not connected to WBSCM. USDA officials assert that, since March 2012, the agency has made significant improvements to WBSCM that would address many of the problems that led to USAID’s discontinued use of the system’s functions to procure bulk commodity ocean freight, manage commodity inventory, and track food aid shipments. WBSCM had deficiencies when it was implemented in April 2011, so USAID gradually discontinued using it to procure ocean freight for bulk commodities, manage commodities for prepositioned warehouses, and track food aid shipments between August 2011 and June 2012. When USDA, USAID, and several international stakeholders, such as USAID’s cooperating sponsors and freight forwarders, started to use WBSCM, they immediately encountered significant problems. USDA’s Foreign Agricultural Service reported in an April 2011 memo that WBSCM might not be ready to handle complex international food aid procurements involving multiple delivery points and commodities. In June 2011, USAID informed USDA in a letter that its stakeholders found WBSCM time-consuming to use. In addition, USAID stated that the system’s process to procure international bulk ocean freight was not compatible with USAID’s process and recommended that neither agency use WBSCM to procure bulk ocean freight. 
For example, USAID informed USDA that WBSCM could not account for key information—such as current market conditions, available funding, alternate foreign ports, and available ships with sufficient cargo space—that USAID needed to negotiate with vendors when procuring ocean freight for bulk commodities. In addition, USAID and its stakeholders experienced substantial performance problems and indicated that the system was cumbersome and not user friendly. In a July 2011 memo, USDA acknowledged deficiencies in WBSCM and stated it was working to fix problems. For example, the USDA memo acknowledged USAID’s difficulties in using WBSCM to procure bulk freight. Moreover, freight forwarders that manage ocean freight logistics for the cooperating sponsors identified concerns they had about the additional workload WBSCM required of them. For example, they noted in June 2011 that WBSCM required them to enter data historically completed by stevedores (contractors who load or unload commodities) at U.S. load ports, which increased the freight forwarders’ workload and created confusion about what commodity was being delivered, loaded, and unloaded at U.S. load ports. Furthermore, WBSCM’s initial inability to track food aid shipments, such as recording when a shipment had been diverted before the commodities arrived in a foreign location, posed a problem for freight forwarders. As a result, the freight forwarders experienced unnecessary delays in coordinating information with ocean freight vendors and providing these vendors with required instructions for shipping and unloading cargoes. Because of these difficulties in using WBSCM, USAID discontinued using it for certain functions. In August 2011, USAID issued a memo to cooperating sponsors, freight forwarders, ocean transportation providers, and their brokers stating that, effective immediately, all bid offers for bulk ocean freight transportation shipped under the Title II program did not need to be submitted through WBSCM. 
USAID also decided it would not use WBSCM to manage its prepositioned commodity inventories. At that time, a USAID official informed USDA that USAID had a number of issues with WBSCM that needed to be addressed for it to effectively manage prepositioned commodity inventories using the system. The official further stated that until these issues were fixed, USAID would continue to work outside WBSCM to manage these inventories. In response to concerns expressed about WBSCM, USDA and USAID announced the formation of an interagency team in November 2011 to address international stakeholders’ concerns with WBSCM. At that time, the WBSCM program manager acknowledged that USDA had been unable to implement timely solutions for the problems that the international stakeholders had faced and that WBSCM had been designed with insufficient input from the international stakeholders. During November and December 2011, the agencies held meetings to identify needs and priorities, created an action list, and assigned individual USDA and USAID officials to specific tasks on the list. Despite these efforts, USAID and international stakeholders continued to have concerns about using WBSCM. USAID eventually decided in February 2012 that WBSCM should only be used for USDA to procure commodities. As a result, USAID awards ocean freight contracts for bulk and packaged commodities outside of WBSCM. Furthermore, in June 2012, USAID and USDA agreed that freight forwarders did not need to use WBSCM to update ocean freight contracts. Rather, freight forwarders continue to update ocean freight contracts in their own separate systems. USAID and its international stakeholders currently perform most of the functions required to procure bulk ocean freight in systems not connected to WBSCM. For example, USAID receives bulk freight bids via email and negotiates bulk ocean freight contracts directly with vendors. 
According to USDA officials, USDA then manually enters the awarded ocean freight contract information for bulk commodities in WBSCM. In addition, USAID officials said they do not use WBSCM to record changes to ocean freight. As a result, USDA does not receive updates in WBSCM when USAID makes changes to the shipment, or when freight forwarders divert commodities on USAID’s behalf. USAID officials also said they do not use WBSCM to track inventory of prepositioned commodities because key information required to track commodities from the U.S. load port to overseas warehouses is not entered in WBSCM. As we noted above, USAID informed USDA in August 2011 that the agency would continue to track and allocate prepositioned commodity inventory using other systems. Instead of using WBSCM, USAID officials track prepositioned commodity inventory using spreadsheets with information from USAID’s contractors at its prepositioned warehouses. USAID officials send USDA a consolidated spreadsheet, and USDA officials manually enter inventory data in WBSCM to provide food aid information for the Commodity Credit Corporation’s (CCC) quarterly financial statements. USDA officials have stated that, since March 2012, the agency has made significant improvements to WBSCM that address many of these difficulties. During the course of our review, USDA officials stated that several of WBSCM’s shortcomings, such as poor functionality and ease of use, have been addressed. They also noted that USDA’s Foreign Agricultural Service currently uses WBSCM to manage all procurement functions for its international food aid programs and requires stakeholders to manage aspects of the programs, including updating ocean freight information, in WBSCM. In addition, USDA officials noted that they have modified the inventory management function of WBSCM that could be used to track prepositioned inventory. 
They also said that they have modified WBSCM sufficiently to facilitate tracking changes in freight information. For example, these officials said that WBSCM had been modified so that it can capture needed information when a shipment has been divided to ship on multiple vessels. According to USDA officials, the agency would like USAID to resume fully using WBSCM, which would address USDA concerns regarding the lack of current information about shipments after they depart U.S. ports and inventory of prepositioned commodities. USAID officials from the Food for Peace and Transportation Offices indicated in August 2013 that they are currently able to manage their portion of the emergency food aid program without expanding their use of WBSCM. They also indicated that they would consider using WBSCM again if USDA made substantial improvements that addressed their concerns. However, as of February 2014, USAID had not tested WBSCM’s ability to manage prepositioned commodity inventory and track food aid shipments since USDA made changes to the system. Since USAID uses systems outside of WBSCM, USAID and USDA lack complete and accurate information on individual food aid shipments, which, in turn, hinders USDA’s ability to use WBSCM to prepare accurate financial reports and recover U.S. government funds. For example, USAID’s systems cannot provide tracking information on some food aid shipments. In addition, USAID’s current systems for tracking the inventory of commodities that it has in prepositioning warehouses lack sufficient internal controls, according to the USAID Inspector General, thus hindering the agencies’ abilities to verify commodity inventories. USAID’s data collection outside of WBSCM also makes it more difficult for USDA to efficiently file claims to recover U.S. government funds. 
In our work for a recent report on the impact of prepositioning on the timeliness of emergency food aid, we found that some information on emergency food aid shipments in WBSCM could not be used to assess their delivery timeframes. USAID’s Office of Food for Peace informed USDA in a February 2012 letter that USAID would rely on its Transportation Office and the freight forwarders to track and periodically provide information on shipments. In addition, in June 2012 USAID informed freight forwarders that they no longer were expected to update tracking information for emergency food aid shipments into WBSCM. However, in our related GAO report on prepositioning food aid, we found that USAID does not maintain tracking data on food aid shipments that would allow it to assess the timeliness of deliveries. Rather, the freight forwarders maintain tracking data in their own systems, separate from WBSCM, from which USAID can request data. USDA has expressed concerns that it does not have the complete and accurate information it needs, and it is unclear whether USAID has sufficient internal controls over the data that freight forwarders collect on individual shipments of emergency food aid. GAO’s Standards for Internal Control require that agencies implement appropriate control activities, such as ensuring that transactions are recorded in a complete and accurate manner. USDA’s FSA issued a memo in April 2012 outlining the implications of USAID’s decision in February 2012 to not use WBSCM to track food aid shipments. Specifically, FSA stated that administering activities outside of WBSCM would increase the number of errors, be more labor intensive, and create additional strain on FSA’s resources. FSA also stated that by not using these features of WBSCM, USAID would limit the government’s ability to track commodities from initial order through final delivery. 
In addition, FSA officials have indicated that USAID’s decision to stop requiring freight forwarders to enter updated freight award information in WBSCM affects the accuracy of ocean freight information in WBSCM, such as the price paid to transport commodities, the foreign destination port, and the vessel used. FSA officials noted that this is information they need to prepare reports on food aid shipments. In addition, USAID has not provided guidance on the information that it and USDA need collected on individual shipments. A recent USAID inspector general report and independent audits of the Commodity Credit Corporation (CCC) raise concerns that USAID’s and USDA’s current systems for tracking commodities that USAID has prepositioned in warehouses may have insufficient internal controls, hindering the agencies’ abilities to verify commodity inventories. Similarly, we found that USAID’s commodity inventory spreadsheets were missing data and contained incorrect formulas used to compile inventory information. A January 2013 USAID inspector general report found that internal control weaknesses in USAID’s prepositioning inventory records caused irreconcilable discrepancies between the agency’s records and warehouse inventory records. According to USAID’s Automated Directives System, contract officer representatives must ensure that the cooperating sponsor is performing in accordance with the terms contained in the contract. However, the inspector general found that USAID’s Transportation Office, foreign warehouses, and others did not have an adequate system in place to consistently identify and resolve discrepancies in inventory records. Specifically, the audit found that the amount of incoming and outgoing food at the Djibouti warehouse could not be reconciled within the threshold established in the warehouse contract. 
The audit stated that commodity differences can result from changes in shipments or commodity losses. It also noted that discrepancies in records resulted from a number of factors, including incomplete records and incomplete survey reports conducted by an independent contractor monitoring warehouse activities. According to the report, reconciliations throughout the supply chain would improve the system’s reliability and allow USAID to evaluate the performance of warehouse contractors accurately. The inspector general recommended that USAID implement a system of internal controls to reconcile its records regularly with reports from warehouses and outgoing shipment reports to identify and resolve differences in a timely manner. Furthermore, independent audits of CCC’s financial activities showed instances where USDA had insufficient internal controls to verify commodity information that it received from USAID; as a result, the independent audits reported significant deficiencies in CCC’s financial statements. GAO’s Standards for Internal Control state that control activities, such as reviews by management at the functional or activity level, are an integral part of an entity’s accountability for government resources. However, the audits reported USDA’s controls over foreign prepositioned inventory distributed by USAID as a significant deficiency in CCC’s financial statements for fiscal years 2012 and 2013. Specifically, the fiscal year 2012 audit noted that the process USDA used to determine the amount of inventory on hand at foreign warehouses consisted only of an email sent from USAID to USDA outlining the levels of inventory by commodity. It found that CCC had no evidence of adequate controls or policies and procedures over the warehouse inventory information to determine whether the balances reported by USAID were reasonable prior to entries by CCC. 
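The reconciliation the inspector general recommended, comparing warehouse records against shipment reports within a contractual tolerance, can be sketched in a few lines. This is a minimal illustration, not USAID's or its contractors' actual system; the function, tolerance figure, and tonnages are assumptions made for the example.

```python
# Hypothetical sketch of reconciling one commodity's warehouse records against
# shipment reports, within an assumed contractual loss threshold.

TOLERANCE = 0.005  # assumed 0.5 percent loss threshold set in the warehouse contract

def reconcile(received_mt, dispatched_mt, counted_on_hand_mt, tolerance=TOLERANCE):
    """Return (discrepancy in metric tons, whether it falls within the threshold)."""
    expected_on_hand = received_mt - dispatched_mt
    discrepancy = expected_on_hand - counted_on_hand_mt
    within = abs(discrepancy) <= tolerance * received_mt
    return discrepancy, within

# Example: 10,000 MT received, 9,400 MT dispatched, 520 MT counted on hand.
discrepancy, within = reconcile(10_000, 9_400, 520)
# An 80 MT discrepancy exceeds the 50 MT (0.5 percent) allowance, so these
# records would be flagged for investigation rather than accepted.
```

Run regularly against each warehouse's incoming and outgoing reports, a check of this kind would surface differences in a timely manner, as the inspector general recommended.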
During the course of our review, USDA officials responsible for entering inventory information into WBSCM to produce CCC’s financial reports confirmed that they cannot verify the inventory information they receive from USAID. In addition, we found that some of the spreadsheets that USAID used to compile inventory information were missing data and contained incorrect formulas. We reviewed spreadsheets for 6 warehouses where USAID prepositions commodities and found errors in 3 of them. For example, we found that the spreadsheet that the contractors used to track inventory at the prepositioning warehouse in Djibouti contained errors. It showed eight instances in which contractors documented commodities leaving the warehouse before the same commodities were documented as entering the warehouse. We also found two instances in which contractors recorded commodities as leaving the warehouse 4 months after they had actually left. To determine inventory levels, contractors at each warehouse submit inventory information to USAID, which consolidates the information into a single spreadsheet that it provides to USDA. As a result, inaccuracies in a single warehouse inventory spreadsheet create inaccuracies in USAID’s consolidated spreadsheet. USDA officials enter the information provided by USAID into WBSCM to generate various reports, including CCC’s quarterly financial statements for the Office of Management and Budget. USAID’s data collection outside WBSCM also impedes USDA’s process to file claims against ocean freight vendors and recover U.S. funds. When an ocean freight vendor loses or damages a portion of the commodities loaded onto its vessels, the vendor is liable for the funds it received to transport that portion of the shipment, in addition to the value of the lost or damaged commodities. According to USDA officials, USDA filed 131 claims against vendors in fiscal year 2012, valued at $1.2 million. 
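A consistency check of the kind that would catch the out-of-order entries described above, commodities documented as leaving a warehouse before they were documented as entering it, might look like the following sketch. The lot identifiers, dates, and tonnages are invented for illustration and do not reflect the actual spreadsheet layout.

```python
from datetime import date

# Hypothetical warehouse transaction records; lot A1's outgoing entry is
# dated before its incoming entry, the class of error described above.
records = [
    {"lot": "A1", "type": "in",  "date": date(2012, 5, 10), "mt": 500},
    {"lot": "A1", "type": "out", "date": date(2012, 5, 2),  "mt": 500},  # out before in
    {"lot": "B2", "type": "in",  "date": date(2012, 6, 1),  "mt": 300},
    {"lot": "B2", "type": "out", "date": date(2012, 6, 20), "mt": 300},
]

def out_before_in(records):
    """Return lots whose earliest outgoing entry predates their earliest incoming entry."""
    earliest = {}
    for r in records:
        key = (r["lot"], r["type"])
        if key not in earliest or r["date"] < earliest[key]:
            earliest[key] = r["date"]
    return sorted(
        lot for lot in {r["lot"] for r in records}
        if (lot, "out") in earliest
        and (lot, "in") in earliest
        and earliest[(lot, "out")] < earliest[(lot, "in")]
    )

flagged = out_before_in(records)  # lot A1 would be flagged for review
```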
Since USAID and cooperating sponsors use freight forwarders to manage logistics for ocean freight services, USDA requires information from USAID, cooperating sponsors, and freight forwarders to process ocean freight claims. USDA’s ability to file claims against ocean freight vendors depends on having accurate information about changes that USAID makes to ocean freight shipments after contracts are awarded. USAID can authorize changes in a shipment that can result in changes to both deliveries and the amounts USAID may pay to an ocean freight vendor. For example, USAID may approve the substitution of one ocean freight vessel for another or split a shipment onto two or more vessels. We found that USAID diverted 60 percent of shipments destined for prepositioning warehouses between fiscal years 2007 and 2012, potentially incurring freight rates different from what the vendor would have charged to deliver the commodities to the original destinations. According to USDA officials, since freight forwarders do not regularly update freight information in WBSCM, USDA officials must do additional work to determine the changes made to each freight award. Additionally, USDA officials process ocean freight claims using information in WBSCM, but they must themselves upload the information they need to process claims, since USAID and freight forwarders do not use the system to track shipments or store freight information. USDA’s ability to file claims against ocean freight vendors also depends on having accurate information in WBSCM on ocean freight shipment details, confirmation of each delivery, and the amounts that USAID pays for ocean freight. According to USDA officials, WBSCM did not have the capability to process ocean freight claims when the system was first implemented, so USDA requested that USAID, cooperating sponsors, freight forwarders, and others submit claims documentation outside of WBSCM. 
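The claim arithmetic described earlier, in which a vendor that loses or damages part of a shipment owes the freight it was paid for that portion plus the value of the commodities, can be illustrated with a hypothetical example. All quantities and prices below are invented, and an actual claim would also turn on contract terms this sketch ignores.

```python
# Hypothetical illustration of an ocean freight claim calculation: freight
# prorated to the lost portion of the shipment, plus the lost commodities' value.

def claim_value(lost_mt, shipped_mt, freight_paid, commodity_value_per_mt):
    """Freight prorated to the lost tonnage, plus the value of the lost commodities."""
    prorated_freight = freight_paid * (lost_mt / shipped_mt)
    return prorated_freight + lost_mt * commodity_value_per_mt

# 50 MT lost out of a 5,000 MT shipment; $600,000 freight paid; commodities at $450/MT.
amount = claim_value(50, 5_000, 600_000, 450)  # $6,000 freight + $22,500 commodity value
```

The example also shows why accurate freight amounts matter: if USAID pays a different rate after a diversion or vessel substitution and the change is not recorded, the prorated freight portion of the claim would be computed from the wrong figure.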
Although USDA officials say that claims could have been processed through WBSCM since November 2012, USAID and freight forwarders continue to provide documents via email and regular mail instead of uploading them into WBSCM. USDA’s guidance also required freight forwarders to provide freight contracts and discharge/delivery surveys, among other documents. In addition, the guidance required cooperating sponsors to sign over their rights so that USDA could pursue a claim on their behalf. To pursue a claim, USDA also needs accurate information on the amount USAID actually paid. During the course of GAO’s work, USAID contacted USDA and made arrangements to provide the necessary documents to assist USDA in its claims process. As of February 2014, USAID had provided USDA with at least 190 freight vouchers to process ocean freight claims since July 2013. USDA and USAID have been unable to collaborate effectively to resolve their disagreement on the suitability of WBSCM for emergency food aid procurement. In prior work, we have identified practices that can enhance and sustain collaboration among federal agencies, thereby improving performance and results. Although USDA’s and USAID’s collaborative efforts have incorporated some of these elements to develop WBSCM, they have not incorporated others. Nevertheless, an upcoming functional upgrade of WBSCM offers the agencies an opportunity to make substantial, mutually agreeable changes to WBSCM. As we mentioned above, the agencies held weekly meetings to identify needs and priorities, created an action list, and assigned relevant individual USDA and USAID officials to specific tasks on the list between November and December 2011. 
However, by March 2012, the agencies could not agree on how to move forward with using WBSCM to procure bulk ocean freight, and USAID has not tested the changes that USDA has made to the system since March 2012. Specifically, USDA and USAID do not agree on the roles and responsibilities of key participants in the international food aid procurement process, do not share a defined outcome for their collaboration, and do not have a written agreement stating how the agencies will collaborate. Clarify roles and responsibilities. While USDA and USAID have designated some roles and responsibilities between themselves, they do not agree on the role and responsibilities that the freight forwarders should have in entering and updating information in WBSCM. Our prior work found that a key factor of effective collaboration is that agencies clarify the roles and responsibilities of those participating in the collaborative effort. During the design phase of WBSCM, the agencies assigned the freight forwarders responsibility for entering information about individual shipments and updating the information whenever there were changes. However, freight forwarders we interviewed said they were not consulted by either agency, and, after WBSCM was rolled out, they expressed concerns about the amount of information they needed to update, according to agency documents and officials. They noted that entering such information into WBSCM was redundant with their own systems, cumbersome, and time consuming, given the system problems WBSCM was experiencing. As we previously noted, USAID informed USDA in June 2012 that it would no longer require the freight forwarders to update shipment information in WBSCM for USAID’s international food aid shipments. The lack of updated information has created challenges for USDA in processing claims and in ensuring that it has accurate inventory information. Agree on common goals. 
USDA and USAID established some joint action items for WBSCM, but they do not agree on a common goal for the system’s further development and use. Our prior work has found that most experts we interviewed in collaborative resource management emphasize the importance of collaborative groups having clear goals. Experts noted that participants may not have the same interests, but establishing common goals provides them a reason to participate in the process. After the rollout, as problems arose, USDA and USAID agreed that USAID would suspend using the system for bulk freight procurement, and then for tracking prepositioned food aid inventory. They also agreed that freight forwarders would not be required to update freight information as changes in the shipments occurred. While the agencies collaborated to resolve some problems, such as creating a role for the stevedores in WBSCM, they have not resolved whether USAID would resume trying to use WBSCM fully for bulk freight procurement, tracking and updating information on shipments, and recording prepositioned food aid inventory. As we noted above, as of February 2014, USAID had not agreed to test the changes that USDA has made to the system since March 2012 to improve its performance and functionality to procure emergency food aid. In its response to this report, USAID stated that it would test the international procurement functions of WBSCM. Written collaborative agreements. USDA and USAID have not documented how they would further develop and use WBSCM, or how they would resolve their outstanding issues. Our prior work found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively. As we have previously reported, having a clear and compelling rationale to work together is a key factor in successful collaborations. Agencies can overcome significant differences when such a rationale and commitment exist. 
USDA and USAID have a memorandum of understanding (MOU) concerning the emergency food aid program that FSA and USAID signed in 1991. FSA and USAID have drafted, but not yet signed, an updated MOU that USDA’s General Counsel is currently reviewing. According to USDA officials, both MOUs generally cover USAID’s and USDA’s roles and responsibilities in carrying out emergency food aid operations. However, neither MOU specifies either agency’s roles and responsibilities, or the desired outcome, regarding their collaboration on WBSCM. USDA officials said in August 2013 that they have addressed several of WBSCM’s performance and functional issues; however, they note that reconfiguring WBSCM to fully address USAID’s needs will require additional technical and functional upgrades. In September 2013, USDA began a technical upgrade of WBSCM to replace some system components and software that are obsolete and increasingly difficult to maintain. According to USDA officials, the technical upgrade should also result in some benefits for users, such as improved performance, and allow users to better segregate and identify the cost of split shipments. In addition, the technical upgrade will enable users to access the system using the most current versions of Internet Explorer, a widely used Internet browser. A USDA official cautioned, however, that because USDA has just begun making this upgrade, it cannot be sure of the full scope of the improvements that will occur. The technical upgrade will be the foundation for a more extensive functional upgrade that USDA plans to conduct in fiscal years 2015 through 2017, according to USDA officials. To make WBSCM a more effective system, USDA intends to change the functionality of WBSCM to align with current commercial business practices. 
Therefore, as part of the functional upgrade effort, USDA plans to examine domestic and international commercial food business trends and best practices and the extent to which its food aid procurement processes reflect them. According to USDA officials, the planned functional upgrade would provide an opportunity to fully address USAID’s concerns. For example, the functional upgrade could allow USDA to re-configure WBSCM so that it fits USAID’s bulk ocean freight procurement needs. As mentioned previously, USAID officials told us that they are willing to use WBSCM to procure bulk ocean freight and manage inventory if USDA is able to address the deficiencies that they and others have identified. USDA and USAID have joint responsibility to carry out the Food for Peace program and respond to global emergency food crises. Toward that end, USDA developed a web-based system with USAID’s input to manage the procurement of food aid. However, because of WBSCM’s deficiencies, USAID discontinued use of the system starting in 2011. The agencies are currently at an impasse. USDA has made modifications to the system, but it is unclear if these would fully respond to USAID’s concerns, and USAID has not tested the modifications. Their continued disagreement on the usefulness of WBSCM is hindering USDA’s ability to prepare accurate financial reports and efficiently file claims to recover funds. USDA’s planned major upgrade of the system affords USDA and USAID an opportunity to revisit their collaboration on WBSCM and improve the system so that it meets the needs of all users and to ensure that USDA has reliable and accurate data to prepare its financial statements and account for U.S. government funds. To improve the efficiency and accountability of the emergency food aid procurement process, we recommend the Secretary of Agriculture and Administrator of USAID direct their staffs to work together to take steps to: improve USDA’s ability to account for U.S. 
government funds by ensuring that USAID provides USDA with accurate prepositioned commodity inventory data that USDA can independently verify; and assess WBSCM’s functionality by testing the international procurement functions that have been modified since April 2011 and documenting the results. In preparation for WBSCM’s functional upgrade, we recommend the Secretary of Agriculture and Administrator of USAID direct their staffs to work together to take steps to: develop a written agreement signed by both agencies that clearly outlines the desired outcomes of their collaboration and the roles and responsibilities of participants, such as freight forwarders. We provided a draft of this report to USDA and USAID for comment. USDA and USAID provided written comments on the draft, which are reprinted in appendixes II and III, respectively. We also received technical comments from USDA and USAID, which we incorporated throughout our report as appropriate. USDA generally agreed with our recommendations and expressed willingness to continue to initiate improvements in the efficiency and accountability of the emergency food aid procurement process. USAID agreed to test WBSCM’s current functionality and to clarify the roles and responsibilities of participants in a written agreement with USDA. Regarding our recommendation to ensure that USAID provides USDA with accurate prepositioned commodity inventory data that can be independently verified, USAID stated it is of the view that commodities move off USDA’s books and onto those of USAID when a contractor takes possession, on USAID’s behalf, of the commodities in question. However, according to USDA and its independent auditor, the CCC retains the ownership of the commodity inventory until USAID distributes it from the warehouse. 
USAID did not comment on the concerns we identified about the quality of USAID’s internal controls and inventory data, which need to be addressed regardless of which agency includes the data in its financial reporting. In addition to providing copies of this report to your offices, we will send copies to interested congressional committees, the Secretary of Agriculture, and the Administrator of USAID. We will make copies available to others on request. In addition, the report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our review focused on the procurement process for the Title II emergency food aid program managed by the U.S. Department of Agriculture (USDA) and the U.S. Agency for International Development (USAID). This report examines (1) the extent to which agencies agree to use WBSCM to manage the international emergency food aid procurement process; (2) how the agencies’ use of WBSCM and other systems affects USDA’s ability to have accurate information about emergency international food aid shipments; and (3) the extent to which the agencies are collaborating on how to use WBSCM. To provide context and background on the Title II emergency food aid procurement process, we analyzed the total food aid expenditures and tonnage data for fiscal years 2007 through 2012. The data that we obtained included USAID commodity and ocean freight funding data for fiscal years 2007 through 2012. These data were presented for context and background purposes only, and we determined they were reliable for our purposes. 
We also analyzed budget processes, financial reporting requirements, contract documents, cooperating sponsor cooperative agreements and transfer authorizations, legislation, acquisition regulations, and past GAO food aid reports to understand the emergency food aid procurement process. To examine the extent to which the two agencies use the Web Based Supply Chain Management system (WBSCM) to manage the international emergency food aid procurement process for commodities and ocean freight transportation, we interviewed USDA and USAID officials and reviewed documentation. At USDA, we met with officials of the Agricultural Marketing Service, Foreign Agricultural Service, and Farm Service Agency, including officials in the Farm Service Agency’s Commodity Operations Office in Kansas City, Missouri. At USAID, we met with officials of the Office of Food for Peace and the Transportation Office in Washington, D.C. Documents we reviewed included agency procurement schedules, organizational charts, policies and procedures, and flow charts, as well as contract documents, legislation, and acquisition regulations. We also obtained and analyzed USDA and USAID memos, briefings, and newsletters that discuss what WBSCM was designed to do and concerns surrounding the use of WBSCM. We observed how both agencies use WBSCM, and USAID officials explained or demonstrated some of the systems that they use separately from WBSCM to procure and manage emergency food aid, including USAID’s Food for Peace Management Information System. In addition to conducting interviews with agency officials and obtaining agency documentation, we conducted interviews with and obtained documentation from officials of the World Food Program and private voluntary organizations. We also interviewed and obtained documentation from six of the seven freight forwarders that manage the transportation of food aid from the U.S. load port to the foreign destination or overseas prepositioned warehouse port. 
We did not test the system ourselves because it was beyond the scope of this review to simulate multiple food aid procurements, including the variances that occur during the shipping process, and to simulate the interface for multiple users. We also did not assess the extent to which USDA uses WBSCM for its domestic and international food aid programs, or whether WBSCM performs these functions effectively, because that was beyond the scope of this engagement. To examine how the agencies’ use of WBSCM and other systems affects USDA’s ability to have accurate information, as well as USDA’s ability to efficiently recover U.S. government funds for commodities lost or damaged during ocean transit, we interviewed officials from USDA’s Commodity Operations Office and USAID’s Office of Food for Peace, Transportation Office, and Office of the Chief Financial Officer. To examine how USAID’s use of systems outside of WBSCM affects USDA’s ability to report and account for U.S. government funds, we obtained and analyzed USDA and USAID documentation and the spreadsheets used to track foreign prepositioned commodity inventory levels, and we examined how these are used to provide information for financial statements. The documentation that we obtained and analyzed included warehouse inventory reports prepared by each of the contractors that manage USAID’s foreign prepositioned warehouses, as well as a consolidated warehouse inventory spreadsheet that USAID has provided to USDA to comply with USDA’s request for inventory data to use in complying with applicable financial reporting requirements. We also drew on work conducted for a recent report to describe the systems that USAID relies upon to track emergency food aid shipments. For that report, we collected data on emergency food aid shipments from fiscal years 2007 through 2012 from the World Food Program and six freight forwarders. 
We assessed the reliability of these data by asking the World Food Program and the six freight forwarders that manage the transportation of Title II emergency food aid for cooperating sponsors how they collected the data, the quality checks that they perform, and the internal controls they have in place to ensure the accuracy of the data. We also tested some of the data for missing data, outliers, and obvious errors. In total, we assessed the records for 5,142 emergency food aid shipments during this period. Based on our assessments, we found that data for 1,357 shipments were not sufficiently reliable to determine how prepositioning and diverting food aid affects the timeliness of food aid deliveries. See appendix II in GAO-14-277 for the results of this analysis. To understand the applicable quarterly financial reporting requirements that USDA must address to account for overseas prepositioned food aid inventories, we analyzed how Congress appropriates funding for this program. We also obtained and analyzed applicable Office of Management and Budget (OMB) circulars, such as OMB Circular A-136, and other federal regulations. We also identified the internal control requirements applicable to federal agencies, which are contained in GAO’s Standards for Internal Control in the Federal Government. We compared the internal control standards identified in this report with USDA’s and USAID’s actions in the procurement process. We also reviewed internal control deficiencies identified in a January 2013 USAID regional inspector general report, as well as in audits of the Commodity Credit Corporation prepared by an independent auditor. We did not independently audit the Commodity Credit Corporation’s financial statements, but the USDA Inspector General did not find instances where the auditor failed to comply, in all material respects, with government auditing standards and OMB Bulletin 07-04, Audit Requirements for Federal Financial Statements, as amended. 
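Basic reliability tests for missing data and obvious errors, of the kind described above, can be sketched as follows. The field names and sample records are hypothetical and are not drawn from the actual shipment data or the systems the agencies use.

```python
# Hypothetical sketch of screening shipment records for missing fields,
# nonpositive tonnage, and logically impossible dates.

REQUIRED = ("shipment_id", "depart_date", "deliver_date", "tonnage")

def unreliable(rec):
    """Flag records with missing fields, nonpositive tonnage, or delivery before departure."""
    if any(rec.get(field) in (None, "") for field in REQUIRED):
        return True
    if rec["tonnage"] <= 0:
        return True
    # ISO-formatted date strings compare correctly as text
    return rec["deliver_date"] < rec["depart_date"]

records = [
    {"shipment_id": "S1", "depart_date": "2010-03-01", "deliver_date": "2010-04-05", "tonnage": 2500},
    {"shipment_id": "S2", "depart_date": "2010-05-10", "deliver_date": "2010-05-01", "tonnage": 1800},  # delivered before departing
    {"shipment_id": "S3", "depart_date": "2010-06-01", "deliver_date": None, "tonnage": 900},  # missing field
]

flagged = [r["shipment_id"] for r in records if unreliable(r)]
```

Records that fail such checks would be excluded from a timeliness analysis, which is analogous to why data for 1,357 of the shipments reviewed were set aside as not sufficiently reliable.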
To assess the extent to which the use of WBSCM and other systems affects USDA’s ability to process ocean freight claims and efficiently recover U.S. government funds, we obtained and analyzed USDA and USAID documents and emails that describe the claims documentation submission process that the two agencies agreed to implement in June 2012. We also examined the extent to which USDA and USAID are sharing claims-related information by analyzing emails and freight vouchers submitted by USAID to USDA from June 2013 through February 2014. To examine the extent to which USDA and USAID collaborate on how to use WBSCM, we obtained documentation from both agencies and developed a timeline of events. The documentation obtained included annual performance reports, letters, memos, emails, information bulletins, updates to USDA’s risk management plan, and meeting notes. We also obtained a copy of a post-implementation review conducted by an independent contractor that documented a number of findings related to WBSCM. We interviewed officials from USDA’s Agricultural Marketing Service, Foreign Agricultural Service, and Farm Service Agency, as well as officials from USAID’s Food for Peace and Transportation Offices. We also interviewed six of the seven freight forwarders that manage the transportation of food aid for USAID and cooperating sponsors. After reviewing GAO’s 2005 and 2012 reports on practices to improve interagency collaboration, we compared the practices with USDA and USAID actions from April 2011 to August 2013 and determined that five of the practices were the most relevant in describing the agencies’ efforts to resolve their concerns about WBSCM. We determined that the two agencies did follow two of these leading practices but had not followed the other three. 
Our 2005 and 2012 reports highlighted the need for agencies to document their agreement to collaborate through memorandums of understanding or other similar documents. Our reports also demonstrated the benefits of ensuring that collaborating agencies agree on, and are clear about, their roles and responsibilities, including the role of the contractors that implement the program. In addition, our reports highlighted the importance of ensuring that collaborating agencies have clearly defined goals. We conducted this performance audit from March 2013 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. USAID stated it is of the view that commodities move off USDA’s books and onto those of USAID when a USAID contractor takes possession of the commodities in question. However, according to USDA and its independent auditor, CCC retains ownership of the inventory when it is in USAID’s possession. Nevertheless, we identified concerns about the quality of USAID’s internal controls and inventory data, which need to be addressed regardless of which agency includes the data in its financial reporting. In addition to the person named above, Valérie L. Nowak (Assistant Director), Rhonda Horried, Mark Needham, José M. Peña III, Ashley Chaifetz, Fang He, Martin De Alteriis, Mark Dowling, Etana Finkler, Karen Deans, and Carol Bray made significant contributions to this report. International Food Aid: Prepositioning Speeds Delivery of Emergency Aid, but Additional Monitoring of Time Frames and Costs Is Needed. GAO-14-277. Washington, D.C.: March 5, 2014. 
World Food Program: Stronger Controls Needed in High-Risk Areas. GAO-12-790. Washington, D.C.: September 13, 2012. Farm Bill: Issues to Consider for Reauthorization. GAO-12-338SP. Washington, D.C.: April 24, 2012. USDA Systems Modernization: Management and Oversight Improvements Are Needed. GAO-11-586. Washington, D.C.: July 20, 2011. International Food Assistance: Better Nutrition and Quality Control Can Further Improve U.S. Food Aid. GAO-11-491. Washington, D.C.: May 12, 2011. International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009. International Food Assistance: Key Issues for Congressional Oversight. GAO-09-977SP. Washington, D.C.: September 30, 2009. Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: May 24, 2007. Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 13, 2007.
|
USDA and USAID spent about $9.2 billion to provide international emergency food aid during fiscal years 2007-2012. USDA developed WBSCM with USAID's input to manage domestic and international food aid procurements, spending about $187 million to develop and implement the system. GAO was asked to examine the international emergency food aid procurement process. This report examines (1) the extent to which the agencies agree to use WBSCM to manage the process, (2) how the agencies' use of WBSCM and other systems affects USDA's ability to have accurate information, and (3) the extent to which the agencies are collaborating on how to use WBSCM. GAO reviewed the procurement process and observed WBSCM in use, analyzed the inventory spreadsheets used to compile USDA's financial reports, and compared the agencies' efforts to collaborate against key elements for effective interagency collaboration. Although the U.S. Department of Agriculture (USDA) and U.S. Agency for International Development (USAID) jointly manage international emergency food aid procurement, the agencies disagree about the usefulness of the Web Based Supply Chain Management system (WBSCM) to manage the entire process. WBSCM had significant deficiencies when it was implemented in April 2011, which led USAID to discontinue using it to procure ocean freight for bulk commodities, manage prepositioned or stockpiled commodity inventory, and track food aid shipments. For example, WBSCM was slow and time consuming to use, and its process to procure ocean freight for bulk commodities was not compatible with USAID's process to negotiate contracts with ocean freight vendors. USDA currently uses WBSCM to procure food aid commodities, while USAID procures ocean freight using other systems not connected to WBSCM. Since March 2012, USDA has made changes to WBSCM, and USDA officials assert that these changes address some of the problems that led to USAID's decision to discontinue use of the system. 
Since USAID uses systems outside of WBSCM, USAID and USDA lack information on individual food aid shipments, which, in turn, hinders USDA's ability to use WBSCM to prepare reports and efficiently file claims against ocean carriers to recover U.S. government funds. GAO's Standards for Internal Control in the Federal Government state that information should be accurately recorded and communicated to those who need it and in a form that enables them to carry out their internal control and other responsibilities. USAID relies on freight forwarders to track and periodically provide information on shipments. In GAO's work for a recent report, we found that freight forwarders did not collect complete or consistent information on emergency food aid shipments. Without accurate information from its freight forwarders, USAID is limited in its ability to generate accurate information on food aid shipments. In addition, GAO found that USAID and its warehouse contractors did not always accurately record all prepositioned commodity inventory transactions. USAID provides this potentially inaccurate information to USDA officials who enter this information into WBSCM to generate quarterly financial statements. Moreover, USAID's data collection outside WBSCM makes it more difficult for USDA to file claims efficiently against ocean freight vendors and recover U.S. funds because USDA officials must manually enter USAID information. According to USDA officials, USDA filed 131 such claims in fiscal year 2012 valued at $1.2 million. USDA and USAID are not collaborating effectively to resolve their disagreement on the usefulness of WBSCM. In prior work, GAO identified key elements of effective collaboration that can enhance and sustain collaboration among federal agencies. Although USDA and USAID's collaborative efforts have incorporated some of these elements to develop WBSCM, they have not incorporated others. 
Specifically, USDA and USAID do not agree on the roles and responsibilities of key participants in the process, do not share a defined outcome for their collaboration, and do not have a written agreement stating how the agencies will collaborate. An upcoming functional upgrade of WBSCM offers an opportunity to make substantial changes that are mutually agreeable. GAO recommended the agencies work together to ensure USDA receives accurate prepositioned inventory data, improve WBSCM's functionality by testing modified functions, and develop a written agreement that clearly outlines outcomes and roles and responsibilities for using WBSCM. USAID noted its view that prepositioned commodities move off USDA's books and onto those of USAID but agreed in general with our other two recommendations. USDA agreed with our recommendations and stated that the Commodity Credit Corporation retains ownership of prepositioned commodities.
|
The Colville and Spokane Indian reservations were established in 1872 and 1877, respectively, on land that was later included in the state of Washington. The Colville Reservation, of approximately 1.4 million acres, was created on July 2, 1872, through an executive order issued by President Grant. The Spokane Reservation, of approximately 155,000 acres, was created by an agreement between agents of the federal government and certain Spokane chiefs on August 18, 1877. President Hayes’ executive order of January 18, 1881, confirmed the 1877 agreement. In 2001, the Colville and Spokane tribes had enrolled populations of 8,842 and 2,305, respectively. The Indian Claims Commission was created on August 13, 1946, to adjudicate Indian claims, including “claims based upon fair and honorable dealings that are not recognized by any existing rule of law or equity.” Under section 12 of the act that created the Commission, all claims had to be filed within 5 years. Ultimately 370 petitions, which were eventually separated into 617 dockets, were filed with the Commission. The great majority of the claims were land claims. Settlement awards were paid out of the U.S. Treasury. The Colville tribes filed a number of claims with the Indian Claims Commission within the 5-year window—on July 31, August 1, and August 8, 1951. Their fisheries claim and water power values claim became part of Indian Claims Commission Docket No. 181, which was originally filed on July 31, 1951. The original petition for Docket No. 181 included broad language seeking damages for unlawful trespass on reservation lands and for compensation or other benefits from the use of the tribes’ land and other property. The tribes’ original petition did not specifically mention the Grand Coulee Dam. In 1956, Docket No. 181 was divided into four separate claims. The tribes’ fisheries claim became part of Docket No. 181-C. In November 1976, over 25 years after the original filing of Docket No. 
181, the Indian Claims Commission allowed the Colville tribes to file an amended petition seeking just and equitable compensation for the water power values of certain riverbed and upstream lands that had been taken by the United States as part of the Grand Coulee Dam development. This amended water power value claim was designated as Docket No. 181-D, and it was settled in 1994 by Public Law 103-436. The Spokane tribe filed one claim with the Indian Claims Commission, Docket No. 331, on August 10, 1951, just days before the August 13, 1951, deadline. The claim sought additional compensation for land ceded to the United States by an agreement of March 18, 1887. Furthermore, the Spokane tribe asserted a general accounting claim. These two claims were separated into Docket No. 331 for the land claim and Docket No. 331-A for the accounting claim. Both claims were jointly settled in 1967 for $6.7 million. That is, the Spokane tribe settled all of its claims before the Indian Claims Commission almost 10 years before the Colville tribes were allowed to amend their claim to include a water power values claim. In doing so, the Spokane tribe missed its opportunity to make a legal claim with the Indian Claims Commission for its water power values as well as its fisheries. At that time, the Spokane tribe, as well as the Colville tribes, were pursuing other avenues for compensation of water power values. The Bonneville Power Administration was formed in 1937 to market electric power produced by the Bonneville Dam. Bonneville’s marketing responsibilities have expanded since then to include power from 31 federally owned hydroelectric projects, including the Grand Coulee Dam. 
Under the Pacific Northwest Electric Power Planning and Conservation Act of 1980 (Northwest Power Act), Bonneville is responsible for providing the Pacific Northwest with an adequate, efficient, economical, and reliable power supply. Bonneville currently provides about 45 percent of all electric power consumed in Idaho, Montana, Oregon, and Washington and owns about 75 percent of the region’s transmission lines. A settlement requiring Bonneville to pay the Spokane tribe would add to its costs of operation, and it would therefore likely pass these costs on to its customers in the form of higher rates for power. Bonneville is a self-financing agency, which means that it must cover its costs through the revenue generated by selling power and transmission services. Bonneville typically sets its rates for 5-year periods in order to generate enough revenue to cover the costs of operating the federal power system and to make its debt payments. Assuming that the settlement with the Spokane tribe is similar in nature to the settlement with the Colville tribe in 1994, the impact on Bonneville’s rates would be small. Under the settlement with the Colville tribe, Bonneville has made annual payments since 1996 that have ranged from about $14 million to $21 million. Currently, Bonneville estimates that it will pay about $17 million per year over the next 5 years. In its negotiations with Bonneville, the Spokane tribe has asked for about 40 percent of the Colville tribe’s settlement, which would amount to about $7 million annually from Bonneville. Bonneville uses a rule of thumb to determine rate increases: between $40 million and $50 million in additional annual costs will lead to a rate increase of 1/10th of a cent per kilowatt hour (kWh). 
Using this rule, we estimate that a settlement with Spokane that is equivalent to 40 percent of the Colville settlement would lead to an increase of less than 20 cents per month in the bill of a typical household relying solely on power from Bonneville, or a 0.5 percent increase in rates over current levels. Although the magnitude of the rate increase necessary to fund a settlement with the Spokane tribe would be small, it comes at a time when Bonneville’s customers have recently faced large rate increases. From 2000 through early 2003, Bonneville experienced a substantial deterioration in its financial condition because of rising costs and lower-than-projected revenues. As a result, Bonneville’s cash reserves of $811 million at the end of fiscal year 2000 had fallen to $188 million by the end of fiscal year 2002. To cope with its financial difficulties, Bonneville raised its power rates for 2002 by more than 40 percent over 2001 levels. On October 1, 2003, Bonneville raised its rates a further 2.2 percent. Despite its current financial difficulties, Bonneville predicts that the conditions that led to them—namely, consecutive years of low water, extreme market price volatility, and costly long-term contracts to buy power from other suppliers, which are due to expire in 2006—will abate. Therefore, because the bulk of Bonneville’s obligations in any settlement similar to the Colville settlement will occur in the future, Bonneville’s current financial difficulties should not unduly influence current discussions about how to compensate the Spokane tribe. A reasonable case can be made for having Bonneville and the U.S. Treasury allocate any costs for the Spokane tribe’s claims along the lines agreed to for the Colville tribes. Any settlement would attempt to re-institute a commitment the federal government made to the tribes in the 1930s. 
Under the Federal Water Power Act of 1920, licenses for the development of privately owned hydropower projects should include a “reasonable annual charge” for the use of Indian lands. Originally, the Grand Coulee site was licensed, and the Spokane tribe expected to receive annual payments for its lands used for the project. However, the license was cancelled when the federal government took over the project (federalized the project). Since the federal government is not subject to the Federal Water Power Act, it was not required to make annual payments to the tribes. Nevertheless, the federal government made a commitment in the 1930s to make annual payments to the Colville and Spokane tribes as if the project had remained a nonfederal project. However, the federal government did not follow through on this commitment after the project was completed and started generating revenues from electricity sales in the 1940s. In pursuing this matter, the tribes weathered various administrations and changes in the federal government’s Indian policy. In the 1950s and 1960s, the federal government actively sought to terminate its relationship with a number of tribes, including the Spokane tribe. In the early 1970s, when it became clear that the federal government was not going to make these payments, the Colville tribes were able to amend their claim with the Indian Claims Commission to pursue this matter. After agreeing to the overall legitimacy of the Colville tribes’ claims, the Congress ultimately approved a settlement that primarily required Bonneville to provide annual payments for water power values. This settlement was a compromise to split the costs between Bonneville and the U.S. Treasury. Bonneville is primarily paying the recurring annual payments, and the U.S. Treasury’s Judgment Fund provided the one-time lump sum payment in settlement of the past annual payments—$53 million. 
The Spokane tribe, however, had already settled its claim years earlier and therefore could not file an amended claim with the commission. Nevertheless, since Bonneville collects the annual revenues for the electricity generated by the dam, it could be argued that Bonneville should make annual payments to the Spokane tribe out of those revenues, as it does for the Colville tribes; the U.S. Treasury would then pay a lump sum to settle any claims for past years. The current House settlement proposal, H.R. 1753, and previous House and Senate settlement proposals introduced in the 106th and 107th Congresses directed the settlement costs to be split between Bonneville and the U.S. Treasury. It could also be argued that the U.S. Treasury should pay the Spokane tribe’s claim, as it does for most claim settlements against the federal government. S. 1438 provides for the settlement of the tribe’s claim from the U.S. Treasury. However, we do not believe a compelling case can be made to have the nation’s taxpayers fully absorb an additional cost of doing business associated with Bonneville’s production of power in one region of the country. In conclusion, since the Spokane tribe missed its opportunity to file claims with the Indian Claims Commission for its fisheries and water power values, it is unlikely that the tribe’s claims and any associated settlement or final resolution will move forward in any meaningful way without some form of congressional intervention. If the Congress is satisfied with the merits of the tribe’s claims, settlement legislation, such as the current House and Senate bills, could be used as a method to resolve the tribe’s claims. A reasonable case can be made for adopting the model established in the Colville settlement to allocate the settlement costs between Bonneville and the U.S. Treasury. Another option would be to enact legislation providing for some form of dispute resolution, such as mediation or binding arbitration. 
If the Congress has any doubts about the merits of the claim, it could enact legislation to allow the tribe to file its claim in the U.S. Court of Federal Claims. The merits of the claims could then be decided in court. Such an action was discussed in 1994 when the Colville settlement was reached. For further information, please contact Robert A. Robinson at (202) 512-3841. Individuals making key contributions to this testimony included Jill Berman, Brad Dobbins, Samantha Gross, Jason Holliday, Jeffery Malcolm, Frank Rusco, Rebecca Sandulli, and Carol Herrnstadt Shulman. Because a settlement has not yet been negotiated, we used the terms of the Colville settlement to estimate the potential effect of the Spokane settlement on electricity rates in the Pacific Northwest. Assumptions used in this calculation are designed to provide a conservative (high-end) estimate of the impact of the settlement on Bonneville’s rate payers. For planning purposes, Bonneville estimates that payments to the Colville tribes total $17 million annually. The Spokane tribe is requesting as much as 40 percent of the Colville settlement, or approximately $7 million annually. To estimate the impact of increasing costs on power rates, Bonneville uses a rule of thumb that $40 million to $50 million in increased costs over a year necessitate a rate increase of approximately $0.001 per kilowatt-hour (kWh). Using this rule of thumb, a $7 million per year cost increase would raise Bonneville’s wholesale power rates by approximately $0.00016 per kWh. According to the Oregon Department of Energy, the average household in Oregon uses approximately 1,000 kWh of electricity per month. An average household in Washington uses 1,170 kWh of electricity per month, according to the Washington Utilities and Transportation Commission. 
Using the approximate rate increase calculated above, the electricity bills for average households in Oregon and Washington would increase approximately 16 cents and 19 cents, respectively. These calculations assume that the household receives all its electricity from Bonneville and that its retail utility passes through the wholesale rate increase. The impact on the region as a whole would be smaller because Bonneville provides only about 45 percent of the region’s power. Our calculations also assume that Bonneville would not be permitted to deduct any portion of its payment to the Spokane tribe from its debt payment to the U.S. Treasury. Public Law 103-436 enables Bonneville to deduct a portion of its annual payment to the Colville tribes as an interest credit on its Treasury debt payments. If a similar provision were included for any payments for the Spokane tribe, the impact on ratepayers would be reduced. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
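The rate-impact arithmetic described above can be sketched in a few lines. This is a rough check only; the use of the $45 million midpoint of Bonneville's $40 million to $50 million rule of thumb is an assumption (the testimony does not state which value was used), as is the full pass-through of the wholesale rate increase to retail bills.

```python
# Rough check of the rate-impact estimate described in the testimony.
# Assumption: the $45M midpoint of Bonneville's $40M-$50M rule of thumb.
settlement_cost = 7_000_000  # ~40 percent of the ~$17M annual Colville payment

# Rule of thumb: ~$45M in added annual costs -> $0.001/kWh rate increase.
rate_increase = settlement_cost * 0.001 / 45_000_000  # ~$0.00016 per kWh

# Monthly bill impact for average household usage (1,000 kWh in Oregon,
# 1,170 kWh in Washington), assuming full pass-through by retail utilities.
oregon_increase = 1_000 * rate_increase       # roughly 16 cents per month
washington_increase = 1_170 * rate_increase   # roughly 18-19 cents per month
```

Both figures come in under the "less than 20 cents per month" ceiling stated in the testimony; choosing the $40 million end of the rule of thumb instead would raise them only slightly.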
|
The Grand Coulee Dam was constructed on the Columbia River in northeastern Washington State from 1933 to 1942. The reservoir behind the dam covers land on the Colville Reservation along the Columbia River and land on the adjacent Spokane Reservation along both the Columbia and Spokane rivers. Under a 1940 act, the federal government paid $63,000 and $4,700 to the Colville and Spokane tribes, respectively, for the land used for the dam and reservoir. Subsequently, the Colville tribes pursued additional claims for their lost fisheries and for "water power values" and in 1994 were awarded a lump sum payment of $53 million and, beginning in 1996, annual payments that have ranged from $14 million to $21 million. The Spokane tribe is currently pursuing similar claims. S. 1438, introduced in July 2003, is a proposed legislative settlement for the Spokane tribe's claims. While settlement proposals introduced in the 106th and 107th Congresses directed the settlement costs to be split between Bonneville and the Treasury, S. 1438 provides that the settlement be paid entirely from the Treasury. This statement for the record addresses the (1) impact of a settlement on Bonneville if the costs were split between Bonneville and the Treasury and (2) possible allocation of these costs between Bonneville and the Treasury. A settlement with the Spokane tribe along the lines provided to the Colville tribes would likely necessitate a small increase in Bonneville's rates for power. While the rate increase would amount to less than 20 cents per month per household, it comes at a time when (1) Bonneville's customers have already absorbed rate increases of over 40 percent since 2001, including a further increase announced on October 1, 2003, and (2) the economy of the northwestern region, Bonneville's primary service area, is experiencing difficulties. 
However, the bulk of Bonneville's obligations in any settlement similar to the Colville settlement will occur in the future, when the conditions causing Bonneville's current financial difficulties--such as costly long-term contracts to purchase power from other suppliers--will probably have abated. Therefore, Bonneville's current financial difficulties should not unduly influence current discussions about how to compensate the Spokane tribe. A reasonable case can be made to settle the Spokane tribe's case along the lines of the Colville settlement--a one-time payment from the U.S. Treasury for past lost payments for water power values and annual payments primarily from Bonneville. Bonneville continues to earn revenues from the Spokane Reservation lands used to generate hydropower. However, unlike the Colville tribes, the Spokane tribe does not benefit from these revenues because it missed its filing opportunity before the Indian Claims Commission. At that time, it was pursuing other avenues to win payments for the value of its land for hydropower. These efforts would ultimately fail. Without congressional action, it seems unlikely that a settlement for the Spokane tribe will occur.
|
A clinical lab is generally defined as a facility that examines specimens derived from humans for the purpose of disease diagnosis, prevention, and treatment, or health assessment of individuals. Labs conduct a wide range of tests that are categorized as waived tests or as moderate- or high-complexity tests. Approximately 81 percent of all labs (about 157,000) are not subject to routine biennial surveys because they perform (1) “waived” tests, which are generally simple tests that have an insignificant risk of erroneous results, such as those approved for home use, or (2) tests performed during the course of a patient visit with a microscope on specimens that are not easily transportable. CLIA establishes more stringent requirements for the 19 percent (about 36,000) of labs performing moderate- or high-complexity testing, including the requirement for a survey and participation in routine proficiency testing. Surveys examine lab compliance with CLIA program requirements in several areas, including personnel qualifications, proficiency testing, quality control, quality assurance, and recordkeeping. In general, labs have a choice of who conducts their surveys—state survey agencies using CLIA inspection requirements or other survey organizations that use requirements CMS has determined to be at least equivalent to CLIA’s. CMS contracts with state survey agencies in most states to inspect labs against CLIA requirements. CLIA established an approval process to allow states and private accrediting organizations to use their own requirements to survey labs. As noted earlier, New York and Washington operate CLIA-exempt programs and CMS has approved six private, nonprofit accrediting organizations to survey labs—the American Association of Blood Banks (AABB), the American Osteopathic Association (AOA), the American Society of Histocompatibility and Immunogenetics (ASHI), CAP, COLA, and JCAHO. 
The requirements of both state CLIA-exempt programs and accrediting organizations must be reviewed by CMS at least every 6 years to ensure CLIA equivalency, but may be more stringent than those of CLIA. Figure 1 lists the three types of survey organizations and indicates whether they survey labs under CLIA requirements, or use their own CLIA-equivalent requirements. It also shows the percentage of labs performing moderate- to high-complexity testing surveyed by each type of organization. In general, state survey agencies, COLA, and Washington’s CLIA-exempt program survey physician office labs, while New York’s CLIA-exempt program, CAP, and JCAHO survey hospital labs. Survey organizations (1) conduct surveys and complaint investigations and (2) monitor proficiency test results submitted by surveyed labs three times a year. Surveys are typically conducted by former or current lab workers, who assess lab compliance with CLIA or CLIA-equivalent requirements. Generally, surveyors verify that lab personnel are appropriately qualified to conduct testing, evaluate proficiency test records, check equipment and calibration to ensure that appropriate quality control measures are in place, and determine whether the lab has a quality assurance plan and uses it to, among other things, appropriately identify and resolve problems affecting testing quality. Surveys also include an educational component to assist labs in understanding how to comply with CLIA requirements. Lab survey requirements are classified as either “standard-” or “condition-” level. Deficiencies are also characterized as standard- or condition-level based on the requirement in which the deficiency occurs. Standard-level deficiencies denote problems that generally are not serious, while condition-level deficiencies are cited when the problems are serious or systemic in nature. 
When deficiencies are found during surveys or complaint investigations, labs are required to submit a plan of correction, detailing how and when they will address the deficiencies. Additionally, CMS can impose principal or alternative sanctions, or both. Principal sanctions include revocation of a CLIA certificate, cancellation of the right to receive Medicare payments, or limits on testing. Alternative sanctions, authorized by Congress to give CMS more flexibility to achieve lab compliance, are less severe and include civil money penalties or on-site monitoring. For condition-level deficiencies that do not involve an imminent and serious threat to patient health and a significant hazard to public health, labs have an opportunity to correct the deficiencies, which we refer to as a grace period, before the sanctions are imposed. If a lab is unable to correct a deficiency during this grace period, CMS determines whether to impose sanctions. CMS, including its 10 regional offices, oversees state and accrediting organization survey activities. CMS reviews and approves initial and subsequent applications from exempt-state programs and accrediting organizations to ensure CLIA equivalency. Validation reviews are one of CMS’s primary oversight tools. Federal surveyors in CMS regional offices are responsible for conducting validation reviews of state survey agency and exempt-state program inspections, but state survey agency staff conduct the validation reviews of accrediting organization inspections. An objective of these reviews is to determine if all condition-level deficiencies were identified. These reviews are conducted within 60 days of a state’s, or 90 days of an accrediting organization’s, survey of a lab. The extent of serious quality problems at labs is unclear because CMS has incomplete data on condition-level deficiencies identified by state survey agencies prior to 2004. 
Survey results for 2004 show substantial variability across states, which suggests that state survey agencies do not conduct surveys in a consistent manner. We also found that the lack of a straightforward linkage between CLIA requirements and the CLIA- equivalent requirements of some survey organizations makes it virtually impossible to assess lab quality in a standardized manner. CMS does not effectively use available data, such as the results of surveys and proficiency testing, to monitor and assess lab quality. Although CMS noted that proficiency testing trend data show a decrease in failures for labs as a whole, the data suggest that quality may not have improved at hospital labs for the period 1999 through 2003. CMS’s OSCAR database contains limited data on the quality of labs inspected by state survey agencies and, as a result, it is not possible to analyze changes in the quality of lab testing over time. In January 2004, CMS implemented revised CLIA survey requirements and modified the existing OSCAR data—state survey agency findings—to reflect the changes. The revisions affected approximately two-thirds of the CLIA condition-level requirements. As a result of the data modifications, the findings for surveys conducted prior to 2004 no longer reflect all key condition-level requirements in effect at the time of those surveys. Based on the available 2004 OSCAR data (which represent about one half of all labs surveyed by state survey agencies), we found that 6.3 percent of labs had condition-level deficiencies. However, variability in the OSCAR data suggests that labs are not surveyed in a consistent manner. In 2004, the percentage of labs that were reported to have condition-level deficiencies varied considerably by state, ranging from none in 6 states to about 25 percent of labs in South Carolina. 
Based on interviews with CMS and 10 state survey agencies, it appears that at least some of this variability is due to differences in states’ approaches to conducting their surveys as opposed to true differences in lab quality. For example, CMS told us that, because there is not a prescriptive checklist to guide the survey process, the reliance on state surveyor judgment results in variations in the citing of deficiencies. In fact, officials in several states said that there are circumstances under which condition-level deficiencies would not be cited, such as if the lab staff were new or if the lab had a good history of compliance. As a result, available data likely understate the extent of serious quality problems at labs. Differences in the inspection requirements used by survey organizations make it virtually impossible to measure lab quality in a standardized manner. Because exempt-state programs and accrediting organizations do not classify inspection requirements and related deficiencies with the same criteria used by state survey agencies—as either standard- or condition-level—they cannot easily identify the proportion of surveyed labs with condition-level deficiencies. We asked exempt-state programs and accrediting organizations what percentage of their requirements, and any deficiencies cited for failure to meet those requirements, indicated serious problems that were equivalent to CLIA condition-level deficiencies. CAP and COLA crosswalked their recent survey findings to CLIA condition-level requirements. Although their analysis suggested that from about 56 to 68 percent of labs surveyed during 2004 had a deficiency in at least one condition-level requirement, they acknowledged that these proportions overstated the subset of labs with serious problems. 
JCAHO did not crosswalk its inspection requirements to those of CLIA because staff would have had to manually review each survey report to determine which deficiencies were equivalent to deficiencies in CLIA condition-level requirements. Despite the difficulty of identifying CLIA-equivalent condition-level deficiencies, two of the three accrediting organizations we reviewed have systems to identify labs they survey that have serious quality problems. COLA estimated that about 9 percent of labs it surveyed in 2004 were subject to closer scrutiny because of the seriousness of the problems identified. According to JCAHO, about 5 percent of the labs it surveyed in 2004 were not in compliance with a significant number of requirements. The third accrediting organization, CAP, has criteria for identifying labs that warrant greater scrutiny, but CAP officials told us that identifying such labs had to be accomplished on a case-by-case basis, rather than through a database inquiry. CMS does not effectively use available data, such as survey results and proficiency testing data, to monitor and assess lab quality. Although CMS tracks the most frequently cited deficiencies at labs in an effort to improve quality, it does not routinely track the proportion of labs, by state, in which state survey agencies identify condition-level deficiencies—those that denote serious or systemic problems. As noted earlier, variability in survey findings suggests inconsistencies in how surveys are conducted. CMS also does not require exempt-state programs and accrediting organizations to routinely submit data on serious deficiencies identified at the labs they inspect, unless the deficiencies pose immediate jeopardy to the public or an individual’s health. We also found that CMS does not effectively use proficiency testing data to assess clinical lab quality. 
Proficiency testing is an important indicator of lab quality because it is an objective assessment of a lab’s ability to produce accurate test results and is conducted more frequently than surveys—three times a year versus once every 2 years. In the absence of comparable survey data, proficiency testing results provide a uniform way to assess the quality of lab testing across survey organizations. Although CMS’s analysis of proficiency testing data showed improvements over time, our analysis of proficiency testing data for 1999 through 2003 suggests that there has been an increase in proficiency testing failures for labs inspected by CAP and JCAHO, which generally inspect hospital labs, and a decrease in such failures for labs surveyed by state survey agencies and COLA, which tend to inspect physician office labs. Importantly, CMS’s decision to require proficiency testing for almost all laboratory tests only three times a year is inconsistent with the statutory requirement. CLIA requires that proficiency testing be conducted “on a quarterly basis, except where the Secretary determines for technical and scientific reasons that a particular examination or procedure may be tested less frequently (but not less often than twice per year).” In CMS’s 1992 rule implementing CLIA, the agency provided a rationale for reducing the frequency of proficiency testing, but did not provide a technical and scientific basis for reducing the frequency for particular procedures or tests. CMS told us that officials from CMS and the Centers for Disease Control and Prevention had together determined that the reduced frequency was based on technical and scientific grounds and supplied a brief, undated narrative which it attributed to the Centers for Disease Control and Prevention. 
However, the narrative focused on the relative costs and benefits of proficiency testing at various intervals and did not include an analysis of the technical and scientific considerations with regard to particular tests that presented a basis for reducing the frequency. Oversight by CMS and survey organizations is not adequate to ensure that labs meet CLIA requirements. For example, the goal of educating lab workers during surveys takes precedence over the identification and reporting of deficiencies, while the use of volunteer rather than staff surveyors by one accrediting organization raises questions about appropriate levels of training and the appearance of a conflict of interest. The significant increase in complaints since CAP took steps to help ensure that lab workers know how to file a complaint suggests that some quality problems at labs inspected by some survey organizations may not be reported. In addition, sanctions are not being used effectively as an enforcement tool to promote labs’ compliance with CLIA requirements, as evidenced by the relatively few labs with repeat condition-level deficiencies on consecutive surveys from 1998 through 2004 that had sanctions imposed. Furthermore, CMS is not meeting its responsibility to determine that accrediting organization and exempt-state program requirements and processes continue to be at least equivalent to CLIA’s. Finally, ongoing CMS validation reviews do not provide an independent assessment of the extent to which surveys identify all condition-level deficiencies—primarily due to their timing. The goal of educating lab workers sometimes takes precedence over, or precludes, the identification and reporting of deficiencies that affect the quality of lab testing. For example, surveyors from one state survey agency told us they do not cite condition-level deficiencies when lab workers are new but prefer to educate the new staff. 
As a result, data on the quality of lab testing and trends in quality over time may be misleading. CMS also appears to be inappropriately stressing education over regulation. For instance, in its 2005 implementation of proficiency testing for lab technicians who interpret Pap smears, a test for cervical cancer, CMS instructed state surveyors to refrain from citing deficiencies at labs whose staff fail the tests in 2005 or 2006. According to CMS, this educational focus allows labs and their staff to become familiar with the proficiency testing program; however, it is important to note that there was about a 13-year time lag between the 1992 regulations that implemented CLIA and the 2005 implementation of Pap smear proficiency testing. In addition, CMS noted that it was concerned about some of the high initial Pap smear proficiency testing failure rates. An inappropriate balance between the educational and regulatory roles is also evident in some accrediting organization practices. For COLA, for instance, the process of educating labs begins even prior to a survey, when labs are encouraged to complete a self-assessment to identify COLA requirements with which they are not in compliance. A CAP surveyor we interviewed, who has over 30 years of lab experience, estimated that the majority of pathologists—individuals who generally serve as CAP survey team leaders—view surveys as educational, rather than as assessments of compliance with lab requirements. The use of volunteer inspectors by CAP raises concerns about appropriate levels of training and the appearance of a conflict of interest. Although state survey agencies, exempt-state programs, COLA, and JCAHO employ dedicated staff surveyors, CAP relies primarily on volunteer teams consisting of lab workers from other CAP-inspected labs to conduct surveys. 
In contrast to the mandatory training and continuing education programs in place for the staff surveyors of other survey organizations, training for CAP’s volunteer surveyors is currently optional. According to data provided by CAP, two-thirds of volunteer surveyors who had recently participated in a survey had no formal training in the 3 to 5 years preceding the survey. While full-time surveyors employed by other survey organizations conduct from 30 to about 200 surveys per year, CAP volunteer surveyors have much less experience conducting surveys because they survey only about one lab each year. CAP officials told us they plan to establish a mandatory training program for survey team leaders beginning in mid-2006. However, the required training will take only 1 or 2 days. In contrast, state survey agency inspectors must complete 5 days of basic training, while COLA staff inspectors participate in a 5-week orientation program and 20 hours of continuing education annually. CAP’s method for staffing survey teams also raises concerns about the appearance of a conflict of interest. Typically, inspection team leaders are pathologists who direct other labs in the community, and the inspection team comprises several employees from the team leader’s lab. In the event of differing opinions about survey findings, team members who are subordinates to the team leader may feel that they have no recourse other than to follow the team leader’s instructions—such as downgrading the record of an inspection finding to a less serious category. Recognizing that team members’ objectivity may be compromised in this situation, CAP’s revised conflict of interest policy instructs all parties to be cautious to retain objectivity in fact finding throughout the inspection process. Some lab workers may not be filing complaints about quality problems at their labs because of anonymity concerns or because they may not be familiar with filing procedures. 
Based on OSCAR data and data obtained from exempt-state programs and accrediting organizations for 2002 through 2004, few complaints were received about lab testing relative to the number of labs—significantly less than one complaint per lab per year. We found that lab workers may not know how to file a complaint. CAP experienced a significant increase in the number of complaints it received after October 2004, when it began requiring CAP-inspected labs to display posters on how to file complaints. Specifically, from October through December 2004, CAP received an average of 22 complaints per month, compared to an average of 11 complaints per month in the 9 months preceding the poster requirement. Because of the difficulty of protecting the anonymity of lab workers who file complaints, whistle-blower protections for such individuals are particularly important. Two of the three accrediting organizations we interviewed—CAP and JCAHO—have whistle-blower protections. While officials from New York’s and Washington’s exempt-state programs told us that whistle-blower laws in their states provide some protection for lab workers who file complaints, officials in most of the other 10 states we interviewed told us that they did not have any whistle-blower protections or were unable to identify specific protections that applied to lab workers in their state. Although there are no federal whistle-blower protections specifically for workers in labs covered by CLIA, legislation was introduced in 2005 to provide such protections. Few labs were sanctioned by CMS from 1998 through 2004—even those with the same condition-level deficiencies on consecutive surveys—because many proposed sanctions are never imposed. Our analysis of CMS enforcement data from 1998 through 2004 found that while over 9,000 labs had sanctions proposed during these years, only 501 labs were sanctioned. This equates to less than 3 percent of the approximately 19,700 labs inspected by state survey agencies. 
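The "less than 3 percent" figure follows directly from the counts cited above. A minimal sketch of the arithmetic (the counts are the report's; the helper function and variable names are our own):

```python
# Reproduces the sanction-rate figures cited above. The counts (501 labs
# sanctioned, over 9,000 with proposed sanctions, ~19,700 inspected) are
# from GAO's analysis of CMS enforcement data for 1998 through 2004.

def pct(part, whole):
    """Return part as a percentage of whole, rounded to one decimal place."""
    return round(100 * part / whole, 1)

labs_inspected = 19_700   # approx. labs inspected by state survey agencies
labs_proposed = 9_000     # labs with sanctions proposed, 1998-2004
labs_sanctioned = 501     # labs with sanctions actually imposed

print(pct(labs_sanctioned, labs_inspected))  # 2.5 -> "less than 3 percent"
print(pct(labs_sanctioned, labs_proposed))   # share of proposed sanctions imposed
```

The same helper shows that only about 1 in 18 proposed sanctions was ultimately imposed.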
Before sanctions go into effect, labs are given a grace period to correct condition-level deficiencies, unless the deficiencies involve an imminent and serious threat to patient health and a significant hazard to public health. Most labs correct the deficiencies within the grace period. CMS officials told us that it was appropriate to give labs an opportunity to correct such deficiencies within a prescribed time frame and thus avoid sanctions. However, the number of labs with the same repeat condition-level deficiencies from one survey to the next also raises questions about the overall effectiveness of the CLIA enforcement process. From 1998 through 2004, 274 labs surveyed by state survey agencies had the same condition-level deficiency cited on consecutive surveys, and 24 of these labs had the same condition-level deficiency cited on more than two surveys. This analysis may understate the percentage of labs with repeat condition-level deficiencies because OSCAR data prior to 2004 no longer reflect about two-thirds of the condition-level requirements and associated deficiencies in effect at the time of those surveys. We found that only 30 of the 274 labs with repeat condition-level deficiencies had sanctions imposed—either principal, alternative, or both. With respect to accredited labs, from 1998 through 2004, less than 1 percent of accredited labs (81) lost their accreditation; few of these labs were subsequently sanctioned by CMS and many still participate in the CLIA program. Moreover, CMS did not sanction 3 labs that COLA concluded had cheated on proficiency testing by referring the samples to another lab to be tested. By statute, the intentional referral of proficiency testing samples to another lab is a serious deficiency that should result in automatic revocation of a lab’s CLIA certificate for at least 1 year. 
Based on our interviews, we found that the 3 labs were allowed to continue testing because they had initiated corrective actions; in effect, these labs were given an opportunity to correct a deficiency that appears to have required revocation of their CLIA certificates for at least 1 year. We found that CMS has been late in determining that exempt states’ and accrediting organizations’ inspection requirements and processes are at least equivalent to CLIA’s. Because CMS has not completed its equivalency reviews within required time frames, accrediting organizations and exempt-state programs have continued to operate without proper approval. Equivalency reviews for CAP, COLA, JCAHO, and Washington due to be completed between November 1, 1997, and April 30, 2001, were an average of about 40 months late. In August 1995, CMS determined that New York’s next equivalency review should be completed by June 30, 2001; as of December 2005, that review was over 4 years past due. Similarly, COLA’s equivalency review was about 3 years past due. Furthermore, although federal regulations require CMS to review equivalency when an accrediting organization or exempt-state program adopts new requirements, CMS has not reviewed changes to inspection requirements before these entities put them into use. As a result, such survey organizations may introduce changes that are inconsistent with CLIA requirements. For example, JCAHO made a significant change to its inspection requirements in January 2004; CMS did not begin an in-depth review of JCAHO’s revised requirements until early 2005—over a year after JCAHO implemented them. According to CMS, its review has identified several critical areas where JCAHO standards are less stringent than those of CLIA. JCAHO acknowledged the need to make some adjustments to its revised requirements. CMS officials attributed delays in making equivalency determinations and reviewing interim changes to having too few staff. 
The CLIA program, located in CMS’s Center for Medicaid and State Operations (CMSO), currently has approximately 21 full-time-equivalent positions, compared to a peak of 29 such positions several years ago. As required by statute, the CLIA program is funded by lab fees, and since its inception the program’s fees have exceeded expenses. As of September 30, 2005, the CLIA program had a carryover balance of about $70 million—far more than required to hire an additional six to seven staff members. However, CMS officials told us that because the CLIA program staff are part of CMSO, they are subject to the personnel limits established for CMSO, regardless of whether the program has sufficient funds to hire more staff. CMS validation reviews that are intended to evaluate lab surveys conducted by both states and accrediting organizations do not provide CMS with an independent assessment of the extent to which surveys identify all serious—that is, condition-level or condition-level-equivalent—deficiencies. CMS requires its regional offices to conduct validation reviews of 1 percent of labs inspected by state survey agencies in a year. However, CMS does not specifically require that validations occur in each state. As a result, from 1999 through 2003, there were 11 states in which no validation reviews were conducted in multiple years. Without validating at least some surveys in each state, CMS is unable to determine whether the states are appropriately identifying deficiencies. Many validation reviews occur at the same time a survey organization conducts its inspection, and, in our view, the collaboration between the two teams during these simultaneous surveys prevents an independent evaluation. Seventy-five percent of validations of state lab surveys were conducted simultaneously from fiscal years 1999 through 2003. 
According to CMS officials, the large proportion of simultaneous validation reviews provides an opportunity for federal surveyors to share information with state surveyors, monitor their conformance with CLIA inspection requirements, and identify training and technical assistance needs. However, we found that such reviews do not provide an accurate assessment of state surveyors’ ability to identify condition-level deficiencies. Of the 13 validation reviews that identified missed condition-level deficiencies, only 1 was a simultaneous review. Regarding validation reviews of accrediting organizations’ surveys of labs, CMS officials were unable to tell us how many of the roughly 275 validation reviews conducted each year from fiscal year 1999 through fiscal year 2003 were simultaneous. However, JCAHO estimated that 33 percent of its validation reviews were conducted simultaneously. CMS officials told us that the agency’s intent in instituting simultaneous reviews was for state and accrediting organization surveyors to share best practices, to promote understanding of each other’s programs, and to foster accrediting organization improvement. In contrast, most of the state survey agency officials we interviewed told us that simultaneous validation reviews do not provide a realistic evaluation of the adequacy of accrediting organizations’ inspection processes. Clinical labs play a pivotal role in the nation’s health care system by diagnosing many diseases, including potentially life-threatening diseases, so that individuals receive appropriate medical care. Given this important role, lab tests must be accurate and reliable. Our work demonstrated that the oversight of clinical labs needs to be strengthened in several areas. Without standardized survey findings across all survey organizations, CMS cannot tell whether the quality of lab testing has improved or worsened over time or whether deficiencies are being appropriately identified. 
Using data to analyze activities across survey organizations can be a powerful tool in improving CMS oversight of the CLIA program, yet CMS has not taken the lead in ensuring the availability and use of data from survey organizations to help it monitor their performance. Furthermore, the agency is not requiring that labs participate in proficiency testing on a quarterly basis, as required by CLIA. More broadly, CMS and survey organization oversight of the lab survey process is not adequate to enforce CLIA requirements. Educating labs to ensure high-quality testing should complement, but not replace, the enforcement of CLIA inspection requirements. Labs with the same serious deficiencies on consecutive surveys often escape sanctions, even though Congress authorized alternative sanctions to give CMS more flexibility to achieve lab compliance. Without the threat of real consequences, labs may not be sufficiently motivated to comply with CLIA inspection requirements. By allowing validation reviews to occur simultaneously with surveys and permitting some states to go without validation reviews over a period of several years, CMS is not making full use of this oversight tool. Moreover, independent validation reviews of accrediting organization surveys are critical because CMS has not conducted equivalency reviews within the time frames it established. The recommendations we have made would help CMS to consistently identify and address lab quality problems. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the Subcommittee may have. For further information regarding this statement, please contact Leslie G. Aronovitz at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Walter Ochinko, Assistant Director; Jenny Grover; Kevin Milne; and Michelle Rosenberg contributed to this statement. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Today's hearing focuses on oversight of clinical labs. The Clinical Laboratory Improvement Amendments of 1988 (CLIA) strengthened quality requirements for labs that perform tests to diagnose or treat disease. About 36,000 labs that perform certain complex tests must be surveyed biennially by a state survey agency, a state CLIA-exempt program, or a private accrediting organization. CMS oversees implementation of CLIA requirements, which includes determining the CLIA equivalency of the inspection requirements used by exempt states and accrediting organizations. GAO was asked to discuss (1) the quality of lab testing and (2) the adequacy of CLIA oversight. To examine these issues, GAO analyzed data on lab performance and reviewed the procedures used by CMS and survey organizations to implement CLIA and oversee lab performance. This testimony is based on the GAO report, Clinical Lab Quality: CMS and Survey Organization Oversight Should Be Strengthened, GAO-06-416 (June 16, 2006). In summary, insufficient data exist to identify the extent of serious quality problems at labs. When CMS implemented revised CLIA survey requirements in 2004, it modified historical state survey agency findings and, as a result, data prior to 2004 no longer reflect key survey requirements in effect at the time of those surveys. The limited data available suggest that state survey agency inspections do not identify all serious deficiencies. In addition, the lack of a straightforward method to link similar requirements across survey organizations makes it virtually impossible to assess lab quality in a standardized manner. Furthermore, CMS does not effectively use available data, such as the proportion of labs with serious deficiencies or proficiency testing results, to monitor lab quality. Proficiency testing is an objective measurement of a lab's ability to consistently produce accurate test results. 
GAO's analysis of proficiency testing data suggests that lab quality may not have improved at hospital labs in recent years. Oversight of clinical lab quality is not adequate to ensure that labs are meeting CLIA requirements. Weaknesses in five areas mask real and potential quality problems at labs. First, the balance struck between the CLIA program's educational and regulatory goals is sometimes inappropriately skewed toward education, which may result in understatement of survey findings. For example, even though the initial test failure rates were high, CMS instructed state survey agencies not to cite deficiencies during the first two years of required Pap smear proficiency testing to allow labs and their staff to become familiar with the program. Second, the manner in which one accrediting organization structures its survey teams raised concerns about appropriate levels of training and the appearance of a conflict of interest that could undermine the integrity of the survey process. Third, concerns about anonymity and lab workers' lack of familiarity with how to file a complaint suggests that some quality problems are not being reported. Fourth, based on the large number of labs with proposed sanctions from 1998 through 2004 that were never imposed--even for labs with the same serious deficiencies on consecutive surveys--it is unclear how effective CMS's enforcement process is at motivating labs to consistently comply with CLIA requirements. Finally, CMS is not meeting its requirement to determine in a timely manner the continued equivalency of accrediting organization and exempt-state program inspection requirements and processes, nor has the agency reviewed changes to accrediting organization and exempt-state program inspection requirements before implementation.
BSE and vCJD belong to a family of diseases known as transmissible spongiform encephalopathies (TSE). Other TSEs include scrapie in sheep and goats, chronic wasting disease in deer and elk, feline spongiform encephalopathy in domestic cats, and mink encephalopathy. Currently, no therapies or vaccines exist to treat TSEs, and a definitive diagnosis can only be made from a post-mortem examination of the brain. The infective agent that gives rise to TSEs is generally thought to be a malformed type of protein, called a prion, which causes normal molecules of the same type of protein in the brain to become malformed and eventually results in death. Prions are neither viruses nor bacteria and contain no genetic material—no deoxyribonucleic acid (DNA). Prions cannot be readily destroyed by conventional heat, irradiation, chemical disinfection, or sterilization procedures. TSE prions have been found to accumulate in central nervous system tissue—specifically the brain, spinal cord, and eye—and have been found in other body tissues, such as the tonsils and small intestines, of animals and humans. For BSE, the precise amount of infective material needed to cause disease is unknown, but research suggests that it is very small. According to scientific experts in the European Commission, in careful feeding experiments, less than 1 gram of infected brain tissue induced disease in all the recipient cattle. The original source of BSE is not known with certainty. However, based on available evidence, experts generally agree that the practice of recycling the remains of diseased animals, specifically scrapie-infected sheep, into feed for livestock, including cattle, was responsible for the emergence and spread of BSE in the United Kingdom. BSE was first identified in the United Kingdom in 1986, and in 1988 that government banned the practice of feeding ruminant-derived protein to ruminants to thwart its spread. 
The number of new cases of BSE has declined from a high of 37,316 in 1992 to 764 new cases in 2004. BSE has been found in about 189,000 animals worldwide, most of which (about 184,000) were discovered in the United Kingdom. The remaining cases were discovered in 26 countries, including Canada and the United States. Three nations—the United States, Oman, and the Falkland Islands—have only detected the disease in imported animals. The reported cases, by region and country, are as follows:

Europe: United Kingdom—184,045; rest of Europe—5,107.
North America: Canada—4; United States—1.
Asia-Pacific: Japan—14.
South America: Falkland Islands—1.

In 1996, the United Kingdom reported the first case of the human disease, vCJD. Scientists believe vCJD is linked to exposure to the BSE prion, most likely through consuming beef and beef products infected with BSE. While scientists and regulatory officials believe that millions of people in the United Kingdom may have ingested BSE-infected tissue, many also believe vCJD is difficult to contract. As of December 1, 2003, 153 cases of vCJD had been reported worldwide, with 143 of these cases in the United Kingdom. The Department of Health and Human Services’ Centers for Disease Control and Prevention, which is responsible for surveillance of vCJD, reported that almost all of the vCJD victims had multiple-year exposures in the United Kingdom during the height of the outbreak of BSE-infected cattle—between 1980 and 1996. Most vCJD victims have been young—the average age at death was 28—and half died within 13 months from the time they first showed symptoms. The first indigenous case of BSE in North America was discovered in Canada in May 2003. (Canada’s first infected cow, discovered in 1993, had been imported from the United Kingdom.) 
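The country-level counts are consistent with the worldwide total cited earlier; a quick tally (the figures are the report's, the code is just arithmetic):

```python
# Tallies the reported BSE cases listed above; counts are from the report.
cases = {
    "United Kingdom": 184_045,
    "Rest of Europe": 5_107,
    "Canada": 4,
    "United States": 1,
    "Japan": 14,
    "Falkland Islands": 1,
}
total = sum(cases.values())
uk_share = cases["United Kingdom"] / total
print(total)               # 189,172 -> "about 189,000 animals worldwide"
print(round(uk_share, 3))  # about 0.973: nearly all cases were in the U.K.
```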
A Canadian government investigation concluded that the infected cow discovered in 2003 most likely contracted the disease by consuming feed containing BSE-contaminated ruminant material, probably before Canada imposed its feed ban in 1997. Canadian authorities believe that BSE entered the feed chain through slaughtered and rendered cattle imported from the United Kingdom. In December 2003, an animal infected with BSE was discovered in the United States. According to U.S. authorities, that animal—a dairy cow in Washington State—had been part of a herd of 81 cattle imported from Canada in September 2001. Appendix III describes FDA’s and USDA’s actions in response to the 2003 discoveries. In January 2005, Canada discovered two more cases of BSE. Following the discovery of the infected cow in the United States, U.S. beef exports dropped precipitously. The United States is currently engaged in discussions with its major trade partners to reestablish beef exports. In October 2004, Japan, previously the largest importer of U.S. beef, agreed in principle to resume imports of certain beef products from cattle slaughtered at 20 months or younger; as of February 11, 2005, the two countries were working out the details of this agreement. To detect potentially prohibited material in feed, FDA uses a test called “feed microscopy,” which is a visual examination of a sample under a microscope for the presence of animal tissue, such as hair and bone particles. According to FDA officials, when performed by an experienced analyst, the species can sometimes be identified. FDA is evaluating a more sensitive test called “polymerase chain reaction” (PCR), which detects animal DNA and can distinguish ruminant DNA. However, feed containing exempt items (e.g., milk and blood proteins) derived from ruminants would test positive for ruminant DNA using PCR. When inspectors find violations of the feed-ban rule, FDA can issue warning letters, and firms may conduct voluntary feed recalls. 
FDA has the authority to take immediate enforcement action, including seeking a court order to seize feed products that violate the feed ban or obtaining a court-ordered injunction ordering a firm to cease operations. Of the 38 states we surveyed, 37 told us they have authority to take action for violations of the feed ban. FDA directs its districts to issue warning letters within 30 workdays—approximately 45 calendar days after the inspection. Warning letters give firms the opportunity to voluntarily take corrective action before FDA initiates enforcement actions. Under the risk-based priority inspection system that FDA adopted in 2002, FDA and states have focused inspection resources on the following types of firms, which FDA has designated as high-risk for potentially exposing cattle to BSE:

renderers that accept dead ruminant animals and/or the waste materials from beef slaughter facilities;
feed mills that use prohibited material, which can include FDA-licensed mills that handle certain new animal drugs for use in animal feeds and nonlicensed mills that do not handle such animal drugs; and
protein blenders that use prohibited material.

Other firms subject to the feed ban include the following:

firms that manufacture only pet food;
firms that transport or distribute animal feed;
firms that salvage animal feed or pet food; and
other firms that handle animal feed, including retailers, grocery warehouses, and specialty food companies.

In addition to inspections of high-risk firms, FDA asks states to perform a number of inspections at the lower risk firms under their contracts or agreements with FDA. FDA also performs inspections of some lower risk firms. Table 1 shows the number of firms inspected during fiscal year 2004. 
Our 2002 report found the following:

FDA was not acting promptly to compel firms to keep prohibited materials out of cattle feed and to label animal feed that cannot be fed to cattle;
FDA’s data on feed inspections were so severely flawed that FDA did not know the full extent of industry compliance;
FDA had no clear enforcement strategy for firms that do not obey the feed ban and did not know what enforcement actions states had taken; and
FDA had been using inaccurate, incomplete, and unreliable data to track and oversee feed-ban compliance.

A 2001 study by the Harvard Center for Risk Analysis noted that the greatest risk of BSE exposure to cattle in the United States is through mishandling, mislabeling, or contaminating cattle feed. The study developed a simulation model for predicting the number of infected animals that would result from the introduction of BSE into the United States. Using this model, the Harvard study concluded that, if 10 cattle infected with BSE were imported into the United States, only three new cases of BSE would likely occur, on average, and that BSE is virtually certain to be eliminated from the United States within 20 years following its introduction. According to the study, any new cases of BSE would come primarily from industry’s failure to comply with the feed ban. A subsequent 2003 Harvard reassessment—following the discovery of the BSE-infected cow in Canada that year—arrived at a similar conclusion. Since our January 2002 report, FDA has changed the way it collects, tracks, and reports inspection data. In April 2002, FDA implemented a uniform inspection form for federal and state inspectors to document inspection results. Although FDA had an inspection form earlier, inspectors were not always completing the required information, and several states did not use FDA’s form. FDA has also issued feed-ban inspection guidance and appointed BSE coordinators in each of its district offices to review inspection forms for completeness. 
The district BSE coordinators told us that FDA has trained inspectors on using the inspection form and carrying out inspections. Although most states reported that this training was sufficient, a few told us that they had not received training since the late 1990s or were not able to attend training because of state budget constraints. However, in commenting on a draft of the report, FDA officials said that the agency always offers to provide training to states, when requested. Regarding the data deficiencies we reported in 2002, FDA implemented a newly designed feed-ban database and data entry procedures in its Field Accomplishment and Compliance Tracking System (FACTS) in April 2002. According to our analysis, this new approach and data system are designed to more reliably track feed-ban inspection results. As a result, FDA has a better management tool for overseeing compliance with the feed-ban rule and a data system that better conforms to standard database management practices. Specifically, FDA’s new approach makes the following improvements:

All firms have unique identifiers. Inspection records in FDA’s data system—including those that were previously missing unique identifiers—now have them, according to our data reliability analysis. Before the new approach, about 45 percent of FDA’s feed inspection records lacked information to identify individual firms. As a result, the earlier data could not be used to reliably determine the number of firms inspected, compliance trends over time, or the inspection history of an individual firm. These problems should not occur with FDA’s new system.

Information is substantially complete and accurate. FDA has corrected information problems we had identified in our 2002 report, according to our data reliability analysis of the inspections conducted since April 15, 2002. The new FACTS database contains edit checks to detect any incomplete or inaccurate data. 
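An "edit check" of this kind is, at bottom, a pass over the records that flags missing or malformed fields. A hypothetical sketch of the idea (the field names are invented; the report does not describe FACTS' actual schema):

```python
# Hypothetical edit check in the spirit of the FACTS improvements described
# above: flag inspection records that lack a unique firm identifier or an
# inspection date. Field names are invented for the example.
records = [
    {"firm_id": "TX-0041", "inspected": "2003-05-12"},
    {"firm_id": "",        "inspected": "2003-06-02"},   # missing identifier
    {"firm_id": "OK-0179", "inspected": None},           # missing date
]

def incomplete(rec):
    """True if any required field is missing or empty."""
    return not rec.get("firm_id") or not rec.get("inspected")

flagged = [r for r in records if incomplete(r)]
print(len(flagged), "of", len(records), "records flagged for review")
# prints "2 of 3 records flagged for review"
```

In a real system such checks would typically run at data entry, rejecting or routing incomplete records before they reach the database.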
Furthermore, FDA’s current feed-ban inspection guidance directs district BSE coordinators or their designees to review BSE inspection forms for completeness and accuracy. Previously, headquarters staff had entered the data received from district offices and did not have sufficient knowledge to detect irregularities in the data they were entering. In addition, states that have contracts or agreements with FDA are now using the same inspection forms as FDA. Previously, several states used state-developed forms, which did not always provide comparable information. Data are more timely. Since April 15, 2002, about 95 percent of inspections with serious violations have been entered into the FACTS database within 45 days of the inspection date, according to our analysis. This rate of entry is a significant improvement over the timeliness we reported in 2002. At that time, we found that some inspections were entered into FDA’s database 2 or more years after the date of inspection. For such inspections, FDA could not accurately report on firms’ compliance with the feed ban and could not clarify inconsistent or conflicting information or obtain missing information—situations that FDA’s new approach should help avoid. As a result of these improvements, FDA is able to present more reliable feed-ban inspection information on its Web site for the approximately 10,000 firms inspected since April 15, 2002, or about two-thirds of the approximately 14,800 firms inspected since 1997. Appendix II provides a detailed description of actions FDA has taken on the recommendations in our 2002 report. While FDA has made many improvements to its oversight and enforcement of the feed ban in response to our 2002 report recommendations, we found a number of oversight weaknesses that limit the effectiveness of the ban and could place U.S. cattle at risk for BSE.
Specifically, we found that FDA does not: have a uniform procedure to identify all firms subject to the feed ban; require firms to notify FDA if they process with prohibited material; routinely use tests to verify compliance with the feed ban; alert USDA or states when cattle may have been fed feed containing prohibited material; or adequately oversee the procedures for cleaning vehicles that haul cattle feed. Furthermore, we found that cautionary statements are not required on feed or feed ingredients intended for export that contain prohibited materials. In addition, FDA has not been reporting BSE inspection results to Congress and the public in a full and complete context. When the feed ban took effect in 1997, FDA first focused on identifying as many firms as possible that were subject to the ban. As of September 30, 2004, FDA officials had identified approximately 14,800 firms that are subject to the feed ban (see table 2). That is about 4,200 more firms than the 10,576 firms FDA had identified approximately 3 years earlier. FDA officials acknowledge that the agency has not identified all firms subject to the feed-ban rule. FDA has identified firms by reviewing the following sources: its list of firms that manufacture feed containing certain new animal drugs (FDA knew about these firms because it requires them to be licensed and because it has certain regulatory responsibility over them); a list of the firms or individuals that USDA has identified as violating USDA’s and FDA’s requirements with respect to chemical and drug residues in animals slaughtered for human consumption; lists of firms that states identified and provide to FDA during meetings to set up annual inspection plans (for example, 27 of the 38 states we surveyed register renderers, 28 register protein blenders, and 34 register feed mills that FDA has not licensed); and membership lists of industry associations, such as the National Renderers Association.
In addition, FDA officials told us that FDA districts have used multiple approaches, including looking through telephone books, to identify the names of additional firms. However, FDA has not developed a systematic approach for identifying additional firms subject to the feed ban. For example, FDA does not have an approach for identifying additional nonlicensed feed mills in states that do not provide that information. FDA also acknowledged that it has identified only a small percentage of the thousands of transportation firms that may haul cattle feed. Moreover, in commenting on a draft of this report, FDA told us that there are an estimated 1 million businesses (e.g., dairy farms, feedlots, and other facilities) that feed cattle and other animals. FDA also told us that it does not consider farms that mix their own feed or that feed cattle as well as other animals to be low risk. However, FDA does not have a strategy for ensuring that this industry sector is in compliance with the feed-ban rule. We observed one approach for expanding the number of firms subject to the feed ban: some FDA and state inspectors we accompanied on firm inspections wrote down the names of the firm’s suppliers and customers during the inspection and checked these names against FDA’s inventory of firms to help identify additional firms. According to officials in one district where we observed this practice, they inspect these additional firms as resources allow. However, FDA does not have guidance for inspectors to do this routinely, and we observed other inspectors who did not record the names of firms’ suppliers and customers. The approach we observed is one that could largely be applied with existing resources. Congress provided FDA with an additional $8.3 million in the fiscal year 2005 budget, which FDA officials told us would be used, in part, to fund states’ efforts to identify and inspect additional firms.
Under FDA’s risk-based inspection system, FDA’s goal is to annually inspect all renderers, feed mills, and protein blenders that process with prohibited material—about 570 firms—and to inspect a number of other firms that FDA considers lower risk. The number of other firms varies according to the inspection resources available. As previously stated, in total, FDA and states inspected 6,006 firms in fiscal year 2004. However, once FDA has inspected a firm and determined that it does not process with prohibited materials, FDA may not reinspect that firm for many years. In the interim, FDA does not know whether the firm has changed operations and now processes prohibited materials because it does not require firms that do so to notify the agency. FDA and state agencies learn of a change in operations only if they inspect the firms. Without a requirement to notify FDA, these firms are not annually inspected to monitor for compliance with the feed ban, as are other high-risk firms. We found that 2,833, or about 19 percent, of the firms FDA has identified as subject to the feed-ban rule have not been reinspected in 5 or more years. These firms include 1,224 farms that fed ruminant animals; 846 farms that mixed their own feed; 377 feed mills; and 386 other types of firms, such as distributors and retailers. According to FDA officials, of these four types of firms that have not been reinspected, about 2,100, or two-thirds, are farms, which FDA believes are not likely to change their practices. However, feed mills, which account for about 400 of the firms, would be classified as high risk if they processed with prohibited material. FDA officials also believe that the number of firms processing with prohibited material is declining and that, in all likelihood, firms that have not been inspected for a number of years would not change their practices and start doing so.
As FDA pointed out, firms may decrease their use of prohibited material because of the requirement that they maintain records sufficient to track all receipt, processing, and distribution of that material. Nonetheless, some firms that did not use prohibited material when they were last inspected may begin to use that material in processing their feed. FDA officials told us that they have considered options for identifying firms that process feed with prohibited material, including requiring those firms to be licensed. The officials noted, however, that some firms may not comply with a notification requirement; thus, FDA would still not know about all high-risk firms, and it would incur the additional costs of overseeing the notification requirement. While FDA inspection procedures include guidance for reviewing firm documents and procedures, examining their invoices, and inspecting facilities and equipment, they do not include guidance on when samples should be taken and tested. For example, the feed-ban inspection guidance does not instruct inspectors to routinely sample cattle feed to verify firms’ claims that they do not use prohibited materials or exempt ingredients, or to ensure that firms’ cleanout and flushing procedures to prevent commingling are followed and are effective. We recognize that the usefulness of testing is limited at firms that use exempt items—cattle and other ruminant blood, milk proteins, poultry litter, and plate waste—as ingredients in cattle feed. FDA officials told us that they did not want to routinely test samples at firms during inspections because the tests would likely have many false positives as a result of the exemptions. Consequently, officials believed testing would not use resources wisely. However, in 9 of the 19 inspections we observed, inspectors could have used tests to verify feed-ban compliance because the firms claimed they did not use any animal-derived exempt items. 
Even in these instances, where tests would be beneficial, inspectors did not sample the feed. For instance, inspectors did not take samples to confirm the adequacy of cleanout procedures at firms that use nondedicated production facilities to manufacture cattle feed but do not use any exempt materials. FDA’s feed-ban inspection guidance allows inspectors to draw samples at their discretion, but FDA officials told us that inspectors rely on their judgment of whether the cleanout procedures appear to be adequate and rarely use testing to verify their assessment. FDA officials did not give us a clear reason why they would not advise testing in situations where tests would be useful to help confirm compliance. Some states have also done significant testing that FDA could use to verify compliance with the feed ban, but FDA does not make use of their test results, although that information could give it a more complete picture of feed-ban compliance. In response to our survey, 18 of the 38 states that have agreements with FDA to conduct feed-ban inspections told us they had collected and tested over 1,500 feed samples during 2003. For example, according to a North Carolina Department of Agriculture official, the state collected and tested 738 samples; and, according to a Kansas Department of Agriculture official, the state collected and tested 94 samples. In these states, if the tests found what appeared to be prohibited material, the states followed up with the firms to determine what ingredients they used. According to the officials, no contaminated cattle feed was found. In California, which collected and tested about 100 samples, officials found tests to be useful for demonstrating to cattle feed manufacturers the difficulties of cleaning equipment that has been used for prohibited material. FDA and state agency officials told us that most California feed firms have switched to using dedicated equipment for cattle feed.
Eleven of the 18 states share test results with FDA, but FDA does not use these results to verify industry compliance with the feed ban. In August 2003, FDA instructed its districts to begin testing finished feed and feed ingredients, such as bags of feed sold at retail stores and bulk feed sold to cattle feedlots. These tests were not taken in conjunction with feed-ban compliance inspections. FDA inspectors took 660 samples nationwide. The samples were submitted to FDA regional laboratories for analysis, where analysts used feed microscopy. Although in its instructions to districts for the collection effort FDA called the tests “a method to monitor for compliance with” the feed ban, FDA officials told us that the test results could not be the sole basis for enforcement action at individual firms because microscopic analysis cannot distinguish prohibited bone and tissue from exempted material; the agency would have to conduct an investigation to determine whether an enforcement action was warranted. Nonetheless, the officials also told us the testing gives FDA further assurance of industry’s compliance with the feed ban. Because FDA did not use an approach that allows it to generalize the results, the test results cannot be used as assurance of industry compliance. In fact, because FDA did not provide instructions on how to randomly select firms for sampling and how to take a random sample of feed at the firms, the results cannot even help confirm compliance by the stores, feedlots, and other firms where the samples were taken. In initiating this effort without a sampling plan, FDA wasted its already limited inspection resources. FDA has committed resources to collect and analyze 900 additional samples in fiscal year 2005. With the same resources, FDA could have developed a sample design that would have allowed it to generalize the test results to industry.
FDA provided us some information on test results for the 660 samples that were taken and analyzed. The data showed 145 potential violations, including 8 that FDA’s laboratories originally classified as serious. About one-third of the 145 samples with potential violations were of cattle feed. Several of those samples had evidence of mammalian matter. Without more information, we could not determine whether the cattle feed contained exempt items or prohibited material. As of February 2005, FDA was in the process of gathering the information we requested from its district offices on the results of its investigation of the 145 potential violations and what, if any, enforcement actions were taken based on the tests and follow-up investigations. We plan to provide our analysis of FDA’s collection, testing, and follow-up of these samples later this year. Animal feed and feed ingredients containing prohibited material (including material from rendered cattle) are not required to be labeled with the cautionary statement, “Do not feed to cattle or other ruminants,” when that material is intended for export. Shipping containers for such material, however, must be labeled that they are for export only; and, if prohibited material is put back into domestic commerce, the containers must be relabeled with the cautionary statement. Not placing the warning label on exported feed poses a potential risk to U.S. and foreign cattle and consumers from two perspectives. First, feed with prohibited materials could be intentionally or inadvertently redirected into feed for U.S. cattle if firms fail to add the cautionary label to the product that they had initially intended to export. Second, exported feed containing prohibited material could mistakenly be fed to cattle that are subsequently imported into the United States or whose meat and other products are imported into the United States. 
We observed one situation where a problem could occur because a cautionary statement was not on an exported product. One firm we visited processed fishmeal, which is normally considered a safe ingredient for cattle feed. However, this plant processed the fishmeal on the same equipment it used for prohibited materials. If it were sold domestically, the fishmeal would have to be labeled with the cautionary statement because it is potentially contaminated with prohibited materials. However, the product was shipped to overseas customers without the cautionary statement. Because the fishmeal was not labeled, and fishmeal would not be expected to contain prohibited material, customers could unwittingly mix the fishmeal with other ingredients for their cattle. The FDA inspector did not document in the inspection report which countries were sent the fishmeal. When we asked FDA officials about this situation, they were concerned only about whether feed intended for export was actually being diverted to domestic cattle, a situation that they believed was unlikely to occur because FDA rules prohibit it. However, according to the report by the international panel of experts on BSE convened by USDA, the United States has an obligation to act responsibly toward its global neighbors when exporting feed and feed ingredients. FDA officials told us that FDA cannot require the cautionary statement on feed intended for export without a change to the Federal Food, Drug, and Cosmetic Act. Under that act, animal feed intended for export only cannot be deemed to be adulterated or misbranded if it (1) meets the foreign purchaser’s specifications, (2) does not conflict with the laws of the country to which it is intended for export, (3) is labeled on the outside of the shipping package that it is intended for export, and (4) is not sold or offered for sale in domestic commerce.
When an FDA district office learns that ruminant animals may have been fed contaminated feed, the feed-ban inspection guidance directs the district office to oversee efforts to appropriately dispose of the contaminated feed and to ensure that the animals that had consumed this feed are not slaughtered for human food or other animal feed. The guidance also advises FDA to consider coordination with USDA and the affected states. While FDA districts have monitored voluntary recalls of feed that did not comply with the feed ban, they had not been alerting USDA or state departments of agriculture when they learned that such feed had been given to cattle and other ruminants—in some cases for an extensive period of time. FDA district and headquarters officials responsible for the feed-ban program were not aware that the guidance instructed FDA to alert USDA and states. In our observations at inspections and our review of inspection records, we found the following instances in which FDA did not alert USDA or state authorities or take further action. A producer of cattle, hogs, and goats had inadvertently fed salvaged pet food containing prohibited materials to goats, which are ruminants. We observed the mislabeled feed in a March 2004 inspection. The feed mill that manufactured and sold the feed had not labeled the salvaged pet food with the required cautionary statement “Do not feed to cattle or other ruminants.” Shortly after this discovery, the firm recalled the misbranded feed. In April 2004, a state feed inspector found out about the misfed animals from the feed mill, not from FDA, and alerted his state program managers. The state contacted FDA, and after determining that FDA did not intend to take action beyond issuing a warning letter, the state seized and destroyed the animals in May 2004 under state authority to prevent the meat from entering the food supply. FDA did not alert the state or USDA and did not issue the warning letter to the feed mill until June 2004. 
A feed mill had inadvertently contaminated cattle feed with prohibited material. The firm had made a mistake in designing and placing equipment in the manufacturing process, which allowed spilled feed containing prohibited material to become commingled with ingredients used to make cattle feed. We observed this problem during an April 2004 inspection. FDA issued a warning letter in June 2004 demanding that the firm correct the violations; the firm also conducted a voluntary recall of the feed in June. Because the mill operated with this flawed system for about 1 year before the discovery, potentially contaminated feed was marketed and sold for cattle feed for that period of time. FDA did not contact USDA or state authorities to alert them that cattle had consumed the feed. A feed mill did not clean mixing equipment and transportation vehicles used for processing and transporting feed containing prohibited and nonprohibited materials. The firm also failed to properly label feed containing prohibited materials with the required cautionary statement and did not maintain sufficient records for tracking the sale of cattle feed to its customers, as FDA requires. We identified these problems during our review of inspection reports. The inspection occurred in March 2003. The firm corrected the violations and recalled all cattle feed that had not yet been consumed in March 2003. FDA issued a warning letter to the firm in May 2003 and took no further action. When we discussed these findings with FDA headquarters officials, they told us they were not familiar with the guidance recommending this communication. As a result, FDA, USDA, and state authorities had not assessed the health risk to humans and the animals that may have ingested that feed and may not have taken sufficient action to prevent those cattle and other ruminants from entering the human food or animal feed supply. 
The FDA officials said they had not considered coordinating with USDA and state officials but that USDA and the states were notified of the recalls because the recalls are posted on the FDA Web site. However, we found that the posted recall notices do not include information on whether, or for how long, cattle or other ruminants had been given the contaminated feed. Furthermore, FDA officials asserted that no action was needed beyond a recall in these incidents because BSE has not been discovered in a cow born in the United States. According to the officials, the meat would not make people ill and the feed would not make cattle ill. Before this report was issued, these same FDA officials told us that in the future, FDA will alert USDA and states when cattle may have consumed prohibited feed. USDA officials told us that they were not aware of these three incidents. They said that, had they known, USDA would have tracked the animals and tested them for BSE when they were slaughtered. According to FDA’s feed-ban rule, transportation firms that haul prohibited material and use the vehicles to haul feed or feed ingredients for cattle must have and use procedures to prevent commingling or cross-contamination. The procedures must provide for cleaning out the vehicles or other adequate preventative measures. Research suggests that cattle can get BSE from ingesting even a small amount of infected material—an amount that could be introduced in feed that was transported in a poorly cleaned vehicle. As part of an inspection of transportation firms, inspectors review the adequacy of these procedures, but the inspection form does not prompt them to do so during inspections of other types of firms. The following two problems impede the effectiveness of FDA’s current procedures: FDA has not identified and does not inspect many transportation firms. 
According to FDA officials and transportation data, thousands of independent truckers, large and small trucking companies, and rail companies may carry cattle feed and feed ingredients. FDA officials told us that it would be virtually impossible to identify and inspect all of these firms, given its limited resources. However, FDA agrees that transportation compliance is important. In commenting on a draft of this report, the agency noted that it is planning to increase oversight of transportation firms based on FDA’s assessment of compliance and risk in this industry sector. Inspecting transportation firms at their home base would not ensure that the required procedures are being used and that the nearly 200,000 large trucks that haul animal feed would be clean at the time they picked up cattle feed, in part, because vehicles that carry prohibited material may also carry cattle feed and other loads in succession before returning to their home base. For example, at an inspection of one high-risk protein blender, we observed an FDA inspector talking with an independent trucker who had dropped off a load of cattle feed ingredients, was picking up prohibited materials at the protein blender, and was scheduled later to pick up a load of corn, which could be used in cattle feed. The trucker explained that if he saw anything in the truck between loads, he would climb in and sweep the material out with a broom; if he did not see anything, he did not sweep out the truck between loads. The trucker also said it would be extremely difficult to find washout facilities to clean the truck between loads while on the road. Consequently, we believe that it would be more effective to require FDA and state inspectors to review and document procedures that feed mills and other firms use to ensure that the vehicles they use to haul cattle feed and feed ingredients are free of prohibited material as part of their inspections at feed mills and other firms. 
During our observations of inspections, we found that some FDA and state inspectors were already doing so. However, our observations and analysis of inspection reports showed that the inspectors did not routinely do so and did not uniformly report on the adequacy of the firms’ procedures for preventing the introduction of prohibited material. We believe that inspectors were overlooking the adequacy of firms’ procedures to ensure the safe transport of cattle feed because the BSE inspection form does not have any questions to capture that information. Specifically, 82 of the 404 inspection reports we reviewed were for renderers, protein blenders, feed mills, and other firms that processed with prohibited material and handled cattle feed and feed ingredients. We found that inspectors had documented the required cleanout procedures for transportation equipment at only 11 of these 82 firms. Without requiring inspectors to uniformly review and document vehicle cleaning procedures, FDA has insufficient assurance that the vehicles are safe to carry cattle feed and feed ingredients. In January 2004, FDA’s Deputy Commissioner testified that FDA conducted, “at least annually, targeted BSE inspections of 100 percent of known renderers, protein blenders, and feed mills processing” with prohibited material. He testified that compliance by those firms was “estimated to be better than 99 percent.” Subsequently, some industry officials claimed that overall compliance with the feed ban is nearly 100 percent and used that figure to support their claim that the feed ban does not need to be strengthened. However, as noted earlier, those groups comprise about 570 firms—approximately 4 percent of the firms in FDA’s inventory. In addition, FDA periodically publishes compliance information on its Web site for all industry segments. This information has also been used to cite high industry compliance.
However, FDA and industry do not have a basis for citing a compliance rate for a segment of firms subject to the feed ban or industrywide because there are too many unknowns. Specifically, FDA does not know the status of compliance for firms that have never been inspected, have not been reinspected in 5 or more years, and may have started to process with prohibited materials since their last inspection. Furthermore, as we previously discussed, because FDA does not routinely sample feed to confirm compliance, inspection results are largely based on a review of paper documents and a visual inspection. All these concerns apply to the compliance information FDA reports to Congress and the public on its Web site. Additionally, our analysis of inspection reports disclosed that FDA was not including all serious violations in its calculation of the compliance rate because it reclassifies firms as “in compliance” once they correct violations, regardless of how long the problem may have existed. Finally, we found that FDA classified 42 firms as having less serious violations and counted them as “in compliance” with the feed ban. Inspectors reported that 18 of these firms failed to include a cautionary statement on feed containing prohibited materials. Although FDA’s feed-ban inspection guidance designates the lack of a cautionary statement as a serious violation, and the lack of such a statement should result in the feed being deemed misbranded under the Federal Food, Drug, and Cosmetic Act, FDA excluded the violations at these firms from its calculation of the compliance rate. Inspectors also reported that the remaining 24 firms had procedures for preventing commingling but did not have these procedures in writing. FDA’s guidance designates the lack of written procedures as a less serious violation, but we believe these violations should be classified as serious.
Without written procedures, FDA has no assurance that the firms consistently take the necessary steps to prevent commingling. FDA officials told us that the guidance is advisory and therefore gives the agency the discretion to reclassify the violations based on its review. Diligent FDA oversight and enforcement of the feed ban is essential, not only because of the potential threat to public health but also because of the economic impact on the cattle and beef industry; this impact was clearly demonstrated by the sharp drop in U.S. beef exports after one infected cow was discovered in 2003. The ongoing discussions and agreements to reopen beef export markets could be derailed if more cattle were discovered with BSE. FDA has taken positive steps since our 2002 report. Today FDA can say with greater confidence that it has more timely and reliable inspection data. Also, the risk-based system FDA has adopted to target inspection resources on high-risk firms will increase the likelihood that firms inspected annually will remain in compliance with the feed ban. FDA’s processes, however, still have considerable room for improvement. FDA does not have uniform procedures for identifying additional firms that are subject to the ban but have never been inspected or for learning about firms that change their practices and begin to handle prohibited material. Furthermore, because inspectors are not using tests optimally—to help confirm, when appropriate, that cattle feed, production equipment, and transportation vehicles are free of prohibited material—FDA is limiting its ability to assure that firms are in compliance with the feed ban and that cattle feed is safe. Additionally, FDA is not taking advantage of state test results to provide greater assurance that industry is adhering to the feed ban and is not using its own program for sampling finished feed and feed ingredients in a manner that will allow it to project test results. 
Moreover, the lack of a requirement for warning labels on feed and feed ingredients intended for export that contain prohibited material creates opportunities for having the material fed to domestic or foreign cattle, either intentionally or inadvertently. As the international group of BSE experts convened by USDA pointed out, the United States has an obligation to act responsibly toward its global neighbors when exporting feed and feed ingredients. Especially troubling was our discovery that FDA did not alert USDA and state authorities when it became aware that cattle had been given feed that contained prohibited material. FDA and its key partner, USDA, together provide critical firewalls that the federal government has in place to protect U.S. cattle and consumers. In addition, the lack of notification was contrary to FDA’s own guidance, and FDA’s inaction prevented USDA and states from being able to make an informed decision on how to respond to the discovery that cattle had consumed prohibited material. Given these weaknesses and the fact that FDA does not include all violations in its estimates, we believe FDA is overstating industry’s compliance with the animal feed ban and understating the potential risk of BSE for U.S. cattle in its reports to Congress and the American people. Despite the problems in FDA’s calculation, some in the feed industry claim that overall compliance with the feed ban is nearly 100 percent—a claim that FDA’s compliance information does not support. To further strengthen oversight and enforcement of the animal feed ban and better protect U.S. cattle and American consumers, we recommend that the Commissioner of FDA take the following nine actions:

1. Develop uniform procedures for identifying additional firms subject to the feed ban.
2. Require firms that process with prohibited material to notify FDA. If FDA believes it does not have the necessary statutory authority, it should seek that authority from Congress.
3. Develop guidance for inspectors to systematically use tests to verify the safety of cattle feed and to confirm the adequacy of firms’ procedures for ridding equipment and vehicles of prohibited material before they are used for processing or transporting cattle feed or feed ingredients.
4. Collect feed test results from states that sample feed to help verify compliance with the feed ban.
5. Develop a sample design for FDA’s inspectors to use for sampling finished feed and feed ingredients that will allow FDA to more accurately generalize about compliance with the feed ban from the test results.
6. Seek authority from Congress to require the cautionary statement on feed and feed ingredients that are intended for export and that contain prohibited material.
7. Ensure that USDA and states are alerted when inspectors discover that feed or feed ingredients with prohibited material may have been fed to cattle.
8. Modify the BSE inspection form to include questions inspectors can use to document whether firms that process or handle cattle feed or feed ingredients have procedures to ensure the cleanliness of vehicles they use to transport cattle feed and feed ingredients.
9. Ensure that inspection results are reported in a complete and accurate context.

We provided FDA with a draft of this report for review and comment. FDA stated that our report was thorough and that it recognized the enhancements FDA has put in place in its feed-ban program. However, FDA said the report did not identify material weaknesses to support our position that oversight weaknesses limit FDA’s program effectiveness and place U.S. cattle at risk of spreading BSE. FDA believes that its current risk-based inspection approach is adequate to protect U.S. cattle.
According to FDA, given the wide variety of firms subject to the feed ban and its resource limitations, it “is obligated to set priorities for inspecting a meaningful subpopulation of these regulated firms.” We recognize that FDA has made many improvements, including adopting a risk-based approach for inspections, that have substantially improved its oversight of the feed-ban rule. However, our report identifies significant problems in FDA’s oversight that continue to place cattle at risk for BSE. The importance of a strictly enforced feed ban is heightened now that BSE has been found in North American cattle. As Harvard and the international panel of experts pointed out, the feed ban is the most important firewall against the spread of BSE. Given the problems we identified and the significance of a well-enforced feed ban, it is important that FDA improve its feed-ban oversight and optimize its use of resources. In addition, FDA does not agree with our criticism of its compliance reporting. FDA believes that it provides the inspection results in a transparent, complete, and accurate context. FDA notes that the BSE inspection data posted on its Web site “allows the user to analyze the data, in a multitude of ways, to provide their own contextual reference.” Our concern is precisely that the data are being analyzed and interpreted in an erroneous context. Specifically, when FDA and industry used those data to assert a 99 percent compliance rate with the feed ban, they took that information out of context. While FDA’s calculation of compliance by a subset of regulated industries may in fact be quite high, FDA’s data are not sufficient to make that projection to all regulated industries. In addition, FDA does not know the status of compliance for firms that have never been inspected or have not been reinspected in years. Nor does it know if previously inspected firms have started using prohibited material.
Furthermore, because FDA reclassifies firms from “out-of-compliance” to “in-compliance” on its Web site when the firms correct violations, the information posted on that Web site does not tell the user when serious and/or long-standing violations have occurred. Lastly, inspection results are largely based on a review of paper documents and a visual inspection, with little or no feed testing. Given these data concerns and compliance unknowns, FDA’s data should not be used to project industry compliance; and, anytime those data are cited, they should be reported in a complete and accurate context. Regarding the nine recommendations we make in the report, FDA did not take issue with the need for five and generally disagreed with four. Although FDA noted implementation concerns, it did not take issue with the need for (1) developing uniform procedures for identifying firms subject to the feed ban, (2) collecting test results from the states that sample feed, (3) including a cautionary statement on feed and feed ingredients intended for export, (4) notifying USDA and states when feed or feed ingredients containing prohibited material may have been fed to cattle, and (5) modifying the inspection form to include questions to better oversee the cleanliness of vehicles used to transport cattle feed or feed ingredients. FDA disagreed with our recommendation that it require firms that process with prohibited material to notify the agency. FDA believes that it is already getting information on changes to firms’ practices from states and that requiring an additional notification process would be costly to implement. However, FDA acknowledged that it has generally not identified high-risk feed salvagers and farms that mix their own feed or those that feed cattle as well as other animals. The cost of the notification program will depend on the requirements FDA puts in place.
In developing the program, FDA could target the notification to firms that pose a potentially high risk for exposing cattle feed to prohibited material. We believe that FDA should know which firms are high risk and that industry self-reporting is a mechanism that would help the agency identify those firms and help it ensure compliance with the feed ban. FDA also disagreed with our recommendation to systematically use tests in conjunction with compliance inspections. While we recognize the limitations of current test methodologies, we believe that tests are useful. In fact, states and FDA are currently using these tests on feed. Our recommendation speaks to systematically using these tests where appropriate, to augment inspections, which are largely observation and paperwork reviews. We expanded the recommendation to recognize that FDA may validate other tests in the future. With respect to our recommendation that FDA develop a sample design for testing finished feed and feed ingredients, FDA disagreed with the need for a sample design that will allow it to more accurately generalize about compliance. FDA stated that tests alone cannot serve as a basis to generalize compliance. We agree that tests that indicate potential violations need to be confirmed, because of the limitations of the current tests. However, FDA is using the test results to identify potential problems, and it tested 660 samples in 2003/2004 and plans to test 900 samples this year. The point of our recommendation is that any testing activity of this magnitude should have a sampling plan. Finally, FDA believes that it already reports inspection results in a complete and accurate context, as we recommend. We disagree. As noted above, given the data concerns and compliance unknowns raised in this report, FDA’s data should not be used to project industry compliance. Anytime those data are cited, they should be reported in a complete and accurate context. 
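The disagreement over a sample design comes down to a statistical point: a compliance rate can be generalized, with a stated margin of error, only when the samples behind it come from a probability design. As a rough illustration of the arithmetic involved, the sketch below computes a normal-approximation confidence interval for a compliance proportion; the violation count is invented for the example, and only the 660-sample volume comes from the report:

```python
import math

# Illustrative only: normal-approximation confidence interval for a
# compliance proportion. The violation count (7) is hypothetical; the
# interval is meaningful only if the 660 samples were a probability sample.
def compliance_ci(violations, n, z=1.96):
    p = 1 - violations / n           # observed compliance rate
    se = math.sqrt(p * (1 - p) / n)  # standard error under random sampling
    return p - z * se, p + z * se

low, high = compliance_ci(violations=7, n=660)
print(f"{low:.3f}-{high:.3f}")  # 0.982-0.997
```

Absent a probability design behind the samples, no such interval can legitimately be computed, which is the substance of the recommendation that any testing activity of this magnitude should have a sampling plan.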
FDA also provided technical comments, which we have incorporated into this report, as appropriate. FDA’s written comments and our responses are in appendix VI. We also provided USDA with a draft of appendix III, which summarizes FDA’s and USDA’s actions in response to the 2003 discovery of BSE in North America, for review and comment. USDA had no comments on the draft appendix. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to interested congressional committees; the Secretary of Health and Human Services; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VII.

As discussed below, to assess the effectiveness of the Food and Drug Administration’s (FDA) actions to ensure industry compliance with the feed ban and protect U.S. cattle from bovine spongiform encephalopathy (BSE), we (1) analyzed 404 inspection reports for BSE inspections performed during fiscal years 2003 and 2004; (2) observed 19 inspections in 12 states that were conducted by either FDA or state inspectors; (3) assessed the reliability of FDA’s feed-ban inspection database; (4) interviewed officials at FDA headquarters and district offices, state agencies, and industry associations, as well as reviewed documents provided by these officials concerning oversight of the animal feed ban; and (5) surveyed state agency officials in 38 states.
To assess FDA’s oversight, we analyzed BSE inspection records to identify types of firms inspected; types of material processed (prohibited, nonprohibited, or both); oversight of transportation equipment; violations identified during inspections (if applicable); and final inspection classifications. We randomly selected 413 inspection reports from the universe of BSE feed inspections conducted during fiscal year 2003 and fiscal year 2004 (up to February 7, 2004). For each of the 18 FDA districts responsible for inspections in the 50 states, we randomly selected inspection reports from one state (most FDA district offices cover more than one state). We included all of the 314 high-risk firms that process prohibited materials for the 18 selected states. In addition, we randomly selected 12 other firms that process with prohibited materials; 68 firms that distribute prohibited materials; and 19 firms that do not process or distribute prohibited materials. We examined only 404 of the 413 inspection reports because 9 of the report files that we requested were still open-case files at the time of our review. To evaluate the inspection process, we accompanied inspectors on 19 BSE inspections of firms in 12 states covered by the feed ban. The sites were selected to cover a range of firm types and sizes in various geographic locations with concentrations of cattle feeding operations, including dairy cattle. The 19 inspections included renderers, protein blenders, feed mills, farms with ruminants and other animals, and pet food manufacturers. Seven of these firms processed or handled only prohibited material, and the remaining 12 processed or handled both types of material. On 12 of the inspections we accompanied FDA inspectors, and on 7 we accompanied state inspectors.
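The report-selection approach described above, a census of the high-risk firms combined with random draws from the remaining categories, can be sketched as follows. The record layout and the sizes of the non-high-risk pools are simplified assumptions for illustration; only the quota sizes (314, 12, 68, and 19) come from the methodology described here:

```python
import random

# A simplified sketch of the selection approach: keep every high-risk firm
# (a census) and draw simple random samples from each remaining stratum.
# Record layout and pool sizes are assumptions for illustration only.
def select_reports(frame, quotas, seed=7):
    rng = random.Random(seed)
    selected = [r for r in frame if r["stratum"] == "high_risk"]  # census
    for stratum, n in quotas.items():
        pool = [r for r in frame if r["stratum"] == stratum]
        selected += rng.sample(pool, n)  # simple random sample, no replacement
    return selected

# Hypothetical frame; only the quota sizes match the report's counts.
frame = (
    [{"id": f"HR{i}", "stratum": "high_risk"} for i in range(314)]
    + [{"id": f"P{i}", "stratum": "other_prohibited"} for i in range(40)]
    + [{"id": f"D{i}", "stratum": "distributor"} for i in range(150)]
    + [{"id": f"N{i}", "stratum": "nonprohibited"} for i in range(60)]
)
quotas = {"other_prohibited": 12, "distributor": 68, "nonprohibited": 19}
selection = select_reports(frame, quotas)
print(len(selection))  # 314 + 12 + 68 + 19 = 413
```

Taking a census of the highest-risk stratum while sampling the rest concentrates review effort where violations matter most, at the cost of needing stratum weights before any overall rate can be generalized.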
To assess the reliability of the data FDA uses when reporting industry compliance, we analyzed the agency’s database for inspections conducted on or after April 15, 2002, when FDA implemented its newly designed feed-ban database. Specifically, we analyzed the 9,230 inspection records in this database, as of February 7, 2004. To complete the reliability assessment, we (1) reviewed existing documentation related to the data sources; (2) electronically tested the data to identify obvious problems with completeness, accuracy, or timeliness of data entry; and (3) interviewed knowledgeable agency officials about the data. We determined that the data were sufficiently reliable for purposes of this report. We interviewed officials or reviewed documents at FDA headquarters and at the 18 FDA district offices that are responsible for overseeing and enforcing the feed ban in the 50 states, maintaining the inspection database system, and proposing and analyzing regulatory decisions. In the 18 district offices, we used a structured interview to uniformly gather information on various issues, such as methods used to identify the universe of firms subject to the feed ban; the process for selecting firms for inspection; training programs for FDA and state inspectors; feed-ban inspection guidance and procedures; the processes for reviewing inspection results, classifying findings, and determining what, if any, enforcement action should be taken; and oversight of contracts and agreements with state agencies that perform BSE inspections. We received information and documentation on FDA’s oversight and enforcement of the feed ban from the following FDA units: the Center for Veterinary Medicine’s Office of Management and Office of Surveillance and Compliance; the Office of Regulatory Affairs’ Office of Regional Operations; the Center for Food Safety and Applied Nutrition’s Office of the Director; and the Office of the Chief Counsel.
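The electronic tests for completeness, accuracy, and timeliness described above are, in spirit, edit checks run over each inspection record. A minimal sketch of the idea follows; the field names, required-field list, and date rule are assumptions rather than FDA’s actual schema (NAI, VAI, and OAI are FDA’s standard inspection classifications):

```python
from datetime import date

# Hypothetical edit checks of the kind used in a data reliability assessment.
# Field names and rules are assumptions, not FDA's actual database schema.
REQUIRED = ("firm_id", "inspection_date", "classification")
VALID_CLASSES = {"NAI", "VAI", "OAI"}  # FDA's standard inspection outcomes

def check_record(rec, cutoff=date(2004, 2, 7)):
    """Return a list of problems found in one inspection record."""
    problems = []
    for field in REQUIRED:
        if not rec.get(field):                     # completeness check
            problems.append(f"missing {field}")
    cls = rec.get("classification")
    if cls and cls not in VALID_CLASSES:           # accuracy (validity) check
        problems.append("invalid classification")
    d = rec.get("inspection_date")
    if d and d > cutoff:                           # timeliness/plausibility check
        problems.append("date after data cutoff")
    return problems

rec = {"firm_id": "TX-0042", "inspection_date": date(2003, 6, 1),
       "classification": "XXX"}
print(check_record(rec))  # ['invalid classification']
```

Checks of this kind flag records for follow-up rather than prove the data correct, which is why the assessment also relied on documentation review and interviews with knowledgeable officials.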
We reviewed various FDA program documents, including the BSE/Ruminant Feed Ban Inspections Compliance Program Guidance; BSE feed inspection form; advance notices of proposed rulemakings to strengthen the feed ban, including public comments; and the reports on the feed samples collected and tested. We also interviewed state agency officials and reviewed documents from the California Department of Food and Agriculture; the Departments of Agriculture of Georgia, Illinois, Kansas, Missouri, North Carolina, and Pennsylvania; and the Texas Feed and Fertilizer Control Service. Lastly, we interviewed officials and reviewed documents from the American Feed Industry Association, the Association of American Feed Control Officials, the National Renderers Association, the Association of Analytical Communities, and the Harvard Center for Risk Analysis. To understand the role that states play in the feed inspection program, we surveyed state officials in the 38 states that have contracts or other agreements with FDA to perform feed-ban compliance inspections and report the inspection results to FDA. The survey included questions about the states’ inspection programs, testing of animal feed ingredients, and FDA’s training and guidance for feed-ban inspections and enforcement. Before implementing our survey, we pretested the questionnaire with state agriculture officials in five states. During these pretests, we interviewed the respondents to ensure that (1) questions were clear and unambiguous, (2) terms were precise, and (3) the survey did not place an undue burden on the staff completing it. We received completed questionnaires from all 38 states surveyed. The state information presented in this report is based on information obtained from this survey and interviews with state officials. 
We performed our work from October 2003 through January 2005, in accordance with generally accepted government auditing standards, which included an assessment of data reliability and internal controls.

Develop a strategy, working with the states, to ensure that the information FDA needs to oversee compliance is collected and that all firms subject to the feed ban are identified and inspected in a timely manner. FDA (1) developed a new BSE inspection form that provides guidance to FDA and state feed-ban inspectors on how to uniformly and completely document firms’ operations and assess compliance, (2) designated a BSE program coordinator in each district office who is responsible for ensuring that inspection reports are accurate and completed in a timely manner, and (3) provided training for FDA and state inspectors on conducting and documenting BSE inspections. However, FDA has not developed a uniform strategy to identify all firms subject to the feed ban or to ensure that all firms are inspected in a timely manner.

Ensure that, as contractors modify the inspection database, they incorporate commonly accepted data management and verification procedures so that the inspection data can be useful as a management and reporting tool. FDA implemented a newly designed BSE feed-ban database and data-entry procedures designed to more reliably track feed-ban inspection results. The new database, a module of FDA’s Field Accomplishment and Compliance Tracking System, contains commonly recognized database management and verification procedures, such as unique identifiers for each inspected firm and edit checks to help ensure that the data entered are complete and valid.

Develop an enforcement strategy with criteria for actions to address firms that violate the ban and time frames for reinspections to confirm that firms have taken appropriate corrective actions.
FDA issued feed-ban inspection guidance to FDA and state inspectors and program managers for determining compliance with the animal feed ban and to help ensure that BSE feed inspections and enforcement actions are conducted in a uniform manner and are of high quality.

Track enforcement actions taken by states. FDA does not plan to track enforcement actions taken by states, as we had recommended. Officials told us that FDA and state enforcement actions would not be comparable because state standards for initiating an action may not be equivalent to FDA standards. As a result, FDA believed that the information would be misleading if presented collectively.

In order to strengthen inspections of imported products that could pose a risk of BSE, we recommended that the Secretaries of Health and Human Services and of Agriculture, in consultation with the Commissioner of Customs: Develop a coordinated strategy, including identifying resource needs. FDA hired more than 655 additional food security personnel and increased its port-of-entry food examinations, including examinations of imported animal feed that could pose a risk of BSE. As part of the prior notice requirement of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, FDA and U.S. Customs and Border Protection announced that they have integrated their information systems, which allows FDA staff to more efficiently evaluate and process each import entry. FDA and U.S. Customs and Border Protection signed a memorandum of understanding under which FDA commissions Customs officers in ports and other locations to conduct, on FDA’s behalf, investigations and examinations of imported food, including animal feed. Currently, FDA has commissioned over 8,000 Customs officers.
To further help consumers identify foods and other products that may contain central nervous system tissue, we recommended that the Secretary of Health and Human Services: Consider whether the products it regulates, including food, cosmetics, and over-the-counter drugs, should be labeled to advise consumers that the products may contain central nervous system tissue. FDA does not intend to label these products, as we recommended. Officials told us that the decision to label products has to be based on science and that, if the presence of central nervous system tissue poses a human health risk, the tissue should not be allowed as an ingredient in the product. FDA issued an interim final rule in July 2004 that prohibits the use of certain cattle material, including central nervous system tissue from nonambulatory cattle, in human food, including dietary supplements, and in cosmetics.

The Canadian government reported that a single cow from Alberta had tested positive for BSE. FDA began working with USDA, other federal agencies, and Canadian officials to gather additional information about this cow, including its location, previous ownership, and records about its feed. USDA temporarily halted imports of live ruminant animals and most ruminant products from Canada. FDA learned from the Canadian government that rendered material from the BSE-infected cow may have been used to manufacture pet food, some of which was shipped to the United States. FDA notified the U.S. pet food firm that received the feed ingredients, and the firm requested that customers who may have purchased the suspect product hold it for pickup by the distributor. USDA announced it would allow certain ruminant products from Canada to enter the United States under permit. These include boneless beef from cattle under 30 months of age and boneless veal from calves that were 36 weeks of age or younger.
USDA announced a proposed rule, published in November 2003, to allow the importation of certain low-risk, live ruminant animals and ruminant products from Canada. USDA released the results of the second Harvard BSE risk assessment. The study found that even if infected animals or ruminant feed material entered the United States from Canada, the risk of BSE spreading within the U.S. herd is low. USDA collected samples from a nonambulatory cow and diverted all potentially high-risk material (central nervous system tissue) from the human food supply and into the animal rendering process. USDA laboratory test results were “preliminary positive” for BSE. On December 23, 2003, USDA’s Animal and Plant Health Inspection Service (APHIS) notified FDA’s Office of Crisis Management of a “presumptive positive” finding of BSE in the Washington State cow. USDA announced a “presumptive positive” finding of BSE. USDA sent a sample from the infected animal to a world reference laboratory in the United Kingdom for final confirmatory testing. FDA activated its Emergency Operations Center and began to implement its BSE Emergency Response Plan. FDA headquarters and district office staff participated in a teleconference with APHIS and Washington State officials to ensure a coordinated response to the incident. APHIS quarantined the cattle herd where the BSE-infected cow last resided and began an epidemiological investigation. USDA’s Food Safety and Inspection Service (FSIS) initiated a recall of over 10,000 pounds of meat from the group of 20 cattle slaughtered on December 9. FDA dispatched several teams of investigators to find any FDA-regulated products that were or could have been made from the infected cow, including animal feed. The world reference laboratory in the United Kingdom confirmed USDA’s BSE diagnosis.
FDA announced that an estimated 2,000 tons of feed that could contain potentially infectious material from the BSE-infected cow was found before any of it was used to manufacture animal feed. According to FDA, the feed was disposed of in a landfill in accordance with federal, state, and local regulations. USDA’s investigation with Canadian officials indicated that the BSE-infected cow was likely imported from Canada in 2001 and was about 6½ years old. USDA identified 73 other cattle that were imported from Canada in the same shipment with the BSE-infected cow. USDA determined that the recalled meat products had been distributed to Alaska, California, Guam, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington. USDA appointed an international team of scientific experts to review its BSE investigation and make recommendations following the completion of the epidemiological investigation. USDA’s and Canada’s chief veterinary officers held a joint press conference to announce that DNA evidence indicated, with a high degree of certainty, that the BSE-positive cow found in Washington State originated from a dairy farm in Alberta, Canada. FSIS issued an interim final rule, effective January 12, 2004, that, among other things, prohibited the use of brain, skull, spinal cord, and other specified tissues of cattle 30 months or older for human food and required that all nonambulatory animals presented for slaughter be condemned. FSIS also gave notice that it would no longer pass and give a mark of inspection to carcasses and cattle parts selected by APHIS until the sample was determined to be negative. FDA announced that it would be issuing interim final rules to strengthen existing BSE firewalls, including banning a wide range of cattle material from human food, dietary supplements, and cosmetics, and strengthening the 1997 feed ban through an extended list of banned feeding and manufacturing practices. USDA completed its investigation of the Washington State BSE case.
Following the international scientific review panel’s recommendation, USDA began an enhanced BSE surveillance program targeting cattle from the highest-risk populations, as well as a random sampling of animals from the aged cattle population. FDA requested information and public comment on additional measures being considered for strengthening the 1997 feed ban. FDA requested this information because the international scientific review panel convened by the Secretary of Agriculture recommended broader measures than FDA had previously announced it would be issuing as part of an interim final rule, such as banning all mammalian and poultry protein from ruminant feed. USDA asked for public comment on additional preventive actions being considered concerning BSE, such as implementation of a national animal identification program. FDA issued an interim final rule that prohibits certain cattle material from human food, dietary supplements, and cosmetics. On September 30, 2004, FDA announced the availability of the industry guidance “Use of Material from BSE-Positive Cattle in Animal Feed.”

The U.S. General Accounting Office (GAO), an agency of the U.S. Congress, is studying FDA’s feed inspection program at the request of the Senate Committee on Agriculture, Nutrition, and Forestry. As part of our study, we are surveying states that have contracts with FDA to conduct BSE inspections or that have agreements or arrangements with FDA to share data from their state inspections. Your cooperation is critical to our ability to provide current and complete information to the Congress. To ensure that your data are entered accurately, please use blue or black ink to enter your answers. Return the original copy of the completed questionnaire to us; we suggest you keep a copy for your records. You will be notified when the report is issued, and you will be able to request a free copy of the report at that time.
John Smith
GAO Atlanta Field Office
2635 Century Parkway, Suite 700
Atlanta, GA 30345

Please return your completed questionnaire to us by June 25, 2004. Several questions ask for data from 2003 and for projected data for 2004. In answering these questions, please use the year that your state uses in planning, scheduling, monitoring, and reporting the BSE inspections in your state.

1. What is the year you used for planning, scheduling, monitoring, and reporting to FDA the data from BSE inspections done in your state in 2003? (Please check one.) (1) Federal Fiscal Year: 47.4%; (2) Calendar Year; (3) Other (please provide months and days).

2. What is the year you are using for planning, scheduling, monitoring, and reporting to FDA the data from BSE inspections being done in your state in 2004? (Please check one.) (1) Federal Fiscal Year; (2) Calendar Year; (3) Other (please provide months and days).

3. Does your state have laws and regulations covering the adulteration and misbranding of animal feed? (Please check one.)

4. Does your state have laws and regulations specifically covering labeling of animal feed for BSE? (Please check one.) (1) Yes (skip to Question 6): 36.8%; (2) No.

5. … regulation that would require labeling of animal feed for BSE?

6. Does your state have laws and regulations specifically covering BSE animal feed inspections? (Please check one. Please provide citation.)

7. Has your state referenced any of the following in your state laws and regulations? (Please check all that apply.) (1) Referenced all of Association of American Feed Control Officials (AAFCO) Model Regulation .12, Certain Mammalian Proteins Prohibited in Ruminant Feed: 14.3%; (2) Referenced part of AAFCO Model Regulation .12: 0.0%; (3) Referenced all of 21 CFR § 589.2000, Animal Proteins Prohibited in Ruminant Feed: 85.7%; … definitions used in either of the above.

8. Does your state plan to reference any of the following in future state laws or regulations? (Please check all that apply.) (1) Do not plan to reference any of the following: 36.7%; (2) Plan to reference all of AAFCO Model Regulation .12: 34.5%; … (4) Plan to reference all of 21 CFR § 589.2000: 24.1%; (5) Plan to reference part of 21 CFR § 589.2000: 6.9%; (6) Plan to reference definitions used in either of the above: 24.1%.

9. Do your state laws or regulations give you authority to inspect transportation firms for compliance with the BSE feed ban? (Please check one.) (1) Yes; (2) No: 41.7%.

10. Do your state laws or regulations require firms that handle both prohibited and nonprohibited materials to use dedicated equipment? (Please check one.) (1) Yes; (2) No (skip to Question 12): 100.0%.

11. Does your state’s requirement for dedicated equipment for prohibited and nonprohibited materials apply to transportation firms? (Please check one.) (1) Yes; (2) No.

12. In developing your BSE Inspection … 35 of the respondents (92.1%) provided an answer.

13. During the year, how often does your state discuss your BSE Inspection Workplan with FDA staff? (Please check one.) (1) Weekly or more frequently; (2) Monthly: 10.5%; (3) Quarterly: 23.7%; (4) Annually; (5) As needed, based on changes to the feed ban, regulations, or guidance: 39.5%; (6) Other (please specify): 7.9%.

14. What type of arrangement(s) did your state have with FDA during 2003 and how many inspections were done under each type of arrangement? (Remember to use your state’s reporting year. Please check all that apply and fill in the number of inspections where checked.) (1) A contract with FDA to perform BSE inspections and report results to FDA; … (4) Other BSE inspections performed by state inspectors with results reported to FDA: 21.0%.

15. Did your state perform any BSE inspections during 2003 that are not reported in your answer to Question 14? (Please check one.) (1) Yes: 31.6% (total of approximately 700 inspections; N = 10); (2) No.

16. What type of arrangement(s) does your state have with FDA for 2004 and what is the projected number of inspections that will be completed under each type of arrangement? (Remember to use your state’s reporting year. Please check all that apply and fill in the number of inspections where checked.) (1) A contract with FDA to perform BSE inspections and report results to FDA: 94.6%.

17. Did your state perform BSE inspections during 2004, or do you expect to perform inspections, that are not reported in your answer to Question 16? (Please check one.) (1) Yes: 32.4% (total of approximately 900 inspections; N = 9); (2) No: 67.6%.

18. For each of the firm types listed below, please indicate whether or not (a) your state is authorized to inspect that type of firm, (b) your state conducts routine BSE inspections of that firm type, (c) your state requires registration or licensing of that firm type, and (d) to the best of your knowledge, your state has identified all possible firms of that type. Firm types: renderers; FDA-licensed feed mills for commercial feed; non-FDA-licensed feed mills for commercial feed; protein blenders; pet food manufacturers; farmers/ranchers who raise ruminants and nonruminant animals; farmers/ranchers who raise only ruminants; on-farm mixers (on-farm use only); and animal food or pet food salvagers. Four states reported that there are no renderers in their state; two states reported no protein blenders; one state reported no pet food manufacturers; and three states reported no animal food or pet food salvagers.

19. What documentation does your state complete for each BSE inspection it performs under your state’s authority? (Please check all that apply.) (1) FDA’s BSE Checklist; (2) BSE Checklist developed …: 11.1%; (3) Form FDA 481 – Computer …: 33.3%; (4) Form FDA 483 – Inspectional Observations: 27.8%; (5) Other inspection forms …; (6) Other (please specify): 30.6%.

20. What documentation do you submit to FDA as part of BSE inspections that are done under your state’s authority? (Please check all that apply.) (1) FDA’s BSE Checklist: 78.8%; (2) BSE Checklist developed …: 6.1%; (3) Form FDA 481 – Computer …: 27.3%; (4) Form FDA 483 – Inspectional Observations: 24.2%; (5) Other inspection forms …: 24.2%; (6) Other (please specify): 36.4%.

21. Under your state’s authority, does your state make compliance decisions associated with BSE inspections? (Please check one.) (1) Yes: 73.7%; (2) No (skip to Question 23): 26.3%.

22. Who in your state organization routinely makes the final inspection decision as to whether a firm is in compliance with your state’s regulations? (Please enter position titles; do not enter names.) N = 28; 27 of the respondents (96.4%) provided an answer.

23. Under your state’s authority, which, if any, of the following enforcement actions can you take against a firm not in compliance with your state’s laws or regulations? (Please check all that apply.) (1) Warning letter; (2) Stop sale of product: 89.5%; (3) Product seizure or confiscation: 86.8%; … (5) Recall of product: 55.3%; (6) Criminal or civil prosecution: 71.0%; (7) Other (please specify).

24. How frequently does your state report to FDA information about BSE enforcement actions taken under your state’s authority? (Please check one.) (1) Always (skip to Question 26): 67.6%; (2) Almost always; (3) Sometimes: 5.4%; (4) Occasionally: 2.7%; (5) Never.

25. Please describe (a) the conditions or circumstances, including type of inspection, and (b) the type of violations for BSE enforcement actions not usually reported to FDA. N = 12; 9 of the respondents (75.0%) provided an answer. Four states responded that minor technical violations would not be reported to FDA, and four states responded that violations found under state authority would not be reported to FDA.

26. For each of the firm types listed below, what is the level of compliance in your state with the BSE feed ban? (Please check one in each row.) Firm types: renderers; FDA-licensed feed mills for commercial feed; non-FDA-licensed feed mills for commercial feed; protein blenders; pet food manufacturers; farmers/ranchers who raise ruminants and nonruminant animals; farmers/ranchers who raise only ruminants; on-farm mixers (on-farm use only); and animal food or pet food salvagers. Four states reported that there are no renderers in their state; two states reported no protein blenders; one state reported no pet food manufacturers; and three states reported no animal food or pet food salvagers.

Section III: Testing of Animal Feed Ingredients

27. Do you take samples of animal feed to test for prohibited materials as part of BSE inspections that are done under your state’s authority? (Please check one.) (1) Yes; (2) No (skip to Question 33): 52.6%.

28. When did you start collecting animal feed samples to test for prohibited materials? (Please enter month and year.) Dates ranged from September 1997 to June 2004; 18 states reported dates.

29. How many samples did you collect and test in 2003 (as part of a BSE inspection done under your state’s authority)? (Please enter number.)

30. How many samples do you plan to collect and test in 2004? (Please enter number.)

31. What type(s) of test(s) did you use? (Please check all that apply.) (1) Feed microscopy; (2) PCR (polymerase chain reaction): 22.2%; (3) ELISA (enzyme-linked immunosorbent assay): 33.3%; (4) Other (please specify).

32. Do you routinely share the results of these tests with FDA? (Please check one.) (1) Yes; (2) No: 38.9%.

33. Does FDA direct your state to take samples of animal feed to test for prohibited materials as part of BSE inspections that are done for FDA? (Please check one.) (1) Yes; (2) No (skip to Question 37): 92.1%.

34. How many samples did you collect and test in 2003 (as part of a BSE inspection done for FDA)? (Please enter number.) Total: approximately 100 (N = 3).

35. How many samples did you collect and test in 2004 (as part of a BSE inspection done for FDA)? (Please enter number.) Total: approximately 100 (N = 2).

36. What type(s) of test(s) did you use? (Please check all that apply.) (1) Feed microscopy: 0.0%; (2) PCR (polymerase chain reaction): 0.0%; (3) ELISA (enzyme-linked immunosorbent assay): 1 (50.0%); (4) Other (please specify): 1 (50.0%).

Section IV: FDA Training and Guidance for BSE Inspection and Enforcement

37. Does FDA provide sufficient training on BSE inspection and enforcement? (Please check one.)

40. When your state inspectors or supervisors have questions on potential violations and enforcement actions, are they answered by FDA in a timely manner? (Please check one.)
47.4% (1) Definitely yes (2) Probably yes 7.9% (3) Uncertain 73.7% (1) Always or almost always 21.0% (2) More than half of the time 13.2% (4) Probably no (5) Definitely no (3) About half of the time 0.0% (6) No basis to judge 0.0% (4) Less than half of the time 0.0% (5) Never or almost never (6) No basis to judge 38. When your state inspectors or supervisors have technical questions on performing inspections, are they answered by FDA in a timely manner? (Please check one.) 41. How satisfactory are the answers that FDA 78.9% (1) Always or almost always 13.2% (2) More than half of the time provides to your state inspectors’ or supervisors’ questions about potential violations and enforcement actions provided by FDA? (Please check one.) (3) About half of the time 55.3% (1) Very satisfactory 31.6% (2) Somewhat satisfactory 2.6% (4) Less than half of the time 0.0% (5) Never or almost never (6) No basis to judge 5.3% (4) Somewhat unsatisfactory 0.0% (5) Very unsatisfactory 39. How satisfactory are the answers that FDA provides to your state inspectors’ or supervisors’ technical questions on performing inspections? (Please check one.) (6) No basis to judge 71.0% (1) Very satisfactory 28.9% (2) Somewhat satisfactory 0.0% (4) Somewhat unsatisfactory 0.0% (5) Very unsatisfactory (6) No basis to judge (For the following questions, please use the space here to provide your answers, or, if you attach a separate sheet with your answer.) 42. In your opinion, what areas of FDA’s BSE inspection program seem to be working well and what areas need to be improved? N = 38 29 (76.3%) of the respondents provided comments. 18 states responded that the BSE inspection program is working well, especially for inspections of renderers, protein blenders, and feed mills. 7 states responded that FDA needs to place more emphasis on-farm mixers and feeding operations. 4 states responded that FDA needs to place more emphasis on transportation of animal feed. 
6 states responded that FDA needs to share inspection results and enforcement actions with state agencies.
2 states responded that FDA needs to be more decisive in taking enforcement action, when warranted.

43. No questionnaire of this type can cover all aspects of a topic. If you have further concerns or comments concerning FDA's BSE inspection program, please comment below. Or, if you prefer, mail or email your comments to us separately.
N = 38; 13 (34.2%) of the respondents provided comments; however, 1 did not attach comments to the questionnaire, resulting in only 12 responses (31.6%).

Food and Drug Administration BSE Inspection Program: Survey of States with Contracts and Other Agreements with FDA. Please complete for all individuals providing information for this questionnaire. Please attach additional sheets if needed. Thank You!

In 1997, the FDA feed ban took effect, prohibiting certain materials in ruminant feed to prevent the establishment and spread of BSE if it were to appear in U.S. cattle herds. FDA took this action because it had been an industry practice to feed proteins to ruminant animals that could transmit the infective agent that causes BSE. Additionally, research in the United Kingdom suggested that variant Creutzfeldt-Jakob disease (vCJD) in humans is linked to eating cattle infected with BSE.
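As a rough consistency check, the survey percentages reported above can be converted back into state counts. The following is a minimal sketch in Python, taking the 38-state respondent total and the reported percentages as given:

```python
# Consistency check on survey tallies; all figures are taken from the
# survey results reported above (38 responding states).
respondents = 38

# About 92.1 percent of states said FDA does not direct them to sample feed,
# leaving roughly 3 states that collected FDA-directed samples (matching the
# N = 3 reported for the 2003 FDA-directed sampling question).
states_no = round(respondents * 0.921)   # 35 states
states_yes = respondents - states_no     # 3 states
print(states_no, states_yes)             # 35 3

# 13 of 38 respondents provided closing comments.
print(round(13 / respondents * 100, 1))  # 34.2 percent, as reported
```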
The feed ban requires that firms, with some exceptions, take the following actions:
- label feed and feed ingredients that contain most proteins from mammals (prohibited material) with the cautionary statement "Do not feed to cattle or other ruminants";
- have procedures to protect against commingling or cross-contamination if they handle both prohibited and nonprohibited feed and feed ingredients, either by using equipment dedicated exclusively to feed or ingredients intended for cattle or by using cleanout procedures or other adequate means to prevent carryover; and
- maintain records so that feed and feed ingredients that contain or may contain prohibited material can be tracked from receipt through disposition.
According to FDA's rules, firms that transport both types of materials must also follow these procedures. Additionally, prohibited materials may be used in pet food and in feed for poultry, swine, horses, and other nonruminant animals. Lastly, FDA designated a number of cattle- and other animal-derived items as exempt from the ban, and hence allowable in cattle feed. These items include blood and blood products, plate waste, gelatin, milk and milk protein, and any product whose only mammalian protein consists entirely of protein from pigs and horses. FDA has also not regulated the use of poultry litter in feed.

In October 2001, FDA held a public hearing to solicit information and views regarding ways in which the current feed ban and its enforcement might be improved or to determine if any new objectives should be considered. FDA took this action because BSE had spread beyond the United Kingdom to most countries in western and central Europe and to Japan. FDA asked for responses to 17 questions, including the following:
- Should FDA require dedicated facilities for the production of animal feed containing mammalian protein?
- Should FDA require dedicated transportation of animal feed containing mammalian protein?
- Should FDA license renderers and other firms engaged in the production of animal feed containing mammalian proteins?
- Should FDA revoke or change any of the exemptions in the current rule?
- Should FDA require pet food to contain the cautionary statement?
- Should FDA extend the recordkeeping requirement beyond 1 year?
- Should FDA request authority to assess civil monetary penalties?

FDA published an advance notice of proposed rulemaking announcing that it was considering revising the feed ban and asking the public to comment on certain possible modifications. FDA explained that, shortly after its October 2001 public hearing, USDA released a report by the Harvard Center for Risk Analysis on the findings of a major, 3-year initiative to develop a risk assessment model and assess the risk of BSE in the United States. The model concluded that the risk to U.S. cattle and to consumers from BSE is very low, but certain new control measures could reduce that small risk even further. Therefore, based on comments received at the public hearing and the findings of the Harvard study, FDA asked for public comment on various ways that the BSE feed ban could be strengthened, including the following questions:
- Should tissues that are known to be at higher risk for harboring the infective agent for BSE, such as brain and spinal cord from ruminants 2 years of age or older, be excluded from all rendered products?
- How extensive is the use of poultry litter in cattle feed, what is the level of feed spillage in poultry litter, and what would be the impacts resulting from banning poultry litter in ruminant feed?
- Should pet food for retail sale carry the cautionary statement "Do not feed to cattle or other ruminants"?
- Are there practical ways, other than dedicated facilities, for firms to demonstrate that the level of carryover of prohibited material in a feed mill could not transmit BSE to cattle or other ruminants?
- If so, what is the safe level of carryover of prohibited material, and what is the scientific rationale for establishing this safe level?
- To what extent is plate waste used in ruminant feed, and what would be the impacts from excluding this material from ruminant feed?

FDA announced that it would be issuing interim final rules to strengthen existing BSE firewalls, including banning a wide range of cattle material from human food, dietary supplements, and cosmetics, and strengthening the 1997 feed ban through an extended list of banned feeding and manufacturing practices. FDA, with USDA, announced that the agencies are considering additional measures to protect the public from the health risk associated with BSE and to prevent the spread of the disease in U.S. cattle and are asking for public comment. The agencies are considering additional safeguards based on the recommendations of a panel of international experts convened by the Secretary of Agriculture to review the U.S. regulatory response following the finding of a BSE-positive cow in Washington State in December 2003. In addition to some of the measures FDA had planned to take in an interim final rule, the international panel recommended broader measures, such as banning all mammalian and poultry protein from ruminant feed. Since these recommendations would require significant changes in current feed manufacturing practices and could make some previously announced proposals unnecessary, FDA requested additional information and public comment on the panel recommendations and other measures, including the following:
- What information is available to support or refute the assertion that removing tissues that are known to be at higher risk for harboring the BSE infective agent, such as brain and spinal cord tissue, from all animal feed is necessary to effectively reduce the risks of cross-contamination of ruminant feed or of misfeeding on the farm?
- If FDA prohibits high-risk tissues from all animal feed, would there be a need to require dedicated facilities, equipment, storage, and transportation?
- What information is available to support banning all mammalian and poultry meat and bone meal from ruminant feed?
- If FDA prohibits high-risk tissues from all animal feed, what information is available to support banning all mammalian and poultry meat and bone meal from ruminant feed?
- Can high-risk tissues be effectively removed from dead stock and nonambulatory cattle so that the remaining material can be used in animal feed, or is it necessary to prohibit the entire carcass from use in all animal feed?
- Do FDA's existing authorities under the Federal Food, Drug, and Cosmetic Act and under the Public Health Service Act provide a legal basis to ban the use of high-risk cattle tissues and other cattle material in nonruminant animal feed, given that such materials have not been shown to pose a direct risk to these animals?

FDA also issued an interim final rule on July 14, 2004, to prohibit certain cattle materials in FDA-regulated food, including dietary supplements, and in cosmetics, to minimize potential human exposure to the BSE infective agent. Specifically, FDA prohibited use of the brain, skull, spinal cord, and other specified tissues of cattle that are 30 months of age or older; the small intestine and tonsils of all cattle; material from nonambulatory disabled cattle or cattle not inspected and passed for human consumption; and beef that is mechanically separated from bones. FDA took this action in response to the finding of a BSE-positive cow in Washington State in December 2003 and to conform with an interim final rule issued by USDA in January 2004 declaring these materials unfit for human consumption.

The following are GAO's comments on the Food and Drug Administration's letter dated January 13, 2005.

1.
We believe the report identifies numerous oversight weaknesses that continue to limit program effectiveness and place cattle at risk. The purpose of the feed ban firewall is to prevent the exposure and spread of BSE. A well-enforced feed ban is even more critical now that BSE has been discovered in cattle in North America. As shown in our report, FDA does not know the compliance status or risks posed by firms it has not identified, inspected, or reinspected for many years. FDA acknowledged that many more firms are subject to the feed ban than have been inspected to date but said the agency must set priorities for the number and types of firms it can identify and inspect with limited inspection resources. We agree with FDA's use of a risk-based inspection approach; however, FDA acknowledges the need to increase inspections of certain industry segments, such as transporters and animal feed salvagers. Moreover, for firms that FDA inspects, it does not routinely sample feed to verify whether the operating procedures observed by its inspectors are actually preventing prohibited materials from contaminating cattle feed. Our recommendations are aimed at ensuring that FDA has a strategy for maximizing the effectiveness of its limited inspection resources, targeting inspections, and using feed tests to minimize the risk of cattle being fed prohibited material.

2. Our concern is precisely that the data are being analyzed and interpreted in an erroneous context. Specifically, when FDA and industry used those data to assert a 99 percent compliance rate with the feed ban, they took that information out of context. While industry compliance may in fact be quite high for firms FDA has inspected recently, FDA's data are not sufficient to project compliance industrywide. FDA does not know the status of compliance for firms that have never been inspected or have not been reinspected in years.
In addition, compliance history is lost: firms that had serious and long-standing violations are classified as "in-compliance" once FDA determines that the problems are corrected. FDA is not reporting that the firms were ever out of compliance or the length of time that the feed ban was violated. Lastly, inspection results are largely based on a review of paper documents and a visual inspection, with little or no feed testing. Given these data concerns and compliance unknowns, we believe that FDA's data should not be used to project industry compliance and, anytime those data are cited, they should be reported in a complete and accurate context.

3. FDA agrees that there are industry sectors (such as transporters and animal feed salvagers) that need to be assessed to determine their potential risk to U.S. cattle. In fact, FDA acknowledges that there are millions of firms potentially subject to the feed-ban rule. At the same time, FDA implies that it has identified all high-risk firms. FDA has no basis for that assertion. The example we suggest in this report is one way of identifying additional firms that we observed during our review. FDA identified other approaches that its districts used to identify other firms. We believe that any approaches FDA identifies as useful should be applied uniformly across all FDA districts. We included information in the report on how FDA plans to use the $8.3 million it received in the 2005 budget. We also revised the report to include FDA's estimate of the number of firms that feed cattle and other ruminants and revised the recommendation in recognition that it may be impossible for FDA to identify all firms subject to the feed-ban rule.

4. FDA suggests that requiring notification would take significant resources. The cost of the notification program will depend on the requirements FDA puts in place.
In developing the program, FDA could target the notification to firms that pose a potentially high risk of exposing cattle feed to prohibited material. According to FDA, of the 14,800 firms it has inspected, about 570 renderers, protein blenders, and feed mills comprise the high-risk firms subject to notification because they manufacture or process prohibited material. While we believe there may be more firms that fall into this group, it should not be a significantly larger number. If it is significantly larger, that is something FDA needs to know. Furthermore, requiring industry to self-report is another mechanism that would help FDA identify firms and oversee compliance. Finally, FDA has registration requirements in place for medicated feed firms and for food facilities, and could draw on its experience with those programs in developing a notification program for firms subject to the feed-ban rule. Because firms can change their practices over time, we believe it is important that firms notify FDA whenever such changes occur.

5. While we agree that the current test methods have certain limitations, we believe that testing can be a valuable tool for helping FDA oversee compliance with the feed ban. FDA maintains that, because the current test methods cannot differentiate prohibited material from exempt material, they cannot be used to verify the presence or absence of prohibited material or to confirm the adequacy of cleanout measures. However, states told us that they are using tests for these purposes. Moreover, FDA is currently testing finished feed and using the test results, together with follow-up inspections, to determine whether the feed ban had been violated. We believe tests would help inspectors who now rely on only paperwork review and visual examination to determine the adequacy of cleanout procedures. Tests would also be useful for vegetable-based cattle feed, where detecting the presence of animal protein would indicate a violation.
We revised the recommendation to recognize that FDA may elect to use other test methods in addition to feed microscopy and polymerase chain reaction (PCR). With respect to FDA's sampling of finished feed, the 660 samples FDA tested were not collected during feed-ban compliance inspections. We plan to report later this year on FDA's sampling of finished feed.

6. We agree that FDA's current test methodology will not allow it to use test results alone to verify feed-ban violations. However, testing combined with follow-up inspections would put FDA in a better position to generalize about compliance with the feed-ban rule if FDA developed a random sample methodology for inspectors to use in sampling finished feed and feed ingredients. (Also see comment 5.)

7. After clarifying FDA's comment with an attorney in FDA's Office of the Chief Counsel, we revised the report and the recommendation to delete references that FDA should encourage firms to include a cautionary statement on feed exports that may contain prohibited material. We believe that it would be more prudent for FDA to focus its efforts on obtaining statutory authority to require that the cautionary statement be used on such exports.

8. We revised the recommendation to clarify that FDA should be alerting USDA and the affected states whenever inspectors discover that cattle may have consumed feed with prohibited material.

9. Based on the inspections we observed and the 404 inspection reports that we reviewed in detail, we believe that inspector activities during feed-ban compliance inspections are driven by the checklist items/questions on the BSE inspection form. Therefore, we believe the checklist should include specific questions to prompt inspectors to examine vehicles and firms' cleanout procedures on every inspection.

10. As noted in the report, FDA believes that it provides the inspection results in a transparent, complete, and accurate context.
FDA notes that the BSE inspection data posted on its Web site "allows the user to analyze the data, in a multitude of ways, to provide their own contextual reference." Our concern is precisely that the data are being analyzed and interpreted in an erroneous context. Specifically, when FDA and industry used those data to assert a 99 percent compliance rate with the feed ban, they took that information out of context. While FDA's calculation of compliance by a subset of regulated industries may in fact be quite high, FDA's data are not sufficient to make that projection for all regulated industries because of the many problems we cite in the report. Specifically, FDA does not know the status of compliance for firms that have never been inspected or those that have not been reinspected in years. FDA also does not know if a firm that it previously inspected and classified as low-risk has started using prohibited material, and FDA reclassifies a firm in the database from "out-of-compliance" to "in-compliance" when it corrects a violation, even when the violation was serious and long-standing. Lastly, inspection results are largely based on a review of paper documents and a visual inspection, with little or no feed testing. Given these data concerns and compliance unknowns, FDA's data should not be used to project industry compliance and, anytime those data are cited, they should be reported in a complete and accurate context.

In addition to the individuals named above, Vincent Balloon, Jim Dishmon, Natalie Herzog, Lynn Musser, and John C. Smith made key contributions. Other contributors included George Quinn, Carol Herrnstadt Shulman, Joan Vogel, and Amy Webbink.
More than 5 million cattle across Europe have been killed to stop the spread of bovine spongiform encephalopathy (BSE), commonly called mad cow disease. Found in 26 countries, including Canada and the United States, BSE is believed to spread through animal feed that contains protein from BSE-infected animals. Consuming meat from infected cattle has also been linked to the deaths of about 150 people worldwide. In 1997, the Food and Drug Administration (FDA) issued a feed-ban rule prohibiting certain animal protein (prohibited material) in feed for cattle and other ruminant animals. FDA and 38 states inspect firms in the feed industry to enforce this critical firewall against BSE. In 2002, GAO reported a number of weaknesses in FDA's enforcement of the feed ban and recommended corrective actions. This report looks at FDA's efforts since 2002 to ensure industry compliance with the feed ban and protect U.S. cattle. FDA has made needed improvements to its management and oversight of the feed-ban rule in response to GAO's 2002 report, but program weaknesses continue to limit the effectiveness of the ban and place U.S. cattle at risk of spreading BSE. Improvements made include FDA establishing a uniform method of conducting compliance inspections and training FDA inspectors, as well as state inspectors who carry out inspections under agreements with FDA, on the new method. FDA also implemented new data-entry procedures that are designed to more reliably track feed-ban inspection results. Consequently, FDA has a better management tool for overseeing compliance with the feed-ban rule and a data system that better conforms to standard database management practices. However, various program weaknesses continue to undermine the nation's firewall against BSE. 
FDA acknowledges that there are more feed manufacturers and transporters, on-farm mixers, and other feed industry businesses that are subject to the feed ban than the approximately 14,800 firms inspected to date; however, it has no uniform approach for identifying additional firms. FDA has not reinspected approximately 2,800, or about 19 percent, of those businesses in 5 or more years; several hundred are potentially high risk. FDA does not know whether those businesses now use prohibited material in their feed. FDA's feed-ban inspection guidance does not include instructions to routinely sample cattle feed to test for potentially prohibited material as part of the compliance inspection. Instead, it includes guidance for inspectors to visually examine facilities and equipment and review invoices and other documents. Feed intended for export is not required to carry the caution label "Do not feed to cattle or other ruminants," although the label would be required if the feed were sold domestically. Without that statement, feed containing prohibited material could be inadvertently or intentionally diverted back to U.S. cattle or given to foreign cattle. FDA has not always alerted USDA and states when it learned that cattle may have been given feed that contained prohibited material. This lapse has occurred even though FDA's guidance calls for such communication. Although research suggests that cattle can get BSE from ingesting even a small amount of infected material, inspectors do not routinely inspect or review cleanout procedures for vehicles used to haul cattle feed.
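The reinspection-backlog share cited above follows from simple arithmetic; the following is a minimal sketch using the approximate totals reported in this summary:

```python
# Approximate totals from the summary above.
firms_inspected = 14_800   # feed-industry firms FDA has inspected to date
not_reinspected = 2_800    # of those, not reinspected in 5 or more years

backlog_share = not_reinspected / firms_inspected
print(f"{backlog_share:.0%}")  # 19%, the "about 19 percent" figure above
```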
Our undercover testing for the 2015 coverage year found that the health-care marketplace eligibility determination and enrollment process remains vulnerable to fraud. As shown in figure 1, the federal Marketplace or selected state marketplaces approved each of our 10 fictitious applications for subsidized qualified health plans. We subsequently paid premiums to put these policies into force. As the figure shows, for these 10 applications, we were approved for subsidized coverage (the premium tax credit, paid in advance, and cost-sharing reduction subsidies) in all cases. The advance premium tax credit for these 10 applicants totaled approximately $2,300 per month, or about $28,000 annually, equal to about 70 percent of total premiums. For 4 of these applications, we used Social Security numbers that could not have been issued by the Social Security Administration. For 4 other applications, we said our fictitious applicants worked at a company, which we also created, that offered health insurance, but the coverage did not provide required minimum essential coverage under PPACA. For the final 2 applications, we used an identity from our prior undercover testing of the federal Marketplace to apply for coverage concurrently at two state marketplaces. Thus, this fictitious applicant received subsidized qualified health-plan coverage from the federal Marketplace and the two selected state marketplaces at the same time. For 8 applications among this group of 10, we failed to clear an identity-checking step during the "front end" of the application process, and thus could not complete the process. In these cases, we were directed to contact a contractor that handles identity checking. The contractor was unable to resolve the identity issues and directed us to call the appropriate marketplace. We proceeded to phone the marketplaces, and our applications were subsequently approved. The other two applicants were accepted by phone.
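The subsidy figures above can be reproduced with simple arithmetic; the following is a minimal sketch using the approximate totals reported for the 10 applications:

```python
# Approximate totals from the undercover testing above.
monthly_credit = 2_300           # advance premium tax credit across 10 applicants

annual_credit = monthly_credit * 12
print(annual_credit)             # 27600, i.e., about $28,000 annually

# The credit equaled about 70 percent of total premiums, so total monthly
# premiums for the 10 policies were roughly:
total_premiums = monthly_credit / 0.70
print(round(total_premiums))     # 3286, about $3,300 per month
```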
For each of the 10 undercover applications where we obtained qualified health-plan coverage, the respective marketplace directed that our applicants submit supplementary documentation. The marketplaces are required to seek postapproval documentation in the case of certain application “inconsistencies”—instances in which information an applicant has provided does not match information contained in data sources that the marketplace uses for eligibility verification at the time of application, or such information is not available. If there is an application inconsistency, the marketplace is to determine eligibility using the applicant’s attestations and ensure that subsidies are provided on behalf of the applicant, if qualified to receive them, while the inconsistency is being resolved using “back-end” controls. Under these controls, applicants will be asked to provide additional information or documentation for the marketplaces to review in order to resolve the inconsistency. As part of our testing, and to respond to the marketplace directives, we provided counterfeit follow-up documentation, such as fictitious Social Security cards with impossible Social Security numbers, for all 10 undercover applications. For all 10 of these undercover applications, we maintained subsidized coverage beyond the period during which applicants may file supporting documentation to resolve inconsistencies. In one case, the Kentucky marketplace questioned the validity of the Social Security number our applicant provided, which was an impossible Social Security number. In fact, the marketplace told us the Social Security Administration reported that the number was not valid. Despite this, however, the Kentucky marketplace notified our fictitious applicant that the applicant was found eligible for coverage. 
For the four fictitious applicants who claimed their employer did not provide minimum essential coverage, the marketplace did not contact our fictitious employer to confirm the applicant's account that the company offers only substandard coverage. In August 2015, we briefed CMS and California and Kentucky state officials on the results of our undercover testing to obtain their views. According to these officials, the marketplaces inspect only for documents that have obviously been altered. Thus, if the documentation submitted does not appear to have any obvious alterations, it would not be questioned for authenticity. In addition, according to Kentucky officials, in the case of the impossible Social Security number, the identity-proofing process functioned correctly, but a marketplace worker bypassed identity-proofing steps that would have required a manual verification of the fictitious Social Security card we submitted. The officials told us they plan to provide training on how to conduct manual verifications to prevent this in the future. As for our employer-sponsored coverage testing, CMS and California officials told us that during the 2015 enrollment period, the marketplaces accepted applicants' attestation of lack of minimum essential coverage. As a result, the marketplaces were not required to communicate with the applicant's employer to confirm whether the attestation is valid. Kentucky officials told us that applicant-provided information is entered into its system to determine whether the applicant's claimed plan meets minimum essential coverage standards. If an applicant receives a qualified health-plan subsidy because the applicant's employer-sponsored plan does not meet the guidelines, the Kentucky marketplace sends a notice to the employer asking it to verify the applicant information. The officials told us the employer letter details, among other things, the applicant-provided information and minimum essential coverage standards.
However, our fictitious company did not receive such notification. CMS, California, and Kentucky officials also told us there is no current process to identify individuals with multiple enrollments through different marketplaces. CMS officials told us it was unlikely an individual would seek to obtain subsidized qualified health-plan coverage in multiple states. We conducted this portion of our testing, however, to evaluate whether such a situation, for example one involving a stolen identity, would be possible. CMS officials told us the agency would need to look at the risk associated with multiple coverage. Kentucky officials told us that in response to our findings, call center staff have been retrained on identity-proofing processes, and that they are improving training for other staff as well. They also said they plan changes before the next open-enrollment period so that call center representatives cannot bypass identity-proofing steps, as occurred with our applications. Further, they said they plan to improve the process for handling applications where employer-sponsored coverage is at issue. Also in response to our findings, California officials said they are developing process improvements and system modifications to address the issues we raised, and would share details later. Finally, in the case of the federal Marketplace in particular, for which, as noted, we conducted undercover testing previously, we asked CMS officials for their views on our second-year results compared to the first year. They told us the eligibility and enrollment system is generally performing as designed. According to the officials, a key feature of the system is that, when applicant information cannot immediately be verified, the proper inconsistencies are generated so that they can be addressed later, after eligibility is granted at the time of application. 
Earlier, CMS officials told us the overall approach is that CMS must balance consumers’ ability to effectively and efficiently select Marketplace coverage with program-integrity concerns. In addition to our applications for subsidized private health plans, we also made eight additional fictitious applications for Medicaid coverage in order to test the ability to apply for that program through the marketplaces. As shown in figure 2, in these tests, we were approved for subsidized health-care coverage for seven of the eight applications. For three of the eight applications, we were approved for Medicaid, as originally sought. For four of the eight applications, we did not obtain Medicaid approval, but instead were subsequently approved for subsidized qualified health-plan coverage. The monthly amount of the advance premium tax credit for these four applicants totaled approximately $1,100 per month, or about $13,000 annually. For one of the eight applications, we could not obtain Medicaid coverage because we declined to provide a Social Security number. As with our applications for qualified health plans described earlier, we also failed to clear an identity-checking step for six of eight Medicaid applications. In these cases, we were likewise directed to contact a contractor that handles identity checking. The contractor was unable to resolve the identity issues and directed us to call the appropriate marketplace. We proceeded to phone the marketplaces. However, as shown in figure 2, the California marketplace did not continue to process one of our Medicaid applications. In this case, our fictitious phone applicant declined to provide what was a valid Social Security number, citing privacy concerns. A marketplace representative told us that, to apply, the applicant must provide a Social Security number. The representative suggested that as an alternative, we could apply for Medicaid in person with the local county office or a certified enrollment counselor. 
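The annualized subsidy figure cited above follows from simple arithmetic on the stated monthly total; a quick check (a sketch for the reader, not part of GAO's methodology):

```python
# Approximate combined monthly advance premium tax credit for the
# four applicants, as stated in the testimony.
monthly_total = 1100
annual_total = monthly_total * 12
print(annual_total)  # 13200, consistent with "about $13,000 annually"
```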
After we discussed the results of our undercover testing with California officials, they told us their system requires applicants to provide either a Social Security number or an individual taxpayer-identification number to process an application. As a result, because our fictitious applicant declined to provide a Social Security number, our application could not be processed. For the four Medicaid applications submitted to the federal Marketplace, we were told that we may be eligible for Medicaid but that the respective Medicaid state offices might require more information. For three of the four applications, federal Marketplace representatives told us we would be contacted by the Medicaid state offices within 30 days. However, the Medicaid offices did not notify us within 30 days for any of the applications. As a result, we subsequently contacted the state Medicaid offices and the federal Marketplace to follow up on the status of our applications. For the two New Jersey Medicaid applications, we periodically called the state Medicaid offices over approximately 4 months, attempting to determine the status of our applications. In these calls, New Jersey representatives generally told us they had not yet received Medicaid information from the federal Marketplace and, on several occasions, said they expected to receive it shortly. After our calls to New Jersey Medicaid offices, we phoned the federal Marketplace to determine the status of our Medicaid applications. In one case, the federal Marketplace representative told us New Jersey determined that our applicant did not qualify for Medicaid. As a result, the phone representative stated that we were then eligible for qualified health-plan coverage. We subsequently applied for coverage and were approved for an advance premium tax credit plus the cost-sharing reduction subsidy. 
In the other case, the federal Marketplace representative told us the Marketplace system did not indicate whether New Jersey received the application or processed it. The representative advised us to phone the New Jersey Medicaid agency. Later on that same day, we phoned the federal Marketplace again and falsely claimed that the New Jersey Medicaid office denied our Medicaid application. Based on this claim, the representative said we were eligible for qualified health-plan coverage. We subsequently applied for coverage and were approved for an advance premium tax credit plus the cost-sharing reduction subsidy. The federal Marketplace did not ask us to submit documentation substantiating our Medicaid denial from New Jersey. We asked to meet with New Jersey Medicaid officials to discuss the results of our testing, but they declined our request. CMS officials told us that New Jersey had system issues that may have accounted for problems in sending our Medicaid application information to the state, and that this system issue is now resolved. In addition, CMS officials told us they do not require proof of a Medicaid denial when processing qualified health-plan applications; nor does the federal Marketplace verify the Medicaid denial with the state. CMS officials said that instead, they accept the applicant’s attestation that the applicant was denied Medicaid coverage. For our North Dakota Medicaid application in which we did not provide a Social Security number but did provide an impossible immigration document number, we called the North Dakota Medicaid agency to determine the status of our application. An agency representative told us the federal Marketplace denied our Medicaid application and therefore did not forward the Medicaid application file to North Dakota for a Medicaid eligibility determination. We did not receive notification of denial from the federal Marketplace. 
Subsequently, we called the federal Marketplace and applied for subsidized qualified health-plan coverage. The federal Marketplace approved the application, granting an advance premium tax credit plus the cost-sharing reduction subsidy. Because we did not disclose the specific identities of our fictitious applicants, CMS officials could not explain why the federal Marketplace originally said our application may be eligible for Medicaid but subsequently notified North Dakota that it was denied. For the North Dakota Medicaid application for which we did not provide a valid Social Security identity, we received a letter from the state Medicaid agency about a month after we applied through the federal Marketplace. The letter requested that we provide documentation to prove citizenship, such as a birth certificate. In addition, it requested a Social Security card and income documentation. We submitted the requested documentation, such as a fictitious birth certificate and Social Security card. The North Dakota Medicaid agency subsequently approved our Medicaid application and enrolled us in a Medicaid plan. After our undercover testing, we briefed North Dakota Medicaid officials and obtained their views. They told us the agency likely approved the Medicaid application because our fake Social Security card would have cleared the Social Security number inconsistency. The officials told us they accept documentation that appears authentic. They also said the agency is planning to implement a new system to help identify when applicant-reported information does not match Social Security Administration records. As with our applications for coverage under qualified health plans, described earlier, the state marketplace for Kentucky directed two of our Medicaid applicants to submit supplementary documentation. 
As part of our testing and in response to such requests, we provided counterfeit follow-up documentation, such as a fake immigration card with an impossible numbering scheme, for these applicants. The results of the documentation submission are as follows:

For the application where the fictitious identity did not match Social Security records, the Kentucky agency approved our application for Medicaid coverage. In our discussions with Kentucky officials, they told us they accept documentation submitted—for example, copies of Social Security cards—unless there are obvious alterations.

For the Medicaid application without a Social Security number and with an impossible immigration number, the Kentucky state agency denied our Medicaid application. A Kentucky representative told us the reason for the denial was that our fictitious applicant had not been a resident for 5 years, according to our fictitious immigration card. The representative told us we were eligible for qualified health-plan coverage. We applied for such coverage and were approved for an advance premium tax credit and the cost-sharing reduction subsidy. In later discussions with Kentucky officials, they told us the representative made use of an override capability, likely based on what the officials described as a history of inaccurate applicant immigration status information for a refugee population. Kentucky officials also said their staff accept documentation submitted unless there are obvious alterations, and thus are not trained to identify impossible immigration numbers. Finally, Kentucky officials said they would like to have a contact at the Department of Homeland Security with whom they can work to resolve immigration-related inconsistencies, similar to the contact that they have at the Social Security Administration to resolve Social Security-related inconsistencies.

By contrast, during the Medicaid application process for one applicant, California did not direct that we submit any documentation. 
In this case, our fictitious applicant was approved over the phone even though the fictitious identity did not match Social Security records. We shared this result with California officials, who said they could not comment on the specifics of our case without knowing details of our undercover application. As noted earlier, the findings discussed in this statement are preliminary, and we plan to issue a final report later, upon completion of our work. Chairman Pitts, Ranking Member Green, and Members of the subcommittee, this concludes my statement. I look forward to the subcommittee’s questions. For questions about this statement, please contact Seto Bagdoyan at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Matthew Valenta and Gary Bianchi, Assistant Directors; Maurice Belding, Jr.; Mariana Calderón; Ranya Elias; Suellen Foth; Maria McMullen; James Murphy; George Ogilvie; Ramon Rodriguez; Christopher H. Schmitt; Julie Spetz; and Elizabeth Wood. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
PPACA provides for the establishment of health-insurance marketplaces where consumers can, among other things, select private health-insurance plans or apply for Medicaid. The Congressional Budget Office estimates the cost of subsidies and related spending under PPACA at $60 billion for fiscal year 2016. PPACA requires verification of applicant information to determine enrollment or subsidy eligibility. In addition, PPACA provided for the expansion of the Medicaid program. GAO was asked to examine application and enrollment controls for the marketplaces and Medicaid. This testimony provides preliminary results of undercover testing of the federal and selected state marketplaces during the 2015 open-enrollment period, for both private health-care plans and Medicaid. GAO submitted, or attempted to submit, 18 fictitious applications by telephone and online, 10 of which tested controls related to obtaining subsidized health-plan coverage available through the federal Marketplace in New Jersey and North Dakota, and through state marketplaces in California and Kentucky. GAO chose these four states based partly on a range of population sizes and whether the state had expanded Medicaid eligibility under terms of the act. The other 8 of the 18 applications tested marketplace and state controls under the marketplace system for determining Medicaid eligibility in these four states. The undercover results, while illustrative, cannot be generalized to the full population of enrollees. GAO discussed the results of its testing with CMS and state officials to obtain their perspectives. Under the Patient Protection and Affordable Care Act (PPACA), health-insurance marketplaces are required to verify application information to determine eligibility for enrollment and, if applicable, determine eligibility for income-based subsidies or Medicaid. 
These verification steps include reviewing and validating information about an applicant's Social Security number, if one is provided; citizenship, status as a national, or lawful presence; and household income and family size. For 10 fictitious applicants, GAO tested application and enrollment controls for obtaining subsidized health plans available through the federal Health Insurance Marketplace (Marketplace) (for New Jersey and North Dakota) and two selected state marketplaces (California and Kentucky). Although 8 of these 10 fictitious applications failed the initial identity-checking process, all 10 were subsequently approved by the federal Marketplace or the selected state marketplaces. Four applications used Social Security numbers that, according to the Social Security Administration (SSA), have never been issued, such as numbers starting with “000.” Other applicants had duplicate enrollment or claimed their employer did not provide insurance that meets minimum essential coverage. For 8 additional fictitious applicants, GAO tested enrollment into Medicaid through the same federal Marketplace and the two selected state marketplaces, and was able to obtain either Medicaid or alternative subsidized coverage for 7 of the 8 applicants. Specifically:

Three were approved for Medicaid, which was the health-care program for which GAO originally sought approval. In each case, GAO provided identity information that would not have matched SSA records. For two applications, the marketplace directed the fictitious applicants to submit supporting documents, which GAO did (such as a fake immigration card), and the applications were approved. For the third, the marketplace did not seek supporting documentation, and the application was approved by phone.

For four, GAO did not obtain approval for Medicaid; however, GAO was subsequently able to gain approval of subsidized health plans based on the inability to obtain Medicaid coverage. In one case, GAO falsely claimed that it was denied Medicaid in order to obtain the subsidized health plan when in fact no Medicaid determination had been made by the state at that time.

For one, GAO was unable to enroll into Medicaid, in California, because GAO declined to provide a Social Security number. According to California officials, the state marketplace requires a Social Security number or taxpayer-identification number to process applications.

According to officials from the Centers for Medicare & Medicaid Services (CMS), California, Kentucky, and North Dakota, the marketplaces and Medicaid offices only inspect for supporting documentation that has obviously been altered. Thus, if the documentation submitted does not show such signs, it would not be questioned for authenticity. GAO's work is continuing, and GAO plans to issue a final report at a later date.
While a large number of tax software companies offer return preparation and electronic filing services, three companies provide the tax software used by the majority of individuals who prepare and file their returns electronically (see app. II). One company’s product—Intuit’s TurboTax—represented over half of the returns filed electronically by individual taxpayers. These and other tax software companies generally offer several versions of retail, online, and downloadable software packages that taxpayers can use to prepare federal and state tax returns. They generally charge less for versions that are designed to handle simple tax returns and charge more for versions that can prepare more complicated returns such as those dealing with business expenses. In 2008, the three companies also employed two basic pricing strategies. One strategy was to charge separate, incremental fees for federal return preparation, state return preparation, and electronic filing. For example, in 2008, one company charged about $40 for federal return preparation, with incremental fees of about $20 for electronic filing. The other pricing strategy used was to bundle several services together—typically return preparation and electronic filing—and charge one price for the bundle. Tax software is one of the three major methods that taxpayers use to prepare their returns. As figure 1 illustrates, over 39 million (or 28 percent) of the approximately 138 million individual income tax returns filed in 2007 were prepared by individuals using tax software. Over 77 million individuals used a paid preparer to prepare returns electronically in 2007, and 71 percent of those returns were also submitted electronically to IRS. The remaining 21 million returns were manually prepared by individuals or their paid preparers. After preparation, taxpayers can either electronically file their return or mail a paper copy to IRS. 
Figure 1 shows that millions of taxpayers who had a return prepared electronically (either by using tax software or a paid preparer) filed paper copies. Such returns are called “v-coded” because IRS codes such returns with a “v” to process and track them separately from other paper-filed returns. Many of the companies that sell tax software also have partnered with IRS to provide free electronic preparation and filing to eligible taxpayers. Those taxpayers have the option of filing their returns for free using products from the Free File Alliance, LLC (FFA)—a consortium of tax preparation companies that provides online electronic preparation and filing to eligible taxpayers at no charge. Figure 1 includes the approximately 4 million FFA returns filed in 2007 by individuals using commercial software. To help improve paper processing, about half of the state revenue agencies use a bar coding technology to convert data on paper returns to electronic data. Bar coding is less expensive and more accurate than processing paper returns because it eliminates manual transcription but is still more expensive and less efficient than electronic filing. IRS does not use this technology for processing individual tax returns. Returns filed electronically have significant advantages for IRS and taxpayers compared to paper-filed returns as discussed below and further detailed in appendix III. IRS estimates that processing an electronically filed return costs the agency $0.35 per return while processing a paper return costs $2.87 per return. Using IRS’s current cost estimates based on fiscal year 2005 return data, we estimate IRS would have saved approximately $143 million if the 56.9 million paper returns in 2007 had been filed electronically. Electronically filed returns also have higher accuracy rates than paper-filed returns because tax software eliminates transcription and other errors. 
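The $143 million savings estimate above can be re-derived from the per-return processing costs and the paper-return volume cited in the same paragraph:

```python
# Re-derive the savings estimate from IRS's per-return processing costs.
paper_cost_per_return = 2.87   # dollars, IRS estimate for a paper return
efile_cost_per_return = 0.35   # dollars, IRS estimate for an e-filed return
paper_returns_2007 = 56.9e6    # paper returns filed in 2007

savings = paper_returns_2007 * (paper_cost_per_return - efile_cost_per_return)
print(f"${savings / 1e6:.0f} million")  # $143 million
```

The calculation assumes the full per-return cost difference would be realized for every converted return, which is why the report presents it as an approximation.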
IRS processes electronically filed returns in less than half the time it takes to process paper returns, facilitating faster refunds. We have previously reported that electronically filed returns have the potential to improve IRS’s enforcement programs. IRS does not use all tax return information in its automated compliance checking programs because IRS policy is to post the same information from electronic and paper returns, and the cost of transcription prevents IRS from transcribing paper returns in full. IRS officials previously estimated in 2007 that having all tax return information available electronically would result in a $175 million increase in tax revenue annually from at least one of its compliance programs. IRS recently issued the results of the first phase of its Advancing E-file study, which examines tax filing behavior and characteristics and contains potential options to increase electronic filing. We have previously reported that IRS’s ability to achieve efficiencies depends on its continuing ability to increase electronic filing. We recently suggested that Congress mandate that paid tax return preparers use electronic filing and that IRS require software companies to include bar codes on individual paper returns. IRS agreed to study the latter option. IRS has responsibility for enforcing tax laws in the Internal Revenue Code (IRC). In addition, IRC section 6011 provides specific authority for IRS to prescribe forms and regulations for tax returns, including the information required on those returns and whether they must be filed electronically. The IRC imposes civil and criminal penalties on paid tax return preparers, which include tax software companies, for unauthorized disclosure or use of a taxpayer’s personal and tax-related information. In addition to tax law penalties, the providers of services for preparing and filing tax returns are subject to the privacy and safeguarding rules created under the Gramm-Leach-Bliley Act (see app. IV). 
For the 2009 tax filing season, the two largest tax software companies that previously charged separate electronic filing fees for federal returns in some of their retail and downloadable products have eliminated those electronic filing fees. Moreover, the three largest companies will bundle federal tax preparation with electronic filing for all of their products (see app. II). However, for some products, the companies will still charge separate, incremental fees for other services such as state return preparation, state electronic filing, and return review by a tax professional. According to industry representatives, IRS officials suggested they eliminate separate federal filing fees to encourage electronic filing. However, the effect of these changes on electronic filing will not begin to be known until the end of the present tax filing period and will be difficult to determine. On one hand, taxpayers who buy a tax software package that includes a bundle of services may be encouraged to use software and file electronically because there is no longer a separate charge for doing so. On the other hand, if the cost of such a package is significantly higher, it may discourage taxpayers’ use of tax software since they may not be able to purchase a less expensive package that does not include electronic filing. The two largest tax software companies that eliminated federal electronic filing fees also made some other pricing changes for preparing and electronically filing both federal and state tax returns in 2009, including the following: online tax packages are generally priced lower than in 2008; online tax packages are generally priced lower than most retail/downloadable packages; and retail/downloadable packages remained essentially the same in price when compared to 2008. For the third largest tax software company, its package prices for both online and retail/downloadable products remained the same in 2009 as in 2008 because the preparation and electronic filing fees remained the same in both years. 
See appendix II for more details. Another change in 2009 is that IRS and FFA have agreed to provide a fillable version of federal tax forms. These fillable tax forms, which taxpayers can complete online and file electronically, will provide a basic calculator function but will not provide the question-and-answer format similar to commercial tax software. The forms will be accessible for free to all taxpayers via IRS’s Web site and are in addition to FFA’s current free products for eligible taxpayers described in the background of this report. As part of the upcoming second phase of its Advancing E-file study, IRS plans further surveys to obtain taxpayers’ views on electronic filing. However, it does not plan to include questions, for example, about the effect of 2009 pricing changes on taxpayers’ willingness to file electronically. Currently, IRS has little such information. For example, IRS and the Oversight Board surveys to date have not addressed how a separate charge for electronic filing affects taxpayers’ willingness to file electronically. With the 2009 changes, however, IRS has an opportunity to directly measure the effect of eliminating separate fees to file federal tax returns electronically, making changes to software pricing overall, and making electronic tax forms available so that all taxpayers can complete and file for free online. We recognize that such a direct study would not be simple to conduct because, for example, it may be difficult to isolate the effect of multiple price changes and factors other than price, such as accuracy and security, which also affect taxpayers’ willingness to file electronically. Further, prior year data are limited. However, even limited information about how taxpayers’ electronic filing behavior changes after price changes would give IRS an empirical basis for supporting the continued elimination of separate fees for electronic filing and other pricing changes as well as complementing surveys of taxpayers’ views. 
Ideally, to study the effect of pricing on electronic filing rates, IRS would need to know the software package and version used by each taxpayer in order to know the approximate price paid. Currently, IRS requires a software identification number on electronically filed returns, which does not identify the specific software package or version used to prepare those returns. IRS does not require any type of software identification number on v-coded returns (returns prepared using software but filed on paper). Having a more complete software identification number would allow IRS to better target not only its research but also its enforcement activities and efforts to increase use of tax software and electronic filing. Officials from one software company told us that such a change could be easily made by their company at a relatively low cost. In its Advancing E-file study, IRS reported that one of the most important factors influencing taxpayers’ use of tax software is its ability to accurately apply tax laws. IRS requires tax software to pass its Participants Acceptance Testing System (PATS), which includes verifying that computations are correct, tax rate schedules are updated, and returns transmitted electronically are compatible with IRS systems. However, PATS does not go further in testing to determine, for example, whether the guidance tax software provides is sufficient in helping taxpayers prepare accurate tax returns. IRS developed a National Account Manager (NAM) position in 2000 to serve as the main communication channel between the tax software industry and IRS. NAMs communicate in regularly scheduled conference calls with tax software companies about issues of mutual interest including tax law changes, updates to IRS forms and publications, and the upcoming tax filing season. Software companies also contact the NAMs when they encounter technical issues such as a disruption to electronic filing. 
IRS also works with tax software industry groups and advisory councils, such as the Council for Electronic Revenue Communication Advancement, on annual updates to tax laws and procedures (see app. V). IRS monitors acceptance rates for electronically transmitted returns, including the reasons for rejected returns, throughout the tax filing season and provides a “report card” to software companies at the end of each filing season. Rejected returns are sent back to the taxpayer for correction and resubmission. IRS’s monitoring efforts allow the agency and software companies to identify and resolve problems with electronically filed returns. For example, in 2008, IRS asked tax software companies to hold returns with the Alternative Minimum Tax until IRS was able to process them. Through its monitoring efforts, IRS officials identified companies that were transmitting those types of returns, which IRS then rejected. IRS sent notices to these companies, which reduced the number of rejected returns. IRS has worked with the tax software industry on an ad hoc basis to clarify the guidance provided by tax software. For example, for 2009: IRS is working with software companies to ensure their packages make users enter a “yes” or “no” response to questions about having a foreign bank account and signature authority. Prior to this change, some companies’ software defaulted to a “no” response. Another example involving commercial software used by paid preparers rather than individual taxpayers shows that IRS can work with the software companies to influence and improve guidance: IRS’s Earned Income Tax Credit (EITC) office worked with a group of tax software developers to ensure software used by paid preparers eliminated default answers where taxpayers’ answers are critical to EITC return accuracy, and incorporated a “note” capability in the tax software enabling the preparer to record additional inquiries and taxpayer responses. 
IRS officials, however, acknowledged that these efforts were not the result of a comprehensive and systematic approach to improving the guidance provided by software. IRS does not have plans to review tax software to see if the guidance it provides to taxpayers is sufficient in helping them prepare accurate returns, in part because IRS relies on the extensive scenario and other testing done by the industry as discussed in the next section. As a result, IRS does not know if it is missing opportunities to improve tax software guidance to better ensure compliance. As an example of such an opportunity, we recently recommended that IRS expand outreach efforts to external stakeholders, including software providers, as part of an effort to reduce common types of misreporting related to rental real estate. IRS agreed with this recommendation and most of the others in that report and outlined the actions it plans to take to address them. IRS has provided limited oversight of the software industry’s efforts to ensure that taxpayer information is secure. Taxpayers who file their returns on their home computers using online, retail, or downloadable tax software products are sending their returns to authorized electronic filing providers. IRS does not have the capability to receive electronic returns directly from individual taxpayers. Only IRS-authorized electronic filing providers, including Electronic Return Originators (ERO) and software companies, among others, can transmit tax returns electronically to IRS. According to TIGTA, EROs were responsible for the majority of electronically filed tax returns accepted by IRS in 2007. IRS regulates authorized electronic filing providers by conducting suitability checks of applicants during the application screening process, including checks of the applicants’ criminal backgrounds, credit histories, and tax compliance. 
Once approved, authorized electronic filing providers are subject to IRS monitoring visits, which are conducted to ensure that the providers are meeting requirements such as ensuring security systems are in place to prevent unauthorized access to taxpayer data. However, in 2007, TIGTA identified deficiencies in IRS’s monitoring program. For example, IRS did not suspend electronic filing providers who were in violation of program requirements even though they had been issued notifications of suspension. In response, IRS added a new control procedure, effective January 30, 2008, to better track suspension cases. IRS has also established security and privacy requirements that apply to FFA members. For example, according to IRS officials, FFA members must adhere to the Payment Card Industry (PCI) standards and third-party security and privacy certifications, and use PCI-approved companies to conduct penetration and vulnerability testing. IRS has a Memorandum of Understanding (MOU) with FFA requiring members to provide IRS with documentation demonstrating compliance with security standards. However, IRS does not fully monitor compliance with existing FFA security and privacy requirements. Although IRS receives FFA security reports, it does not actively review or validate those reports unless a problem, such as a security incident, is reported. For 2009, IRS is suggesting that all authorized electronic filing providers that participate in online filing adhere to new security and privacy standards, the majority of which are similar to existing FFA requirements; however, IRS is not requiring compliance with those standards (see app. VI). These standards are optional in 2009 because IRS finalized them late in 2008. IRS has no plans to determine if tax software companies that are authorized electronic filing providers participating in online filing are adhering to advisory security and privacy standards for the 2009 filing season. 
Because the new standards would apply to a relatively small number of companies, including the three largest, the costs to collect information on adherence to the standards would be low. For the 2010 filing season, IRS may make those standards mandatory. Also, IRS is considering expanding these standards to include software companies that offer retail and downloadable products but has not yet established a time frame for doing so. IRS officials stated they are considering developing a plan to monitor compliance with these security and privacy standards for 2010. Without appropriate monitoring, IRS has limited assurance that the standards have been adequately implemented or that software companies are complying with the standards. As a result, IRS does not know whether the confidentiality and integrity of taxpayers’ data are at increased risk from fraud and identity theft. Tax software companies have been reliable providers of electronic filing services, with one recent exception, which did not have a significant effect on tax administration. In 2007, customers of some of Intuit’s products experienced a disruption in their ability to file electronically on tax day. For approximately 13 hours, taxpayers could not reliably file their returns electronically through Intuit to IRS. According to IRS, about 171,000 tax returns were affected. IRS accommodated affected taxpayers by extending the tax filing deadline and not applying late filing penalties. IRS reported that the disruption did not delay processing of tax returns, payments to the government, or refunds to taxpayers because IRS already had a processing backlog of millions of returns at that time. Intuit agreed to pay any other penalties that customers incurred and also refunded any electronic filing fees charged during the disruption. IRS’s MOU with FFA requires FFA members to maintain a continual level of service throughout the filing season. 
For example, members are not permitted to schedule any planned blackouts of service during that time. However, IRS does not monitor compliance with this requirement and does not have a similar requirement for non-FFA tax software companies. Additionally, while IRS’s PATS testing reviews tax software to ensure that returns transmitted electronically are compatible with IRS systems before the start of the filing season, it does not do so throughout the filing season. All industry representatives we spoke with believed that testing throughout the filing season was important because of the potential effect of late tax law changes. Despite devoting some resources to oversight of the tax software industry, IRS has not conducted an assessment to understand whether reliance on commercial tax software poses any significant risks to tax administration. Broadly defined, risk assessment involves (1) identifying future, potentially negative outcomes and (2) estimating the likelihood they will occur. In IRS’s case, those outcomes include the possibility of security breaches, disruptions in electronic filing, and missed opportunities to identify and correct compliance problems. While the likelihood of these outcomes occurring may be low, IRS does not know whether this is the case. OMB’s and our guidance suggest that agencies conduct risk assessments to identify risks that could impede the efficient and effective achievement of their goals and allow managers to identify the most significant areas in which to place or enhance internal controls. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. Further, federal law requires agencies to implement an information security program that includes periodic assessments of risk. 
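The two elements of risk assessment described above (identifying potential negative outcomes and estimating their likelihood) are commonly combined with an impact estimate to rank risks. The sketch below is purely illustrative: the outcome descriptions echo the report's examples, but the numeric scores are invented assumptions, not IRS data.

```python
# Illustrative likelihood-impact risk scoring; all scores are hypothetical.
# Each risk is rated 1 (low) to 5 (high) on likelihood and impact, then
# ranked by the product so managers can see where controls matter most.

risks = [
    # (potential negative outcome, likelihood 1-5, impact 1-5)
    ("Security breach exposing taxpayer data", 2, 5),
    ("Disruption in electronic filing on tax day", 2, 4),
    ("Missed opportunity to correct compliance problems", 4, 3),
]

def rank_risks(entries):
    """Return (outcome, likelihood * impact) pairs, highest score first."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in entries]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for name, score in rank_risks(risks):
    print(f"{score:>2}  {name}")
```

A ranking of this kind is the kind of output a formal risk assessment could feed into resource allocation decisions, since it makes explicit both steps the guidance calls for.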
According to IRS officials, the agency has not conducted a risk assessment because it does not believe the benefits warrant the cost of such an assessment. IRS and software industry officials we spoke with believe it is in the industry’s financial interest to ensure that taxpayers can rely on tax software. In their annual filing reports, both Intuit and H&R Block identified financial losses and harm to their reputation as potential risks of system failures or interruptions. For example, Intuit reported one of the many risks to its company is that the interruption or failure of its information and communication systems could cause customers to revert to paper filings, resulting in reduced company revenues. In addition, according to IRS officials and tax software industry representatives, the industry has not yet experienced a significant problem with tax software or electronic filing. IRS and tax software industry officials further stated that the industry is better suited to conduct extensive scenario and security testing because of the significant cost of conducting such testing. Software industry officials reported spending tens of millions of dollars each year on testing to ensure accuracy. Further, they reported employing hundreds of tax analysts to review and simplify IRS instructions, publications, and forms; monitor proposed changes to tax legislation; and consult with IRS and state revenue agencies to ensure accurate interpretations of tax laws. Intuit officials reported complying with recognized international security standards. Intuit officials also reported undergoing a biennial third-party security assessment, as well as proactively conducting ongoing security application assessments and vulnerability and penetration testing. 
Industry representatives noted the current public-private partnership between IRS and the software industry provides reliable coverage for electronic filing through redundancy in the market, unlike other countries that offer only a government-sponsored Internet filing option. While the above may be true and financial and other incentives may exist, IRS’s position is not based on an actual, systematic assessment that identifies potential negative outcomes and the likelihood of their occurrence. Further, there are several reasons to believe that the benefits of assessing the risks associated with reliance on commercial tax software are significant. As already noted, IRS has said that it is in the agency’s best interest to ensure that taxpayers can rely on commercial tax software to make electronic filing accurate, easy, and efficient. Continued growth in electronic filing depends on increasing use by individual taxpayers and maintaining their confidence in the accuracy of their returns, the security and privacy of their tax information, and the reliability of electronic filing. However, IRS does not know whether there are security and privacy risks because it has not monitored existing requirements. While tax software companies have not reported significant security breaches involving taxpayer data either residing on their databases or during electronic transmission to IRS in recent years, cases of lost or stolen data at other taxing authorities illustrate the potential negative outcomes of such a breach. For example, in 2007, Oregon’s Department of Revenue experienced a breach in which electronic files containing confidential taxpayer information may have been compromised by an ex-employee downloading a contaminated file. While tax administration has not been significantly affected by disruptions to electronic filing, as noted previously, on tax day 2007, about 171,000 Intuit customers experienced a 13-hour disruption. 
During this time, Intuit customers could not reliably file their returns electronically through Intuit, and ultimately with IRS, but this disruption did not significantly affect tax administration. Additionally, Canada and Great Britain recently experienced disruptions with their electronic filing systems (see text box). If enhancements to tax software could produce even small improvements in voluntary compliance by taxpayers, the additional dollars of tax revenue could be substantial. Tens of billions of the $290 billion net tax gap (after IRS’s collection efforts) are associated with sole proprietors and individual owners of rental real estate. We have made several recent recommendations intended to improve the compliance of these taxpayers by enhancing the clarity of tax software, which, as we noted, IRS plans to address in most cases. However, IRS has not conducted research on the correlation between tax software and compliance—such as whether and how tax software packages influence compliance. Such research could be enhanced even more by the use of a single software identification number, which would allow IRS to identify the specific software package used by a taxpayer. Although limited testing of hypothetical scenarios by TIGTA and the National Taxpayer Advocate led them to identify possible software weaknesses that might affect compliance, this testing was based on a nonstatistical sample of scenarios and software packages. Because there are millions of potential scenarios and each one is different, it is not possible to generalize from the nonstatistical samples and reach conclusions about the overall effect of tax software on compliance. Furthermore, hypothetical scenarios do not provide evidence about how taxpayers actually use the software or whether taxpayers are actually complying with tax laws. IRS is already devoting resources to oversight of the tax software industry, as described in the previous section. 
IRS does conduct some testing, has developed the NAM position to communicate with the software industry, and tracks some performance. Also, according to IRS officials, in 2010 IRS plans to devote additional resources to implement new security and privacy requirements and monitor compliance. While significant problems have not occurred to date, without performing a risk assessment—the first step in risk management and mitigation—IRS does not know the potential magnitude or nature of problems or their likelihood of occurring. As a result, IRS does not have an informed basis for making resource allocation decisions, taking steps to mitigate any significant risks, or avoiding costly risk mitigation in areas where the risks are low. Commercial tax software—which is used by tens of millions of taxpayers—is a critical part of the tax administration system and a potential tool for increasing electronic filing. However, IRS does not identify which software packages taxpayers use or have information on the correlation between particular packages and compliance. Further, IRS does not know whether changes to software pricing would be an effective strategy for increasing electronic filing. Nor does IRS have assurance that tax software companies are adequately protecting and securing taxpayer data, another possible influence on taxpayers’ willingness to file electronically. Despite its role in influencing electronic filing and the accuracy of tax returns, IRS has not conducted a risk assessment of taxpayers’ reliance on tax software. Such an assessment could be done alone or as part of a broader study that would include paid preparers. Without a risk assessment, IRS does not know whether its existing investment in oversight of the tax software industry is too great, about right, or needs to be expanded. 
To help increase electronic filing and allow IRS to better target its efforts, we recommend that the Commissioner of Internal Revenue direct the appropriate officials to take the following six actions:

1. require tax software companies, as soon as practical, to include a software identification number that specifically identifies the software package used to prepare tax returns, which can be used in IRS research efforts;

2. ensure that, as part of the second phase of IRS’s Advancing E-file Study, surveys ask taxpayers about the effect of tax software pricing changes and the opportunity to file for free using online tax forms on IRS’s Web site on their decision to either file or not file tax returns electronically;

3. to the extent possible, study the effect of the 2009 pricing changes and the opportunity to file for free using online tax forms on IRS’s Web site on taxpayers’ use of tax software and electronic filing rates;

4. determine if tax software companies that are authorized to participate in online filing are adhering to advisory security and privacy standards for the 2009 filing season;

5. develop and implement a plan for effectively monitoring compliance with recommended security and privacy standards for the 2010 filing season; and

6. assess the extent to which the reliance on tax software creates significant risks to tax administration, particularly in the areas of tax return accuracy, the security and privacy of taxpayer information, and the reliability of electronic filing.

The Deputy Commissioner of Internal Revenue provided written comments in a February 19, 2009, letter in which she agreed with all our recommendations and outlined IRS’s actions to address those recommendations (see app. VII). With respect to requiring tax software companies to identify the software package used, IRS plans to require an identification number on paper tax returns created using software. 
Related to ensuring that Advancing E-file surveys ask taxpayers about the effect of tax software pricing changes, IRS reported those surveys had already been finalized. In its place, IRS will be analyzing monetary disincentives associated with taxpayers’ choice of filing method and plans to study the effect of the pricing changes on taxpayer electronic filing decisions. With respect to ensuring authorized electronic filing providers adhere to the advisory security and privacy standards for the 2009 filing season, IRS reported it plans to sample and observe online providers’ Web sites to determine compliance. If IRS decides to make the standards mandatory, the agency will develop a monitoring and enforcement plan. Finally, to assess risks related to the reliance on tax software, IRS plans to summarize whether and the extent to which the agency is authorized to be involved in aspects of the software industry, including what additional authority it would need to impose changes and sanctions. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; relevant congressional committees; and other interested parties. This report is available at no charge on GAO’s Web site at http://www.gao.gov. For further information regarding this report, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or [email protected] or Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512- 6244 or [email protected]. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report can be found in appendix VIII. 
To determine what is known about how pricing strategies affect the use of tax software and electronic filing, we obtained and analyzed the prices for the top three tax software companies for both online and retail or downloaded products for filing seasons 2008 and 2009. These costs did not include any rebates or promotional prices. We limited our data analysis to the top three software companies because they account for 88 percent of all returns filed electronically by individuals and accepted by the Internal Revenue Service (IRS). We also reviewed literature concerning the economics of information goods, including software pricing. Further, we obtained and analyzed findings from IRS’s Taxpayer Satisfaction Studies and reviewed the IRS Oversight Board’s November 2006 Taxpayer Customer Service and Channel Preference Survey to determine why federal taxpayers do not file returns electronically. To determine the extent to which IRS provides oversight of the tax software industry to help ensure tax returns are accurate, we reviewed and summarized IRS’s legal authority to regulate the accuracy and security of commercial tax software. We also obtained and analyzed internal revenue manuals, industry standards, and government guidance and compared them to IRS’s current procedures. We reviewed the Free File Alliance, LLC (FFA) Memorandum of Understanding (MOU) outlining IRS and FFA’s agreements to provide free income tax software to individuals. To determine the extent to which IRS provides oversight of the tax software industry to help ensure that taxpayer information is secure, we interviewed IRS and FFA officials. In addition, we obtained and analyzed IRS’s new electronic filing security and privacy standards, comparing them to industry standards. We also reviewed the FFA MOU to assess the extent to which security and privacy requirements were already in place for FFA members. 
To determine the extent to which IRS helps ensure electronic filing systems are reliable, we reviewed IRS requirements for electronic return originators, the FFA MOU, and documents and literature describing a significant disruption in electronic filing at Intuit. We also reviewed documents and interviewed Intuit officials to determine the extent of the disruption and corroborated the information they provided during interviews with IRS officials to determine the effect the disruption had on taxpayers and the agency. To determine what is known about the risks of the reliance on commercial tax software used by individuals, we reviewed Office of Management and Budget (OMB) and GAO guidance, including the criteria for assessing risk at an agency as well as industry best practices for risk assessments and internal controls; and interviewed IRS officials to determine what risk assessments IRS had in place. We also reviewed selected tax software companies’ filing statements with the Securities and Exchange Commission to determine if they identified any risks. We also interviewed IRS and software industry officials to determine what steps they took to identify and address risks. We reviewed the Treasury Inspector General for Tax Administration’s (TIGTA) and National Taxpayer Advocate’s (NTA) reports detailing their respective tests of how accurately and consistently tax software applied tax laws. Because the various tax software tests we reviewed were limited to a subset of tax software packages and used a nonstatistical sample of tax scenarios, their results were not generalizable to all types of taxpayers, tax filing situations, tax laws, or the entire tax software industry. We also reviewed literature on the effect of significant electronic filing disruptions in tax software systems in selected other countries. We selected Canada and Great Britain because these were the examples that IRS provided on electronic filing disruptions in other countries. 
For background purposes, we also used IRS data to compare the cost of processing returns, and obtained and analyzed math error authority data, reject errors, and processing times across the different tax return filing methods. Additionally, for all objectives, we reviewed reports and interviewed officials, including those from IRS, NTA, TIGTA, FFA, the Electronic Tax Administration Advisory Committee, the Federation of Tax Administrators, and the IRS Oversight Board. We also interviewed officials from select industry groups such as the Council for Electronic Revenue Communication Advancement, the National Association of Computerized Tax Processors, and selected tax software companies. We visited a major tax software provider’s data center. Our work was done primarily at IRS Headquarters in Washington, D.C., and its division offices in New Carrollton, Maryland, and Atlanta, Georgia. We conducted this performance audit from April 2008 through February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. While a large number of companies offered tax preparation services in 2008, the top three tax software companies electronically filed 88 percent of returns prepared by individual taxpayers (as opposed to the returns prepared and electronically filed by paid preparers). Each of the companies outside the top three held less than 3 percent of the tax software market as measured by the number of electronically filed returns. However, tax software companies also compete with the paid preparer industry as well as manual preparation. 
Based on a review of pricing literature for software companies, tax software companies, like other software and information technology companies, have low marginal costs and high fixed costs for product development. In such markets, if the price charged to taxpayers is equal to the marginal cost, companies will not be able to cover their average cost of production and cannot stay in business. Therefore, companies in these markets will attempt to recover more of their fixed costs through various forms of price discrimination. Price discrimination can take the form of developing different versions of the product to match the needs of different types of consumers, who are then charged different prices according to their willingness to pay. The literature also suggests that companies in these markets may offer products that consist of several services bundled together— sometimes charging separate prices for each service or charging a single price for different combinations (bundles) of services. The bundling strategy is thought to potentially increase a company’s revenue by attracting consumers who may value particular elements of the bundled product. Tax software companies bundle some or all of the following services or features: federal tax preparation, state tax preparation, electronic filing for federal and state returns, help services and technical support, return printing services, storage of information from prior returns, links to outside providers of relevant information (W-2s), and built-in accuracy checks. Some tax software companies offer only online services to taxpayers, while others offer the option of downloading the program to a home computer or purchasing software from a retail location. The pricing structure may vary depending on whether a taxpayer prepares a return online or purchases a retail or downloadable program (see tables 1 and 2). 
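The price-discrimination and bundling reasoning above can be illustrated with a small worked example. The two customer types and their willingness-to-pay figures below are invented for illustration; they do not come from the report or from actual market data.

```python
# Hypothetical two-customer example of why bundling can raise revenue.
# Customer A values preparation highly but e-filing little; customer B
# is the reverse. All dollar amounts are invented assumptions.

wtp = {
    "A": {"prep": 40, "efile": 10},
    "B": {"prep": 15, "efile": 35},
}

def best_separate_revenue(valuations, service):
    """Best revenue from a single price for one service: try each
    customer's valuation as the price and keep the maximum of
    price * number_of_customers_willing_to_pay."""
    prices = [v[service] for v in valuations.values()]
    return max(p * sum(1 for q in prices if q >= p) for p in prices)

separate = sum(best_separate_revenue(wtp, s) for s in ("prep", "efile"))

# Bundling: one price for prep + e-filing, set at the lower combined
# valuation so both customers buy.
bundle_values = [sum(v.values()) for v in wtp.values()]
bundle_price = min(bundle_values)
bundle = bundle_price * len(bundle_values)

print(separate, bundle)
```

In this sketch, bundling raises revenue (100 versus 75) because each customer values the combined product similarly even though they value the individual services very differently, which is the intuition behind the bundling strategy described in the literature.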
In 2008, the largest companies offering online preparation products for federal returns usually bundled electronic filing with federal return preparation. However, if the program was downloaded or purchased at a retail location, electronic filing often involved a separate charge. For the 2009 tax filing season, the two largest tax software companies that previously charged separate electronic filing fees for federal returns for some of their products have eliminated those electronic filing fees. The three largest companies will bundle federal tax preparation with electronic filing for all of their products. For some products, the companies will still charge separate, incremental fees for other services such as preparation and electronic filing for state returns, as well as return review. The two largest tax software companies that eliminated federal electronic filing fees also made some other pricing changes for preparing and electronically filing both a federal and state tax return in 2009, including the following: online tax packages are generally priced lower than in 2008; online tax packages are generally priced lower than most retail/downloadable packages; and retail/downloadable packages remained essentially the same in price when compared to 2008. For the third largest tax software company, its package prices for both online and retail/downloadable products remained the same in 2009 as in 2008 because the preparation and electronic filing fees remained the same in both years. The effect of these pricing changes on electronic filing will not begin to be known until the end of the present tax filing period and will be difficult to determine. On one hand, taxpayers who buy a tax software package that includes a bundle of services may be encouraged to use software and file electronically because there is no longer a separate charge for doing so. 
On the other hand, if the cost of such a package is significantly higher, it may discourage taxpayers’ use of tax software since they may not be able to purchase a less expensive package that does not include electronic filing. Taxpayers can experience many advantages and disadvantages based on the various methods for preparing and filing federal tax returns. Taxpayers preparing and filing their returns electronically may receive advantages such as reduced time spent on preparing the return and receiving faster refunds. On the other hand, taxpayers who prepare their returns manually may experience disadvantages such as increased transcription errors and slower refunds. Table 3 shows details of the advantages and disadvantages of the different preparation and filing methods. In the Internal Revenue Service Restructuring and Reform Act of 1998, Congress instructed the agency to establish a goal of having 80 percent of all individual income tax returns filed electronically by 2007. While the Internal Revenue Service (IRS) has no legal authority to generally oversee the operations of tax software companies, IRS does have the authority to prescribe the forms and regulations for the making of returns, including the information contained therein and whether forms must be filed electronically. Accordingly, IRS has an interest in ensuring that tax software providers comply with tax laws and security and privacy laws so that taxpayers have confidence in these services and file their tax returns electronically. Under section 6103 of the Internal Revenue Code (IRC), IRS is responsible for safeguarding taxpayer data while in IRS’s control. Section 6103 nondisclosure requirements only apply to IRS and not to private entities that prepare and send tax data to IRS. However, private entities are subject to safeguarding and privacy rules with regard to taxpayer information and can be penalized for improper use and disclosure. 
The Gramm-Leach-Bliley (GLB) Act requires financial institutions to protect consumers’ personal financial information held by these institutions—including return preparers, data processors, transmitters, affiliates, service providers, and others who are paid to provide services involving preparation and filing of tax returns. For companies in the tax business, the GLB Act delegated rulemaking and enforcement authority to the Federal Trade Commission (FTC). Complying with the GLB Act generally means complying with FTC’s Financial Privacy and Safeguards Rules. The Financial Privacy Rule requires financial institutions to give their customers privacy notices that explain the financial institution’s information collection and sharing practices; the Safeguards Rule requires financial institutions to have a security plan to protect the confidentiality and integrity of personal consumer information. Additionally, paid tax return preparers are subject to both civil and criminal penalties for unauthorized disclosure or use of a taxpayer’s confidential information. Tax return preparers include persons who develop tax software that is used to prepare or file a tax return, as well as any authorized IRS electronic filing provider. Tax return preparers who knowingly or recklessly disclose or use tax return information for a purpose other than preparing a tax return are guilty of a misdemeanor with a maximum penalty of up to 1 year’s imprisonment or a fine of not more than $1,000, or both. Any unauthorized disclosure or use by a tax return preparer not acting in bad faith still subjects that preparer to a civil penalty of $250 for each disclosure, not to exceed $10,000 for the year. A summary of the federal laws protecting taxpayer information is provided in table 4. In an effort to provide more effective tax administration, the Internal Revenue Service (IRS) disseminates information and obtains technical perspectives and advice through industry and advisory councils. 
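The per-disclosure civil penalty quoted above reduces to simple arithmetic: a flat amount per disclosure, capped per year. This minimal sketch encodes only the figures stated in the text ($250 per disclosure, $10,000 annual cap) and is illustrative, not legal guidance.

```python
# Civil penalty for unauthorized disclosure by a tax return preparer not
# acting in bad faith: $250 per disclosure, capped at $10,000 per year,
# per the figures quoted above.

PER_DISCLOSURE = 250
ANNUAL_CAP = 10_000

def civil_penalty(disclosures: int) -> int:
    """Penalty for a given number of disclosures in one year."""
    return min(disclosures * PER_DISCLOSURE, ANNUAL_CAP)

print(civil_penalty(3))    # 750
print(civil_penalty(100))  # capped at 10000
```

Note that the cap binds at 40 disclosures (40 x $250 = $10,000), so any count beyond that yields the same annual penalty.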
As shown in table 5, membership in many of these groups, with whom we consulted, is balanced to include representatives from tax practitioners and preparers, transmitters of electronic returns, tax software developers, large and small businesses, employers and payroll service providers, individual taxpayers, financial industry, academic, trusts and estates, tax exempt organizations, and state and local governments. For 2009, the Internal Revenue Service (IRS) has developed six new optional security and privacy standards which are intended to better protect taxpayer information collected, processed, and stored by online authorized electronic filing transmitters, as shown in table 6. These new standards are based on industry best practices and are intended to supplement the Gramm-Leach-Bliley Act and the implementing rules and regulations promulgated by the Federal Trade Commission. In addition to the contacts named above, Joanna Stamatiades, Assistant Director; Amy Bowser; Debra Conner; Vanessa Dillard; Michele Fejfar; Jyoti Gupta; Jeffrey Knott; Ed Nannenhorn; Madhav Panwar; Joseph Price; and Robyn Trotter made key contributions to this report.
Individual taxpayers used commercial tax software to prepare over 39 million tax returns in 2007, making it critical to the tax administration system. The majority were then filed electronically, resulting in fewer errors and reduced processing costs compared to paper returns. GAO was asked to assess what is known about how pricing of tax software influences electronic filing, the extent to which the Internal Revenue Service (IRS) provides oversight of the software industry, and the risks to tax administration from using tax software. To do so, GAO analyzed software prices, met with IRS and software company officials, examined IRS policies, and reviewed what is known about the accuracy, security, and reliability of tax software. IRS has little information about how the pricing of tax software affects taxpayers' willingness to file tax returns electronically. In 2009, the two largest tax software companies eliminated separate fees to file federal tax returns electronically when using software purchased from retail locations or downloaded from a Web site. As a result, IRS has an opportunity to study whether this and other changes are effective in increasing electronic filing. Additionally, IRS would benefit from being able to identify which software package the taxpayer used to better target research and efforts to increase software use and electronic filing. IRS provides some oversight of the tax software industry but does not fully monitor compliance with established security and privacy standards. Further, IRS has not developed a plan to monitor compliance with new standards, which are optional in 2009 but may be mandatory in 2010. Without appropriate monitoring, IRS has limited assurance that the standards are being implemented or complied with. IRS has not conducted an assessment to determine whether taxpayers' use of tax software poses any risks to tax administration. 
Among these risks, IRS may be missing opportunities to systematically identify areas in which to improve software guidance and enhance information security. IRS officials said the likely benefits of an assessment would not warrant the costs, but they have not determined either the benefits or the costs of such an assessment. Moreover, IRS has said that it is in the agency's best interest to ensure that taxpayers can rely on commercial software to make electronic filing accurate, easy, and efficient. Further, if even small improvements in the accuracy of tax returns could be made by clarifying the guidance in tax software, the effect on revenue could be substantial. Without a risk assessment, IRS does not know whether its existing oversight of the tax software industry is sufficient or needs to be expanded.
|
As the nation’s largest supplier of hydroelectric power, the Corps generates about 25 percent of all the hydroelectric power in the United States. The Corps operates hydroelectric power plants at 75 dams with a total capacity of about 21,000 megawatts (MW). The total capital investment in these facilities over the years has exceeded $7.9 billion. Southeastern markets the power generated at 23 hydroelectric power plants owned and operated by the Corps, selling it to 294 wholesale customers in all or parts of 10 southeastern states and Illinois. Southeastern also coordinates with the Corps on the availability of the power to be generated by the Corps’ plants. Unlike the other power marketing administrations, Southeastern owns no transmission assets; regional public and investor-owned utilities transmit the power to Southeastern’s wholesale customers. The Corps and Southeastern receive congressional appropriations through the Department of Defense - Civil account and the Department of Energy, respectively, to finance their operations. In fiscal year 1996, the Corps received appropriations for its civil works activities totaling about $3.2 billion. Southeastern is responsible for repaying, with interest, its own appropriations as well as the portion of the Corps’ construction and operation and maintenance appropriations that is allocated to power. Repairs to and maintenance of the power plants are funded from the Corps’ “construction, general” account or “operations and maintenance, general” account, depending on their scope. Funds from the “construction, general” account are used for major rehabilitation projects that exceed $5 million, including work pertaining to the designs, plans, and specifications for such projects. Major rehabilitation projects are identified at the Corps’ projects and districts, and the ensuing budget proposals are justified, examined, and ranked in the Corps’ field offices and headquarters.
The Department of the Army’s Assistant Secretary for Civil Works and the Office of Management and Budget then examine and approve or disapprove the requests for funding for the individual projects. Funds from the “operation and maintenance, general” account are used for routine repairs and maintenance and for emergency repairs of hydroelectric and other facilities. The 11 power plants that we examined account for about 63 percent of Southeastern’s generating capacity. These hydroelectric power plants, located on six river systems, range in generating capacity from 30 to 500 MW. The Corps’ hydroelectric power plants in the Southeast have experienced lengthy outages, resulting in declines in reliability and availability. For example, from 1987 to 1995 the availability of the plants in the Corps’ South Atlantic Division dropped from 95.4 percent to 87.2 percent. Nationwide, during this same period, the availability of the Corps’ hydroelectric power plants dropped from 92.9 percent to 87.9 percent (see app. V). According to Corps officials, the outages have occurred because of the ways in which the units are operated and because they are aging. In a few cases, Corps officials said, the units were also poorly designed and installed. According to Southeastern officials, the outages contributed to revenue losses for Southeastern and led to increases in its wholesale electric rates. From 1986 through 1995, all 11 of the power plants we examined experienced forced and/or scheduled outages, ranging from 30 days to over 3 years. Thirty-seven of the 43 units at these 11 power plants experienced at least one outage (see app. VI), and several units experienced outages simultaneously (see app. VII). For example, from January through March 1993, eight units at the Allatoona, Carters, Hartwell, Robert F. Henry, Millers Ferry, J. Strom Thurmond, and Walter F. 
George power plants, representing about 395 MW of capacity (or 13 percent of the capacity available to Southeastern from the Corps’ facilities), were out of service at the same time. Many of the Corps’ hydroelectric power plants in the Southeast are aging. The average age is about 30 years, and four have been in service for over 35 years. According to Southeastern officials and studies by the Corps, key components of the hydroelectric units are designed to last about 35 years and can be expected to need repair or replacement. However, according to the Corps, the need to repair or replace a component is based not solely on age, but also on test results and operational performance. For example, in 1984 the responsible Corps district office requested approval to perform a scheduled repair of a generator component at Allatoona—the oldest of the power plants that we examined, which has been in service since 1949. The generator component had reached 35 years—the anticipated end of its useful life—and the unit’s performance had declined in the late 1970s and early 1980s, after a failure in 1967. Corps headquarters did not approve the request because it did not believe that the district had submitted adequate justification. After the unit failed again in 1990, the Corps continued to operate the unit by bypassing the damaged component. In 1991, the Corps’ district office again requested approval to repair the affected generator as well as another unit of similar age. Both units were repaired in 1993 and 1994, at a cost of about $8 million. Also, according to Corps officials, some units are poorly designed by the manufacturer and not properly installed by the contractor, and other units are adversely affected by the way in which they are operated. For instance, the Jim Woodruff power plant has experienced operational problems because its turbines are poorly designed. 
Specifically, the turbines, intended to function under conditions of changing water flow, experienced severe vibrations and had to be welded in place, leading to decreased efficiency in the power plant when water conditions changed (see app. IV). In addition, according to Corps officials, the conventional hydroelectric generating units at Carters, which are used to start the pumpback units, were not designed to consistently handle startups. Operating the conventional units for startups over the years damaged the insulation in the generators, causing the units to fail. According to a Corps report on the rehabilitation of the Hartwell power plant, Hartwell’s turbines are significantly oversized in comparison with the generators. According to the Corps’ analysis, with the larger turbines and thus greater horsepower available, the generators failed because they were consistently operated at 125 percent of their rated capacity. Southeastern officials added that, in their view, the units failed because they were 30 years old and thus approaching the end of their useful lives. Also, according to Corps officials, four units at the Robert F. Henry power plant required major repairs within 6 years of beginning operation because major components of the generators were not properly manufactured and installed. The components became loose during operations, causing severe vibrations and deterioration of the generators’ insulation. When hydroelectric power plants experience unexpected outages at the same time and/or these outages are extended, utilities generally have to purchase replacement power at higher prices. For example, from 1990 through early 1992, two or more of the four units at the Carters power plant were out of service at the same time for periods ranging from about 3 months to almost 1 year. 
An official of Southeastern estimated that Southeastern’s utility customers purchased replacement electricity costing about $15 million more than they would have paid for electricity marketed by Southeastern. Extended outages, Southeastern officials estimate, have resulted in lost revenues of about $13 million to Southeastern since fiscal year 1986. The impact was most acute when units at the Carters power plant were out of service. Moreover, according to Southeastern officials, because of the unplanned outages, a severe drought in the late 1980s, and increases in operation and maintenance costs, Southeastern increased its wholesale power rates. For example, customers on the Georgia-Alabama-South Carolina system paid 22 percent more in 1990 than they had in the previous year. According to Southeastern, reductions in the amount of hydroelectric power available because of the drought, combined with the inefficient operation of the Jim Woodruff project, contributed to an increase in the wholesale rates charged to customers of the Jim Woodruff system of nearly 100 percent, phased in from January 1991 to September 1993. Although the Corps recognizes that long-term, comprehensive planning and budgeting systems are needed to identify and fund key repair and rehabilitation projects, especially in the current environment of static or declining budgets, its funding decisions for the power plants are not based on such systems. The Corps gives priority to routine, ongoing maintenance. However, when the power plants experience unplanned outages, the Corps frequently performs repairs that are reactive and short-term. For the extensive repairs and rehabilitations that eventually become essential, the Corps’ budgeting process requires extensive justifications that can take a year or longer to complete. The Corps has taken some actions to address its planning and budgeting needs and recognizes that these efforts should be continued. 
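The capacity figures cited above can be cross-checked with a short calculation. This is only an illustrative sketch: the 395 MW simultaneous outage, the 13 percent share, and the 63 percent figure for the 11 examined plants come from this statement, while the implied totals are derived here and are not reported by the Corps or Southeastern.

```python
# Cross-check of capacity figures cited in this statement. Derived values
# are illustrative approximations, not figures reported by the agencies.

simultaneous_outage_mw = 395   # capacity out of service, Jan.-Mar. 1993
outage_share = 0.13            # reported share of capacity available to
                               # Southeastern from the Corps' facilities

# Implied total capacity available to Southeastern from Corps facilities
implied_total_mw = simultaneous_outage_mw / outage_share
print(f"Implied total capacity: {implied_total_mw:.0f} MW")

# The 11 plants examined account for about 63 percent of that capacity
examined_share = 0.63
examined_mw = examined_share * implied_total_mw
print(f"Capacity of the 11 examined plants: {examined_mw:.0f} MW")
```

The implied total of roughly 3,000 MW is consistent with the 11 examined plants, which range from 30 to 500 MW apiece, supplying on the order of 1,900 MW.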
The Corps’ budget has been declining in real terms over the last 10 years—by about 18 percent between fiscal years 1986 and 1996, from about $3.8 billion to $3.1 billion. According to a report prepared by the Corps’ Institute for Water Resources, because of the need to address the federal budget deficit, this funding trend is expected to continue. In such a budget environment, finding adequate funding to properly maintain, rehabilitate, and repair the aging hydroelectric power plants will be increasingly difficult. Furthermore, the capital investment to maintain and repair the Corps’ power plants is expected to increase by about $1 billion. For example, the Corps stated that from 1993 through 2004, it would spend about $410.3 million to rehabilitate hydroelectric units at eight power plants nationwide. Moreover, the Corps projected that it would need to spend $558 million through the year 2004 to repair and rehabilitate other hydroelectric power plants. The need to spend more to maintain and repair the Corps’ aging hydroelectric power plants will compete with the need to maintain and repair other Corps facilities, such as those related to commercial navigation, flood damage reduction, hurricane and storm damage reduction, and the restoration and protection of environmental resources (including fish and wildlife habitat). For example, with its budget submissions to the Congress, the Corps includes a “capabilities list” that identifies additional funds for necessary repairs and rehabilitations for the power plants, as well as for other purposes—such as dredging, recreation, and navigation—not included in the initial target budget request. For the fiscal year 1996 budget proposal, the list contained repair and rehabilitation projects totaling $72 million—including $8 million for hydroelectric power plants. However, the list does not rank the proposed repair and rehabilitation projects by importance or need. 
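The budget figures above are internally consistent, as a quick sketch shows. All inputs are taken from this statement; the sketch simply restates the arithmetic behind the reported 18 percent decline and the roughly $1 billion in projected capital needs.

```python
# Corps civil works budget decline, fiscal years 1986-1996, in billions
# of dollars in real terms (figures as reported in this statement)
budget_1986 = 3.8
budget_1996 = 3.1
decline = (budget_1986 - budget_1996) / budget_1986
print(f"Real decline: {decline:.1%}")  # about 18 percent, as reported

# Projected hydroelectric capital needs through 2004, in millions:
# $410.3 million for eight rehabilitations plus $558 million for
# other hydroelectric power plants
rehab_eight = 410.3
other_plants = 558.0
total_need = rehab_eight + other_plants
print(f"Projected capital need: ${total_need:.0f} million")  # about $1 billion
```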
Moreover, according to Southeastern’s Administrator, although Southeastern markets the power generated at the Corps’ power plants, the Corps does not consult Southeastern at the corporate level for budgeting and planning purposes. However, according to Corps and Southeastern officials, the Corps’ South Atlantic Division consults with Southeastern in preparing major rehabilitation proposals and in long- and medium-range planning for maintenance. Moreover, according to Corps and Southeastern officials, the Corps meets with a group of Southeastern’s wholesale customers and with Southeastern at least twice a year to discuss scheduled maintenance and capital projects planned for the next 10 years. According to Southeastern officials, this group is not an advisory group on capital planning and budgetary matters; it only meets to share information. The Corps gives priority to routine, ongoing work, such as the operation of power plants and recreation facilities, or maintenance work that is needed to keep the projects operating through the fiscal year. Nonroutine work or work that can be deferred to the next year has been given lower funding priority. After the Office of Management and Budget informs Corps headquarters of the Corps’ budget ceiling, headquarters sets budget targets for the Corps’ divisions, which in turn set budget targets for the Corps’ districts. The districts decide how to allocate the amounts to various projects within the funding levels established annually by Corps headquarters. The baseline level of funding represents the annual fixed, nondiscretionary costs required to operate and maintain the projects. When major repairs are needed, the Corps must follow a system of approvals and justifications to comply with budgeting procedures and to explain the repairs to such parties as the Department of the Army’s Assistant Secretary for Civil Works and the Office of Management and Budget. 
Satisfying these requirements delays funding the expensive repairs and rehabilitations needed to keep the hydroelectric system operating effectively. Because of these approvals and justifications, after the need to repair or rehabilitate a plant is identified at the project or district level, it has taken from about 10 months to almost 5 years to begin the needed repairs. Given the emphasis on routine and ongoing maintenance and repair work and the lengthy justification processes that must be followed for extensive repairs when units break down unexpectedly, the Corps frequently performs repairs that are short-term and reactive. However, such actions only postpone the need to make more extensive repairs. For example, after a failure of the Hartwell power plant’s unit 1 in November 1989, the Corps bypassed the damaged part and brought the unit back into service at a reduced operating capacity. Three months later, the unit was taken out of service for 59 days while a contractor replaced the damaged part. Then, in May 1990, the same kind of problem put unit 2 out of service for 54 days. The Corps repaired the unit, but it failed again in January 1992. The Corps bypassed the damaged part and returned the unit to service. The unit continues to operate at a reduced capacity, along with the other three units. As a result of these reductions, Southeastern has lost about 40 MW of capacity. The Corps estimates that it will need about $17.7 million to repair the four units. Before extensive and costly repairs or rehabilitation can begin, in order to justify capital investments, the relevant field location must perform a lengthy study to document the problem. The study can take 18 months to complete, and then another year or longer may be needed for the proposal to clear the review levels within the Corps and receive funding. 
According to a Corps official, the process is lengthy because (1) the documentation and analysis submitted by field staff do not always satisfy the requirements of Corps headquarters and (2) lengthy examinations and reexaminations of a proposal are required within the field structure, headquarters, the Department of the Army’s Assistant Secretary for Civil Works, and the Office of Management and Budget. A Corps headquarters official explained that this lengthy analysis and documentation process is applied even if a hydroelectric unit is out of service and needs immediate repair because the Corps needs to show the need for costly capital investments in hydroelectric power plants to the Department of the Army’s Assistant Secretary for Civil Works and the Office of Management and Budget. For example, at the three-unit Millers Ferry power plant, one unit failed in 1987 because the insulation in the unit’s generator had deteriorated. The unit was repaired and returned to service within 30 days. After a second unit failed in 1992 for the same reason, the responsible district office requested approval from the division in 1993 to repair all three units. The district office believed that all three units suffered from the same problems and would need repairs in the future. However, Corps headquarters interceded and requested additional analysis and justification to support repairing all three units. During 1993 to 1995, while the district office complied with certain requests from Corps headquarters and completed design specifications and the request for proposal, the remaining two units also failed. These units were temporarily repaired and returned to service but operated at a reduced capacity. As a result, Southeastern lost about 31 MW of capacity. More extensive repairs, according to Corps officials, will not be completed until 1998, at an estimated cost of $7 million. 
The Corps has recognized that when budgetary resources are relatively scarce, it cannot continue to fund all of the activities it performed in the past, such as operating some recreation sites. Corps officials have also said that in times of budget shortfalls, it becomes increasingly important to implement long-term, systematic, and comprehensive capital planning and budgeting systems. Such systems allow agencies to anticipate projects that need to be funded in the future and to consider the tradeoffs that are inherent in assigning funding to different purposes. Given that obtaining additional funds for hydroelectric investments will be difficult, the Corps began, in the early and mid-1990s, to take steps to improve its corporate planning and budgeting processes. However, these measures are still ongoing. The Corps commissioned a study by its Institute for Water Resources on its capital planning process for hydroelectric power plants. In its 1994 working draft report, the Institute concluded that in light of the power plants’ aging and the continued prospects for budget constraints, the Corps should develop a 10-year plan for future capital investments for its hydroelectric program and develop, in coordination with the power marketing administrations and their customers, procedures for ranking hydroelectric investment needs on the basis of such criteria as economic, environmental, and engineering factors. According to a Corps headquarters official, in response to these recommendations, Corps headquarters directed all of its field locations, including those in the Southeast, to compile lists of proposed, nonroutine hydroelectric capital improvement projects that had to be accomplished within 10 years. Although these lists were compiled on a national level during fiscal years 1993 and 1994, no lists were compiled in fiscal year 1995. 
The fiscal year 1994 list shows a projected need through 2004 of over $900 million to repair and rehabilitate the Corps’ 75 hydroelectric power plants nationwide. However, the criteria for ranking the proposed repair and rehabilitation projects have not been established. The responsible Corps headquarters official explained that in fiscal year 1995, the effort was suspended because of higher priorities. He said he intends to direct the field locations to undertake the effort again during the summer of 1996, in time to be considered for the fiscal year 1998 budget. Currently, Corps headquarters does not use this list for the agency’s annual budget process but rather encourages its use at the district level for long-range planning. Corps officials said they recognize the need to pursue formal use of the list for planning and budgeting nationwide. In addition, according to a Corps official, the Corps recognized in the early 1990s that the outages at its power plants were reducing the reliability of its hydroelectric power system. Consequently, from fiscal year 1993 through fiscal year 1997, the Corps requested appropriations for major rehabilitations of eight hydroelectric plants, four of which are in the Southeast. In March 1996, the Corps estimated that from 1993 through 2004, it would spend about $410 million to rehabilitate these eight power plants. According to the Corps, as of the end of fiscal year 1996, the Corps had obtained appropriations of about $22 million for this purpose. We provided a draft of this statement to and discussed its contents with Corps officials, including the Chief, Operations, Construction and Readiness (headquarters); Hydropower Coordinator (headquarters); Chief, Construction and Operations Division (South Atlantic Division); and the Chief, Hydropower Operations (South Atlantic Division). 
We also discussed the statement and its contents with Southeastern officials, including the Administrator; the Assistant Administrator for Finance and Marketing; and the Chief, Operations. These officials generally agreed with the facts presented in our statement and said that we had fairly represented the condition of the federal hydroelectric power plants in the Southeast. Corps officials agreed that historically the agency’s planning and budgeting systems did not expedite planning and budgeting for multiple-year capital improvement projects for the Corps’ hydroelectric power plants. Corps officials said, however, that they have taken steps to improve their planning and budgeting systems for these plants. Corps and Southeastern officials also discussed efforts under way within the Corps’ South Atlantic Division to consult with Southeastern and with power customers about the maintenance of the hydroelectric power plants in the region. These officials also suggested several technical revisions to our statement, which we have incorporated as appropriate. We conducted our review from January through June 1996 in accordance with generally accepted government auditing standards. This concludes our prepared statement. It also concludes our work on this issue for the Subcommittee. Details of our objectives, scope, and methodology are presented in appendix VIII. We would be glad to answer any questions you may have at this time.

[Table: average age of units (years) and plant’s total nameplate capacity (MW).]

Millers Ferry began producing power in 1970. The power plant’s three generating units have a history of operational problems, and the Corps has taken remedial action from the outset to keep them operational. However, one of the units has been shut down for nearly 4 years, and the other two units are operating at reduced capacity. Delays in repairs have been caused by the documentation and review the Corps requires to justify expenditures for major repairs.
In April 1996, the Corps awarded a contract for major repairs to the units at an estimated cost of $7 million. Millers Ferry Lock and Dam is located in southwest Alabama on the Alabama River. Millers Ferry aids navigation along the Alabama River and generates electric power, which is marketed by Southeastern to wholesale customers. The reservoir and surrounding park have become a popular recreational facility. The power plant’s three 25-MW units have a total nameplate capacity of 75 MW. When the generating units came on line in 1970, they produced extraordinarily high noise and vibration levels, which over the years contributed to the generators’ aging at an accelerated rate. Also, because the noise levels were high enough to damage human hearing, the Corps took several actions to protect personnel in the powerhouse. For example, noise absorbing panels were installed on the ceilings and walls of the powerhouse, and sound enclosures were installed around each of the three generators. Since 1970, the Corps has spent about $700,000 on noise abatement measures. According to the Corps, although the excessive noise is caused by vibration within the generator, the Corps had no recourse against the manufacturer because the design specifications did not address acceptable noise levels. The Corps decided to keep the units operating rather than shut them down to correct the exact cause of the noise. In addition, all three of the power plant’s generators have failed during the past 9 years. Unit 3 failed in June 1987 and was shut down for 27 days for repairs. Unit 1 failed in July 1992, and the damage was so extensive that the unit has been shut down for nearly 4 years. Unit 3 failed again in June 1994 and was shut down for 21 days for repairs. The most recent failure occurred when unit 2 failed in November 1995 and was shut down for 45 days for repairs. 
However, after units 2 and 3 were temporarily repaired and returned to service, they were operated at reduced capacity to prevent further damage. As a result, Southeastern lost about 31 MW of capacity. The Corps attributes these failures to deterioration in the generators’ insulation caused by frequent changes in internal temperatures. According to the Corps, the insulation used in the units is not as tolerant of heat as the insulation used in older units in other power plants. In addition, the enclosures installed around the generators for noise abatement increased the operating temperatures, thus shortening the life of the units. These enclosures also contributed to increases in maintenance costs and in the time needed to perform maintenance because they make it more difficult for repair crews to access the generators. For example, it takes three employees 6 days to disassemble and then reassemble a noise abatement enclosure to access a generator. According to a Corps official, the delay in repairing the units has been caused primarily by the internal documentation and review process that the Corps requires to justify expenditures for major repairs. After unit 1 failed in 1992, the Corps’ district office in January 1993 requested approval from the Corps’ division to repair not only unit 1 but also the other two units, which were in poor operating condition. The district estimated that a contract could be awarded by April 1994. However, Corps headquarters interceded and requested additional documentation to support the repair of all three units. A Corps headquarters official said the district office had not provided the required analyses and justifications for the proposed repair work. The official said that this documentation is necessary to satisfy Corps management, the Department of the Army’s Assistant Secretary for Civil Works, and the Office of Management and Budget of the need to make extensive repairs. 
As noted earlier, the Corps did not award the contract for the repairs until April 1996, more than 3-1/2 years after unit 1 failed. The other two units also failed during the intervening period, while the Corps’ district office complied with the Corps headquarters’ request for additional reports, including tests and economic analysis, and completed design specifications and request for proposal. The Corps estimates that the repair of the three units will cost $7 million and will be completed in early 1998. The Jim Woodruff power plant has a long history of operational problems stemming from the poor initial design of the turbines and changing operating conditions. The plant has experienced major outages resulting in costly repairs. Over the years, the Corps has taken remedial measures that permitted continued use of the plant but at the same time limited the plant’s range of operations and efficiency. Because of increasing operational costs and declining efficiency, the Corps requested federal funds to repair the plant. In November 1995, the Congress approved the Corps’ plan to rehabilitate the plant. The Corps estimates that the cost of repairs will be over $30 million. Jim Woodruff Lock and Dam is a multipurpose project located 37 miles northwest of Tallahassee, Florida, on the Apalachicola River. In addition to generating electric power for northern Florida, the project aids navigation on the Apalachicola River below the dam and on the Chattahoochee and Flint Rivers above the dam. The navigation lock serves commercial water transportation and recreational boating. The power plant has been producing electric power since 1957. It has a total nameplate capacity of 30 MW, provided by three 10-MW generating units. The plant provides over 200 million kilowatt-hours of energy per year to Southeastern, which markets the energy to six wholesale customers in Florida. Small amounts of excess energy are sold to the Florida Power Corporation. 
The plant has experienced problems with reliability since the 1970s. Combined with the age of the plant (39 years), the cumulative effects of the poor initial design of the turbine and erosion of the downstream river channel since the plant was constructed have caused major outages, reduced efficiency, increased operations and maintenance costs, and reduced revenues to Southeastern. The plant’s variable pitch turbines are a unique design—only eight were ever manufactured. The turbines were designed for variable pitch in order to operate efficiently under a wide range of water flow conditions. For two of the plant’s three turbines, the operating linkages that allow the variable pitch feature to function have failed. In addition, erosion of the downstream river channel since the plant was constructed and the resulting increase in the operating head have placed major stress on the turbine blades. The operating heads at the plant routinely exceed those for which the turbines were originally designed, thus decreasing the extent to which the turbines are submerged. As a result, the units have exhibited increasingly severe vibration problems, leading to outages for repairs. Major outages have continued since the 1970s. For example, after unit 1 was shut down in October 1977 for 2 days for repairs, it was shut down again from July 1983 to May 1984, for a 313-day outage, and in April 1988, for a 60-day outage. In unit 2, cracks in the turbine blades were repaired in 1974; the Corps made additional repairs to the unit from July 1986 through February 1987, for a 207-day outage, and again in December 1988, for a 5-day outage. In unit 3, the Corps discovered and repaired cracks in the turbine blades in 1974 and 1975; additional repairs were made in April 1987, for a 59-day outage. Because of continuing operational problems, the Corps welded the plant’s turbine blades into a fixed position in 1988. 
This action improved the availability of the plant, but reduced the plant’s efficiency, because fixed turbine blades cannot be adjusted to take advantage of the varying release rates necessary to maintain adequate water depths for navigation. Loss of efficiency reduces the amount of energy that can be produced at the power plant, affecting its ability to fulfill contracts for power generation. The Corps estimated that the plant’s average annual output has been reduced by about 17 percent, or over 36 million kilowatt-hours per year, because of the welding of the blades into a fixed position. In addition, the costs of operating and maintaining the plant have increased over the years. According to the Corps, these increases are attributable to major maintenance work, the design and specifications for the major rehabilitation, and the addition of on-site operators. Five maintenance personnel are directly assigned to the plant, and additional personnel are brought in from other projects if the maintenance is extensive. The operational problems at Woodruff prompted Southeastern to complain to the Corps about a loss of revenue. In a letter to the Corps dated September 19, 1990, Southeastern expressed concerns about the plant’s operations, stating that the plant has not been able to operate at its fullest, resulting in reduced output and a loss in revenue. According to Southeastern, the Corps had to spill water because of the need to decrease the vibrations that occurred as the units operated. Southeastern added that the loss to its customers was even greater because the customers must replace the missing power by purchasing power from another utility at a higher rate. The letter further stated that the plant’s inefficient operation and the resulting loss in revenue had a significantly negative impact on the repayment schedule for the project and had caused Southeastern to seek a substantial increase in the power rates charged to its customers. 
According to Southeastern, the combined effects of the plant’s inefficient operation and the droughts of the late 1980s caused it to raise its wholesale customer rates on the Jim Woodruff system by nearly 100 percent from 1991 to 1993. Because of the plant’s increasing operations and maintenance costs and declining efficiency, the Corps in 1991 started a study for a major rehabilitation of Woodruff. The study, completed in 1993, recommended replacing the three turbines, rehabilitating the three generators, and replacing several peripheral electrical components, most notably the transformers, to restore the plant’s lost reliability and efficiency. According to the Corps, the field office submitted the major rehabilitation report to Corps headquarters in March 1992 for fiscal year 1994 funding. Corps headquarters rejected the report in May 1992. In March 1993, the field resubmitted the report to headquarters for fiscal year 1995 funding, and headquarters approved it in November 1993. However, the major rehabilitation plan, included in the Corps’ fiscal year 1995 budget request, was rejected by the Office of Management and Budget because the President’s fiscal year 1995 budget did not include any money for “new starts.” In 1994, the Corps included the plan as part of its fiscal year 1996 budget, and it was approved by the Office of Management and Budget in December 1994. In November 1995, the Congress made funds available to the Corps to rehabilitate the plant. Thus, it took the Corps about 2 years to prepare and approve the rehabilitation study and another 2 years to get congressional approval for the funding. It is estimated that the rehabilitation will be completed in 2001—about 10 years after the beginning of the study. The Corps estimates that the cost to rehabilitate the plant will be $30,600,000. 
[Table: outage history, listing each outage's duration in days and its cause and type—generator, turbine, and transformer repairs (forced or scheduled), inspections, testing for the plant rehabilitation report, and excessive ozone emissions. The duration values did not survive document conversion; only the cause labels remain.] On December 18, 1995, the Chairman, Subcommittee on Water and Power Resources, House Committee on Resources, requested that we examine certain operational and financial issues related to the Department of Energy's power marketing administrations.
As agreed in subsequent discussions, this statement focuses on the maintenance and operational efficiency of the hydroelectric power plants operated by the Corps that generate the power marketed by the Southeastern Power Administration (Southeastern). Specifically, we examined the extent to which (1) these power plants are experiencing outages and (2) the current planning and budgeting processes allow the Corps to perform timely and effective repairs and rehabilitations of its hydroelectric assets. To determine the extent to which the Corps' hydroelectric power plants in the Southeast are experiencing outages, we interviewed Corps officials in Washington, D.C.; Atlanta, Georgia; Savannah, Georgia; and Mobile, Alabama. We also contacted the Administrator and former Acting Administrator of the Southeastern Power Administration and other agency officials in Elberton, Georgia, and at the Department of Energy's headquarters in Washington, D.C. From the Corps' headquarters and South Atlantic Division, we obtained operating statistics (i.e., nameplate capacity) and information on the plants' reliability and availability. We also obtained data on plant outages from 1986 through 1995. We focused on outages of 30 days or longer to exclude less significant outages, and we discussed maintenance procedures with Corps and Southeastern officials. From Southeastern, we obtained estimates of the reduced revenues and increased rates that resulted from outages at the hydroelectric power plants in the Southeast; however, these estimates pertained to specific outages and were not applicable to the entire electric system from which Southeastern markets power. To explore in depth the reasons for any outages and the way the Corps responded to them, we concentrated our efforts on 11 hydroelectric power plants (which include 43 generating units) operated by the Corps' South Atlantic Division (Atlanta, Georgia) and the division's districts in Savannah, Georgia, and Mobile, Alabama.
These 11 plants on the combined Georgia-Alabama-South Carolina and Jim Woodruff systems have a total generator nameplate capacity of 1,960 MW (about 63 percent of the generating capacity from which Southeastern markets power) and account for 71 percent of Southeastern's power revenues. Corps and Southeastern officials agreed that these plants were generally representative in age and operating condition of the plants from which Southeastern markets power. For example, we selected power plants ranging in age from relatively new (11 years) to relatively old (47 years) and ranging in capacity from relatively small (30 MW) to relatively large (500 MW). From the 11 plants, we selected Millers Ferry and Jim Woodruff for more detailed case-study analysis. Although all 11 power plants we reviewed experienced outages from 1986 through 1995, these two plants had experienced lengthy outages stemming from problems in their design and the installation of their equipment. To determine whether the current planning and budgeting processes allow the Corps to perform timely and effective repairs and rehabilitations of its hydroelectric power plants, we obtained and reviewed Corps budget data from the agency's headquarters in Washington, D.C., and South Atlantic Division. We analyzed trends in the availability of appropriated funds from fiscal years 1986 through 1996, and we adjusted the funds for inflation by applying the gross domestic product deflators for the appropriate years. We attempted to determine the exact amounts of funds requested, appropriated, and spent for the operation, maintenance, rehabilitation, and repair of the 11 Corps hydroelectric power plants in our study from fiscal years 1986 through 1996. However, the data we requested were either not available or were not reported consistently by the Corps.
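The inflation adjustment described above follows standard GDP-deflator arithmetic. A minimal sketch of the calculation—the deflator values below are hypothetical placeholders, not the figures GAO actually applied:

```python
# Convert a nominal appropriation to constant (base-year) dollars using
# a GDP deflator. The deflator values used here are hypothetical
# placeholders for illustration, not the ones applied in the GAO analysis.
def to_constant_dollars(nominal, deflator, base_deflator):
    """Real dollars in the base year = nominal * (base-year deflator / year's deflator)."""
    return nominal * base_deflator / deflator

# Example: a hypothetical $10 million FY1986 appropriation restated in
# FY1996 dollars, assuming illustrative deflators of 69.2 (1986) and
# 100.0 (1996, the base year).
real_1996 = to_constant_dollars(10_000_000, 69.2, 100.0)
print(f"${real_1996:,.0f} in FY1996 dollars")
```

Restating all years in a common base year in this way is what allows appropriation trends to be compared across a decade without inflation masking real changes.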
We also interviewed representatives of the Corps, Southeastern, and the association of Southeastern's wholesale customers to obtain their views on the adequacy of the funding for operating, maintaining, and rehabilitating the Corps' hydroelectric power plants. We interviewed Corps budgeting and planning officials at headquarters, the South Atlantic Division, the Savannah District, and the Mobile District and obtained the guidelines for compiling annual budgets and studies on ways in which the Corps could improve its budgeting and planning systems. We reviewed lists compiled by Corps headquarters and the field offices on repairs that have been proposed over the next 10 years and the cost of these repairs. We also obtained Southeastern's views on the Corps' planning and budgeting functions and Southeastern's role in those processes. We performed our work from January through June 1996 in accordance with generally accepted government auditing standards.
GAO discussed the maintenance and repair of hydroelectric power plants operated by the Army Corps of Engineers in the Southeast, focusing on the extent to which: (1) the power plants are experiencing outages; and (2) planning and budgeting processes allow the Corps to perform timely and effective repairs and rehabilitation of its hydroelectric assets. GAO noted that: (1) due to lengthy outages since fiscal year (FY) 1986, the plants' availability to generate power dropped from 95.4 percent to 87.2 percent, and about $13 million in revenue has been lost; (2) some of the plants' units are aging, need repair or replacement, were poorly designed, were not properly installed, and have been adversely affected by the way they have been operated; (3) the need to spend more to maintain and repair the aging hydroelectric power plants will compete with the need to maintain and repair other Corps facilities; (4) the Corps' emphasis on routine, ongoing maintenance and repair work and the lengthy justification process for extensive work delay long-term repairs; and (5) the Corps is addressing the plants' planning and budgeting needs.
The SSO program covers all states with fixed guideway systems operating in their jurisdictions. FTA defines a rail fixed guideway system as any light, heavy, or rapid rail system, monorail, inclined plane, funicular, trolley, or automated guideway that is not regulated by the Federal Railroad Administration (FRA) and is included in FTA’s calculation of fixed guideway route miles, or receives funding under FTA’s formula program for urbanized areas, or has submitted documentation to FTA indicating its intent to be included in FTA’s calculation of fixed guideway route miles to receive funding under FTA’s formula program for urbanized areas. Figure 1 shows the types of systems that are included in the SSO program. In the SSO program, state oversight agencies are responsible for directly overseeing rail transit agencies. As of December 2009, 27 state oversight agencies exist to oversee rail transit in 26 states. According to FTA, states must designate an agency to perform this oversight function at the time FTA enters into a grant agreement for any “New Starts” project involving a new rail transit system, or before a transit agency applies for FTA formula funding. States have designated several different types of agencies to serve as oversight agencies, including state departments of transportation, public utilities commissions, or regional transportation funding authorities. FTA has a set of rules that an oversight agency must follow, such as developing a program standard that transit agencies must meet, reviewing transit agencies’ safety and security plans, conducting safety audits, and investigating accidents. In the program, rail transit agencies are mainly responsible for meeting the program standards that oversight agencies set out for them, which generally include developing a separate safety and security plan, developing a hazard management process, reporting accidents to oversight agencies within 2 hours, and other similar tasks. 
Under the program, FTA provides funding to oversight agencies only in limited instances, generally for travel or training. While oversight agencies are to include security reviews as part of their responsibilities, TSA also has security oversight authority over transit agencies. (See fig. 2 showing roles and responsibilities of participants in the program.) FTA's role in overseeing safety and security of rail transit is relatively limited. FTA relies on a staff member in its Office of Safety and Security to lead the SSO program. A program manager is responsible for the SSO program along with other duties. Additional FTA staff within the Office of Safety and Security assist with outreach to transit and oversight agencies and additional tasks. FTA regional personnel are not formally involved with the program's day-to-day activities, but officials from FTA regional offices help address specific compliance issues that occasionally arise and help states with new transit agencies establish new oversight agencies. FTA also relies on contractors to do many of the day-to-day activities, ranging from developing and implementing FTA's audit program of state oversight agencies to developing and providing training classes on system safety. Rail transit has been one of the safest modes of transportation in the United States. For example, according to DOT, in 2008, 57.7 people were injured traveling in motor vehicle accidents per 100 million miles traveled and 5.5 people were injured in commuter rail accidents per 100 million miles traveled. For rail transit, the rate was 0.5 people injured per 100 million miles traveled. The injury rate on rail transit has varied from 0.2 to 0.9 injuries per 100 million miles traveled since 2002. Also, the Washington Metro Red Line accident this summer marked the first fatalities involving a collision between two rail cars on a U.S. rail transit system in 8 years.
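The injury rates quoted above are easier to compare when expressed as ratios. A small illustration using only the 2008 DOT figures cited in the text:

```python
# Injury rates per 100 million miles traveled (DOT, 2008, as cited above).
rates = {
    "motor vehicle": 57.7,
    "commuter rail": 5.5,
    "rail transit": 0.5,
}

# Express each mode relative to rail transit, the safest of the three.
baseline = rates["rail transit"]
for mode, rate in rates.items():
    print(f"{mode}: {rate / baseline:.0f}x the rail transit injury rate")
```

On these figures, motor vehicle travel produced roughly a hundred times more injuries per mile than rail transit, and commuter rail roughly ten times more, which is the sense in which rail transit "has been one of the safest modes."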
However, according to FTA officials, the recent major incidents in Boston, San Francisco, and Washington have increased their concern about rail transit safety. In addition, FTA states that the number of derailments, worker injuries, and collisions has increased on rail transit systems as a whole in the last several years. Our 2006 report found that officials from the majority of oversight and transit agencies with whom we spoke stated that the SSO program enhances rail transit safety. Officials at several transit agencies cited improvements in reducing the number of derailments, fires, and collisions through actions undertaken as a result of their work with state oversight agencies. However, despite this anecdotal evidence, FTA had not definitively shown that the program had enhanced safety because it had neither established performance goals nor tracked performance. Also, FTA had not audited each state oversight agency in the previous 3 years, as the agency had stated it would. Therefore, FTA had little information with which to track oversight agencies’ performance over time. We recommended that FTA set and monitor performance goals for the SSO program and keep to its stated schedule of auditing state oversight agencies at least once every 3 years. Although FTA officials pointed out that tracking safety performance would be challenging in an environment where fatalities and incidents were low, they agreed to implement our recommendation. FTA assigned the task to a contractor and said that it would make auditing oversight agencies a priority in the future. We also found that FTA faced several challenges in assuring the effectiveness of the program and recommending improvements to transit agency safety practices. Funding challenges limited staffing levels and effectiveness. 
Officials at several state oversight agencies we spoke with stated that since FTA provided little to no funding for rail transit safety oversight functions, and because of competing priorities for limited state funds, they were limited in the number of staff they could hire and the amount of training they could provide. While FTA requires that states operate safety oversight programs, capital and operating grants are not available to support existing state oversight agencies once passenger service commences. FTA, however, has begun to provide training for state oversight agency staff. With the current financial crises most states are experiencing, states face increasing challenges in providing adequate funding for state oversight agencies. Also, in our 2006 report, we found that 10 state oversight agencies relied on the transit agencies they oversaw for a portion of their budgets. In those cases, the oversight agencies required that the transit agency reimburse the oversight agency for its oversight expenses. Expertise varied across oversight agencies. The level of expertise amongst oversight staff varied widely. For example, we found that 11 oversight agencies had staff with no previous career or educational background in transit safety or security. Conversely, another 11 oversight agencies required their staff to have certain minimum levels of transportation education or experience, such as having 5 years of experience in the safety field or an engineering degree. In the agencies in which oversight officials had little or no experience in the field, officials reported that it took several years before they became confident that they knew enough about rail transit operations to provide effective oversight— a process that new staff would likely have to repeat when the current staff leave their positions. Officials from 18 of the 24 oversight agencies with whom we spoke stated that additional training could be useful in providing more effective safety oversight. 
FTA, under the current system, does not have the authority to mandate a certain level of training for oversight agency staff. In response to our prior recommendation, FTA has created a recommended training curriculum and is encouraging oversight agency staff to successfully complete the curriculum and receive certification for having done so. Staffing levels varied across oversight agencies. The number of staff that oversight agencies devoted to safety oversight also varied. For example, we found that 13 oversight agencies dedicated less than one full-time equivalent (FTE) staff member to oversight. While in some cases the transit agencies overseen were small, such as a single streetcar line, we found one state that estimated it devoted 0.1 FTE to oversight of a transit agency that averaged 200,000 daily trips. Another state devoted 0.5 FTE to overseeing five different transit systems in two different cities. To help ensure that oversight agency staff were adequately trained for their duties, we recommended that FTA develop a suggested training curriculum for oversight agency staff and encourage those staff to complete it. FTA implemented our recommendation, and over 50 percent of state oversight agencies have staff who have completed at least the first tier of this training. Still, the number of staff devoted to safety oversight remains potentially problematic. FTA currently does not require that states devote a certain level of staffing or financial resources to oversight; without additional funding from the federal government or another source, and due to the fiscal difficulties most states are now experiencing, it is unlikely states will independently increase staffing for safety oversight. FTA, however, has asked many SSO agencies to perform formal manpower assessments to ensure they have adequate resources devoted to oversight functions. Enforcement powers of oversight agencies varied.
The individual authority each state oversight agency has over transit agencies varies widely. While the SSO program gives state oversight agencies authority to mandate certain rail safety practices, it does not give them authority to take enforcement actions, such as fining an agency or shutting down operations. Some states have given their oversight agencies such authority, however. In our 2006 report, we stated that 19 of 27 oversight agencies had no punitive authority, such as authority to issue fines, and those that did have such authority stated that they rarely, if ever, used it. While taking punitive action against a rail transit agency could be counterproductive (by, for instance, withholding already limited funding), several oversight agency officials told us the threat of such action could potentially make their agencies more effective; moreover, other DOT modal administrations with safety oversight authority can levy fines or take other punitive action against the entities they oversee. Confusion existed about agency responsibilities for security oversight. Our 2006 report also found that the transit and oversight agencies were confused about the role TSA would take in overseeing security and what role would be left to the state oversight agencies, if any. We made recommendations to TSA and FTA to coordinate their security oversight activities. The agencies agreed, and FTA officials reported they are now coordinating their audits with TSA. DOT is planning to propose major changes in FTA's role that would shift the balance of federal and state responsibilities for setting safety standards for rail transit agencies and overseeing their compliance with those standards.
Based on information provided to us by DOT, the department plans to propose a new federal safety program for rail transit, at an unspecified future date, with the following key elements: FTA, through legislation, would receive authority to establish and enforce minimum safety standards for rail transit systems not already regulated by FRA. States could become authorized to enforce the federal minimum safety standards by submitting a program proposal to FTA and receiving approval of their program. In determining whether to approve state safety programs, FTA would consider a state's capability to undertake rail transit oversight, including staff capacity, and its financial independence from the transit systems it oversees. DOT would provide federal assistance to approved state safety programs. Participating states could set more stringent safety standards if they choose to do so. In states that decide to "opt out" of participation or where DOT has found the program proposals inadequate, FTA would oversee compliance with and enforce federal safety regulations. These changes would give FTA the authority to directly regulate rail transit safety and, in cooperation with the states, to oversee and enforce compliance by rail transit systems with these regulations. These changes would bring its authority more in line with that of other modal administrations within DOT. For example, FRA, the Federal Motor Carrier Safety Administration, the Federal Aviation Administration, and the Pipeline and Hazardous Materials Safety Administration promulgate regulations and technical standards that govern how vehicles or facilities in their respective modes must be operated or constructed. In addition, each of these agencies uses federal or state inspectors, or a combination of both, to determine compliance with the safety regulations and guidance they issue.
Finally, these agencies can mandate corrective actions and levy fines on transportation operators, among other actions, for noncompliance with regulations. The new program DOT is planning to propose has the potential to address some challenges and issues we cited in our 2006 report. The consideration of staffing levels in deciding whether to approve states' proposed programs and the provision of funds to approved programs could increase levels of staffing. Requiring that participating states not receive funds from transit agencies would make the state agencies more independent of the transit agencies they oversee. Providing FTA and participating states with the authority to enforce minimum federal safety standards across the nation's transit systems could help ensure compliance with the standards and improved safety practices, and might prevent some accidents as a result. While the new program, as envisioned by DOT, may have some potential benefits, our work on the SSO program, other transit programs, and regulatory programs suggests there are a number of issues Congress may need to consider in deciding whether or how to act on DOT's proposal. Roles of the states versus FTA. The following questions would need to be considered when determining whether changes are needed in the balance of federal versus state responsibility for establishing rail transit safety: Are uniform federal standards and nationwide coverage essential to achieving rail transit safety? Which level of government, state or federal, has the capacity to do the job at hand, taking into account such factors as resources and enforcement powers? In addition, shifting federal-state responsibilities for oversight of rail transit safety would bring a number of operational challenges. These include finding the appropriate level of FTA oversight of state programs and allocating costs between the federal government and the states.
The new oversight system to be proposed would potentially involve major changes in the way states interact with FTA in overseeing transit safety. The new balance of state and federal responsibilities could take some time for transit agencies to adjust to, especially those that would now be reporting directly to federal officials. Adequate staff with needed skills. FTA would need to ensure it has adequate qualified staff to oversee safety under the new program, especially in states that opt out of participating in the new program. FTA’s current safety staff is very small as is the staff devoted to rail transit safety oversight in most state agencies. Building the capability within FTA, its contractors, and these state agencies to develop and carry out the envisioned program would pose a number of challenges. However, the actions FTA has taken in response to our 2006 recommendation to institute a training curriculum for oversight agency staff, would give it a head start on this process. Enforcement. Congress would need to determine which enforcement mechanisms to authorize FTA to use and FTA would need to develop an enforcement approach that makes the best use of these enforcement mechanisms. Other DOT modal administrations with safety oversight responsibilities, such as the Federal Aviation Administration and FRA, are authorized to issue fines or civil penalties to operators that violate regulations. However, transit agencies are usually publicly owned and face many financial challenges. As a result, fines and penalties could be counterproductive to enhancing safety when funding is at a premium and local riders or taxpayers ultimately could bear the cost of fines. Other enforcement tools are options. For example, FRA may order a locomotive, freight car, or passenger car out of service or may send warning letters to individuals if a safety violation is found or if an individual is not following safety procedures, among other enforcement actions. Cost. 
According to FTA officials, their estimates of the total cost of the new program the department plans to propose are very preliminary. Better estimates of the costs, if any, that states would bear under the new system will also be important before moving forward with this proposal. This could include considering any estimated costs the federal government would incur under various scenarios based on how many states opt out and how many new federal employees or contractors would be required under each scenario to act as trainers, inspectors, and administrative staff. Currently, states bear most of the costs for transit safety oversight. These additional federal costs would come as both the federal and state governments face significant and increasing fiscal pressures. Further, it is uncertain how the program would be paid for. Congress will need to determine whether riders, states, those who pay taxes to the Highway Trust Fund, the Department of the Treasury, or some combination of these sources would bear the cost of this program. In addition to the issues that Congress may need to address, FTA would face some challenges in implementing a new system of transit safety oversight. These include: Variations in the different types of transit. The U.S. rail transit system consists of several different types of vehicles, from heavy and light rail to monorails and funiculars or inclined planes. These vehicles operate on different kinds of track with different power sources and can vary from new modern vehicles to vehicles that are 30 or more years old. Setting federal safety regulations for these varying systems could be a lengthy process and could require multiple parallel rulemakings. Transition to the new system. If the new safety oversight system is approved, it will take some time to transition to the new system.
States currently performing safety oversight that opt out in favor of federal oversight will likely need to continue to perform their oversight functions until FTA has additional staff and an enforcement mechanism in place. However, a state may be less likely to replace staff who leave or ensure staff in place stay adequately trained if the state is in the process of giving over its oversight responsibilities to FTA. While the likely effect of this may be minimal, this situation could create the possibility of relaxed oversight during the transition period. As part of our ongoing review of challenges to improving rail transit safety, we will review states’ and FTA’s current efforts to oversee and enhance rail transit safety as well as DOT’s efforts to strengthen the federal role in overseeing rail transit safety. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information on this statement, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Individuals making key contributions to this testimony were David Wise, Director; Catherine Colwell, Judy Guilliams- Tapia, and Raymond Sendejas, Assistant Directors; Timothy Bober; Martha Chow; Antoine Clark; Colin Fallon; Kathleen Gilhooly; David Goldstein; Joah Iannotta; Hannah Laufe; Sara Ann Moessbauer; and Stephanie Purcell. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Rail transit generally has been one of the safest forms of public transportation. However, several recent notable accidents are cause for concern. For example, a July 2009 crash on the Washington Metro Red Line resulted in nine deaths. The federal government does not directly regulate the safety of rail transit. Through its State Safety Oversight program, the Federal Transit Administration (FTA) requires states to designate an oversight agency to directly oversee the safety of rail transit systems. In 2006, the Government Accountability Office (GAO) issued a report that made recommendations to improve the program. The Department of Transportation (DOT) is planning to propose legislation that, if passed, would result in a greater role for FTA in regulating and overseeing the safety of these systems. This statement (1) summarizes the findings of GAO's 2006 report and (2) provides GAO's preliminary observations on key elements DOT has told us it will include in its legislative proposal for revamping rail transit safety oversight. It is based primarily on GAO's 2006 report, an analysis of the Administration's proposal through review of documents and interviews with DOT officials, and GAO's previous work on regulatory programs that oversee safety within other modes of transportation. GAO's 2006 report was based on a survey of the 27 state oversight agencies and transit agencies covered by FTA's program. GAO provided a draft of this testimony to DOT officials and incorporated their comments as appropriate. GAO's 2006 report found that officials from the majority of the state oversight and transit agencies stated that the State Safety Oversight program enhances rail transit safety but that FTA faced several challenges in administering the program. For example, state oversight agencies received little or no funding from FTA and had limited funding for staff. In fact, some required that the transit agencies they oversaw reimburse them for services. 
Also, expertise, staffing levels, and enforcement powers varied widely from agency to agency, resulting in a lack of uniformity in how oversight agencies carried out their duties. As of 2006, 13 oversight agencies were devoting the equivalent of less than one full-time employee to oversight functions. In addition, 19 oversight agencies GAO contacted lacked certain enforcement authority, such as authority to issue fines, and those that did have such authority stated that they rarely, if ever, used it. DOT is planning to propose major changes in FTA's role that would shift the balance of federal and state responsibilities for oversight of rail transit safety. According to DOT officials, under this proposal, the agency would receive authority to establish and enforce minimum standards, although states still could maintain an oversight program. States could become authorized to enforce these standards if FTA determines that their programs are capable and financially independent of the transit systems they oversee. FTA would provide financial assistance to approved programs. Such changes would have the potential to address challenges GAO cited in its 2006 report. For example, providing funding to participating state agencies could help them maintain an adequate number of trained staff, and providing FTA and participating states with enforcement authority could help better ensure that transit systems take corrective actions when problems are found. Congress may need to consider several issues in deciding whether or how to act on DOT's proposal. These include determining what level of government has the best capacity to oversee transit safety, ensuring that FTA and state oversight agencies would have adequate and qualified staff to carry out the envisioned program, and understanding the potential budgetary implications of the program.
Definition of food security: When all people at all times have both physical and economic access to sufficient food to meet their dietary needs for a productive and healthy life. Food availability—achieved when sufficient quantities of food (supplied through household production, other domestic output, commercial imports, or food assistance) are consistently available to all individuals within a country. Food access—ensured when households and all individuals within them have adequate resources to obtain appropriate foods for a nutritious diet. Food insecurity—the lack of access of all people at all times to sufficient, nutritionally adequate, and safe food, without undue risk of losing such access—results in hunger and malnutrition, according to FAO. FAO estimates that 90 percent of the hungry suffer from chronic malnutrition. About 80 percent of the hungry worldwide live in rural areas—about half of them are smallholder peasants; 22 percent are landless laborers; and 8 percent live by using natural resources, such as pastoralists. Inadequate food and nutrition have profound impacts. Undernourished children have a smaller chance of survival and suffer lasting damage to their mental and physical development. In addition, work productivity is often impaired among undernourished adults. Food aid has helped to address the immediate nutritional requirements of some vulnerable people in the short term, but it has not addressed the underlying causes of persistent food insecurity. World leaders have agreed upon two different goals to halve world hunger by 2015: the first, established at the 1996 WFS in Rome, is to halve the total number of undernourished people worldwide; the second, the first of eight UN MDGs set in 2000, also referred to as MDG-1, aims to eradicate extreme poverty and hunger by halving the proportion of undernourished people from the 1990 level by 2015. Both of these goals apply not only globally but also at the country and regional levels.
Although both the WFS and MDG targets to cut hunger are based on FAO’s estimates of the number of undernourished people, because the MDG target is defined as the ratio of the number of undernourished people to the total population, it may appear that progress is being made when population increases even though there may have been no reduction in the number of undernourished people, according to FAO. Figure 1 is a timeline of some of the key events related to food security and the WFS and MDG targets. To reach the goal set at the 1996 WFS, world leaders approved a Plan of Action, the focus of which is to assist developing countries in becoming more self-reliant in meeting their food needs by promoting broad-based economic, political, and social reforms at the local, national, regional, and international levels. The WFS participants endorsed various actions but did not enter into any binding commitments. They agreed to review and revise their national plans, programs, and strategies, where appropriate, to achieve food security that is consistent with the WFS Plan of Action. Participants also agreed to submit periodic reports to FAO’s Committee on World Food Security (CFS) on the implementation of the Plan of Action to track progress on food security. To monitor progress toward the target of halving the number of undernourished people worldwide, FAO periodically updates its estimates of the undernourished population at the global level as well as at the country level. FAO publishes these estimates in its annual report on The State of Food Insecurity in the World (SOFI), which was first issued in 1999. The same estimates are used by the UN to track progress toward the MDG hunger goal. As shown in figure 2, food insecurity in sub-Saharan Africa is severe and widespread. According to FAO’s estimates, one out of every four undernourished people in the developing countries lives in sub-Saharan Africa. 
This region also has the highest prevalence of food insecurity, with one out of every three people considered undernourished. In April 2008, FAO reported that 21 countries in sub-Saharan Africa, out of 37 countries worldwide, were critically food-insecure and required external assistance. Sub-Saharan Africa has not made much progress toward the WFS and MDG hunger goals to halve, respectively, the total number of and the proportion (or the percentage) of undernourished people by 2015. Between the periods of 1990 to 1992 and 2001 to 2003, the number of undernourished people in the region increased from 169 million to 206 million, and decreased in only 15 of the 39 countries for which data were reported. The prevalence of hunger, or the proportion of undernourished people in the population, has declined slightly, from 35 percent in 1990 to 1992 to 32 percent in 2001 to 2003—but this change is due to population growth. According to FAO’s projections, the prevalence of hunger in sub- Saharan Africa will decline by 2015, but the number of hungry people will not fall below the 1990 to 1992 levels. By 2015, FAO estimates that sub- Saharan Africa will have 30 percent of the undernourished population in developing countries, compared with 20 percent in 1990 to 1992. These data suggest that sub-Saharan Africa needs to substantially accelerate progress if it is to meet the WFS and MDG targets by 2015. Figure 2 shows the prevalence of undernourishment around the world and also shows, for each of the four selected countries in East Africa and southern Africa that we focused on in our review, the progress needed to reduce the number of undernourished people to meet the WFS and MDG targets by 2015. 
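The divergence between the count-based WFS target and the proportion-based MDG target can be seen directly in the figures above. As a back-of-the-envelope check (a sketch for illustration; the implied population totals are derived here and are not FAO-published figures):

```python
# Implied sub-Saharan African population, inferred from FAO's reported
# undernourishment counts and prevalence rates (illustrative only).
undernourished_1990 = 169e6  # people, 1990-92
undernourished_2001 = 206e6  # people, 2001-03
prevalence_1990 = 0.35
prevalence_2001 = 0.32

pop_1990 = undernourished_1990 / prevalence_1990  # ~483 million
pop_2001 = undernourished_2001 / prevalence_2001  # ~644 million

# The count of hungry people rose by about 37 million, yet prevalence
# fell by 3 percentage points, because the implied population grew by
# roughly 160 million over the same period.
print(round(pop_1990 / 1e6), round(pop_2001 / 1e6))  # 483 644
```

This is the mechanism FAO cautions about: population growth alone can show progress against a proportion-based target even while the absolute number of undernourished people increases.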
The principal development partners that implement programs to advance agriculture and food security in sub-Saharan Africa are as follows: Regional organizations and host governments: At the regional level, the primary vehicle for addressing agricultural development in sub-Saharan Africa is the New Partnership for Africa's Development (NEPAD) and its Comprehensive Africa Agriculture Development Program (CAADP). The African Union (AU) established NEPAD in July 2001 as a strategic policy framework for the revitalization and development of Africa. In 2003, AU members endorsed the implementation of CAADP, a framework aimed at guiding agricultural development efforts in African countries, and agreed to allocate 10 percent of government spending to agriculture by 2008. Subsequently, member states established a regionally supported, country-driven CAADP roundtable process, which defines the programs and policies that require increased investment and support by host governments; multilateral organizations, including international financial institutions; bilateral donors; and private foundations. According to USAID officials, the CAADP roundtable process is designed to increase productivity and market access for large numbers of smallholders and promote broad-based economic growth. At the country level, host governments are expected to lead the development of a strategy for the agricultural sector, the coordination of donor assistance, and the implementation of projects and programs, as appropriate. Multilateral organizations: Several multilateral organizations and international financial institutions implement programs that contribute to agricultural development and food security—providing about half of the donor assistance to African agriculture in 2006.
These entities include the following Rome-based UN food and agriculture agencies: FAO, whose stated mandate is to achieve food security for all and lead international efforts to defeat hunger; WFP, which is the food aid arm of the UN; and IFAD, which finances (through loans and grants) efforts in developing countries to reduce rural poverty, primarily through increased agricultural productivity, with an emphasis on food production. IFAD and other international financial institutions, such as the World Bank and the African Development Bank, play a large role in providing funding support for agriculture. For example, the World Bank also provides Secretariat support for the Consultative Group on International Agricultural Research (CGIAR), a partnership of countries, international and regional organizations, and private foundations supporting the work of 15 international agricultural research centers, whose work has played an important role in improving agricultural productivity and reducing hunger in the developing countries. Together, the World Bank, IFAD, and the African Development Bank account for about 73 percent of multilateral ODA to agriculture for Africa from 1974 to 2006. In addition, the New York-based UNDP is responsible for supporting the implementation of the MDG targets and houses the UN MDG Support Team. Bilateral donors, including the United States: The major bilateral donors have focused on issues of importance to Africa at every Group of Eight (G8) summit since the late 1990s. In 2005, these donors reiterated their commitment to focus on Africa as the only continent not on track to meet the MDG targets by 2015 and further committed themselves to supporting a comprehensive set of actions to raise agricultural productivity, strengthen urban-rural linkages, and empower the poor, based on national initiatives and in cooperation with NEPAD, CAADP, and other African initiatives. 
At that time, the commitments of the G8 and other donors were expected to lead to an increase in ODA to Africa of $25 billion a year by 2010, more than twice the amount provided in 2004. (See app. V for a summary discussion of the role of other development partners, such as NGOs and private foundations.) In the wake of the 1996 WFS, the United States adopted a number of development initiatives for Africa. These initiatives—including the Africa Food Security Initiative in 1998, the Africa Seeds of Hope Act in 1998, and the African Growth and Opportunity Act of 2000—reflect U.S. efforts to improve the deteriorating food security situation in sub-Saharan Africa. The consistent U.S. positions at the summit were that the primary responsibility for reducing food insecurity rests with the host governments, and that it is critical that all countries promote self-reliance and facilitate food security at all levels. (See app. II for a summary of U.S. participation in the 1996 summit.) In 2002, the United States launched IEHA, which represents the U.S. strategy to help fulfill the MDG of halving hunger in Africa by 2015. In 2005, USAID, the primary agency that implements IEHA, committed to providing an estimated $200 million per year for 5 years through the initiative, using existing funds from Title II of Public Law 480 food for development and assorted USAID Development Assistance and other accounts. IEHA is intended to build an African-led partnership to cut hunger and poverty by investing in efforts to promote agricultural growth that is market-oriented and focused on small-scale farmers. IEHA is currently implemented in three regional missions in Africa as well as in eight bilateral missions: Kenya, Tanzania, and Uganda in East Africa; Malawi, Mozambique, and Zambia in southern Africa; and Ghana and Mali in West Africa. 
Low agricultural productivity, limited rural development, government policy disincentives, and poor health are among the main factors contributing to persistent food insecurity in sub-Saharan Africa. Additional factors, including rising global commodity prices and climate change, will likely further exacerbate food insecurity in the region (see fig. 3). (For further discussions of factors and interventions affecting food security, including a framework for addressing food security issues, see table 2 in app. III. Additional examples of the interventions, as well as the summary results of our structured panel discussions with donors and NGOs during fieldwork, are discussed in app. IV.) One of the most important factors that contribute to food insecurity in sub-Saharan Africa is its low agricultural productivity. Raising agricultural productivity is vital to all elements of food security: food availability, food access, and food utilization. Although imports can be used to supplement domestic agricultural production in some countries, importing staple foods may not be practical because some main staples, such as cassava, are generally not traded in the international market. In addition, poor infrastructure in many African countries makes it extremely costly to transport imported foods to remote areas. Furthermore, because the income of the majority of people in developing countries depends directly or indirectly on agriculture, growth in this sector would have widespread poverty-reducing benefits and improve food access for the poor. The World Bank pointed out in its 2008 World Development Report that agriculture’s ability to generate income for the poor, particularly for women, is more important for food security than its ability to increase local food supplies. According to FAO, poverty is a main immediate cause of food insecurity in sub-Saharan Africa. 
Agriculture can also help enhance diet quality and diversity through new and improved crop varieties, thereby improving food utilization and nutritional status. Sub-Saharan Africa has lagged behind other developing countries in improving agricultural productivity. Since the early 1960s, grain yield in the rest of the world has increased almost 2.5 percent annually (see fig. 4). In contrast, grain yield in sub-Saharan Africa has stagnated, with an annual increase of only approximately 1 percent. As a result, yield of basic food staples in sub-Saharan Africa, such as maize, is much lower than that of other countries. For example, Zambia produces about 1,800 kilograms of maize on a hectare of land, while China produces almost 3 times as much on the same amount of land. Overall, the gap between the average grain yield in sub-Saharan Africa compared with the rest of the world’s developing countries has widened over the years. By 2006, the average grain yield in sub-Saharan Africa was only about 40 percent of the rest of the world’s developing countries. Research has also shown that the expansion of food production has taken a very different course in Asia than in sub-Saharan Africa, where increases in food staples were achieved largely by expanding the area cultivated, not by increasing the yield on existing acreage. Low agricultural productivity growth in sub-Saharan Africa is partially due to inadequate investment and the limited use of modern inputs and farming practice. Panelists in all four countries we visited reported difficulty in accessing critical inputs, such as land, seed, fertilizer, and water, due to their high costs and limited availability. The panelists also noted that farm management practices were weak in all four countries. FAO data show that the investment per hectare of land in sub-Saharan Africa is about one third of the world’s average. 
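The scale of that yield gap follows from compound growth. A rough sketch, assuming constant annual yield growth of about 2.5 percent elsewhere versus 1 percent in sub-Saharan Africa over the roughly 45 years from the early 1960s to 2006 (the horizon and constant rates are simplifying assumptions):

```python
# Compounding the approximate annual growth rates cited above shows how
# a modest annual gap becomes a large cumulative one. The 45-year
# horizon and constant rates are simplifying assumptions.
years = 45
rest_of_world = 1.025 ** years  # ~3.0x the early-1960s yield
ssa = 1.01 ** years             # ~1.6x the early-1960s yield
ratio = ssa / rest_of_world     # ~0.52
```

Even if yields had started equal, sub-Saharan Africa would end the period at roughly half the rest of the world's level; the reported figure of about 40 percent also reflects a gap that already existed at the start of the period.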
Less than 1 percent of the agricultural land in sub-Saharan Africa is irrigated, thereby making agricultural production prone to natural disasters, such as droughts. Sub-Saharan Africa uses far fewer inputs, such as fertilizer and pesticide, than other parts of the world. For example, its pesticide use is only about 5 percent of the world's average, which was 0.39 kilograms per hectare in 1998 to 2000 (see table 1). The World Bank reports that while scientific plant breeding has improved agricultural production throughout much of the world, sub-Saharan Africa lags behind in the adoption of these new varieties. For example, while at least 80 percent of the crop area in Asia was planted with improved varieties of rice, maize, sorghum, and potatoes, only about 20 percent to 40 percent of the crop area in sub-Saharan Africa used new varieties in these categories. According to several USAID officials, agricultural productivity has also lagged in sub-Saharan Africa, in part because innovations in science and technologies, such as improved seed and soil fertility systems, have not been transferred and adapted to each country's unique agro-ecosystem. Limited rural development has also been a primary factor aggravating food insecurity in sub-Saharan Africa. The majority of the population, as well as the majority of the poor, lives in the rural areas of the region. Weak rural infrastructure and lack of rural investment, among other factors, limit the potential for agricultural development and opportunities for nonfarm income. Panels in all four countries we visited cited poor infrastructure and farmers' lack of access to microcredit as challenges. Rural development in sub-Saharan Africa has suffered from weak infrastructure, such as lack of rural telecommunications, electricity, and roads.
Although the development community has recognized the importance of improving rural infrastructure for poverty reduction and agricultural growth, infrastructure in the region is generally in a frail condition. For example, IFPRI reported that progress in paved roads is almost nonexistent in sub-Saharan Africa, and the World Bank reported that less than half of the rural population in this region lives next to an all-season road. The lack of adequate rural roads increases distribution costs, adds to postharvest food spoilage, and inhibits the development of local and regional markets as well as access to those markets. Many rural households also do not have access to safe drinking water, electricity, modern communication services, or good transportation. For example, in Burkina Faso, Uganda, and Zambia, walking is the principal means of transportation for 87 percent of rural residents. IFPRI concluded that it is the poor households within the rural areas that have the least access to infrastructure. Farmers' lack of access to credit also hinders rural development. The World Bank noted that almost all countries in Africa have a large unmet demand for agricultural credit and rural finance. With inadequate financing in the short term, farmers find it difficult to buy inputs and seeds. In the long term, they are unable to invest in land improvement, better technology, or irrigation development. The International Monetary Fund (IMF) noted that rural credit in sub-Saharan Africa is hampered by land tenure systems that prevent the use of land as collateral, the absence of physical collateral, the high risk associated with rain-fed agriculture and sharp commodity price fluctuations, and poor transport and communication facilities. Banks that specialize in agricultural lending have become insolvent in many sub-Saharan African countries, or have had to be rescued at large public cost, with many of these banks collapsing through the 1980s.
Each of the panels we conducted in the four countries we visited cited weak governance or deficient agricultural policies as challenges, with one panelist noting that government policies can be a disincentive to agricultural growth. These policies can have a detrimental impact on the rural poor. While Asia has fostered growth in agriculture by providing credit to support prices and input subsidies to farmers, sub-Saharan African governments have taxed agriculture more than the governments of other regions. For example, according to the government of Tanzania's 2007/2008 Agricultural Sector Review, Tanzanian farmers must pay about 55 taxes, levies, and fees to sell their agricultural products, which is equivalent to 50 percent of the products' price. The World Bank noted that efforts by local governments to raise local revenue in Tanzania have occasionally added a significant tax burden to agriculture, with little benefit. A World Bank study found that of the 18 countries studied, the 3 with the highest tax rates on the agricultural sector were all in sub-Saharan Africa—Côte d'Ivoire (49 percent), Ghana (60 percent), and Zambia (46 percent). While progress has been made over the past two decades by numerous developing countries in reducing these policy biases, many welfare- and trade-reducing price distortions remain. These policies continue to provide disincentives for agricultural development and investment. Other government policies, such as subsidies to agriculture, if used improperly, can also negatively affect agriculture and food security. For example, a World Bank report notes that the government of Zambia's policy of subsidizing smallholders' maize production has had a number of long-term effects, including a loss of farmers' skills and knowledge and increased dietary concentration on subsidized maize meal among Zambian people.
We met with officials in Zambia who also expressed concern that Zambian maize subsidies led to overreliance on maize meal for nutrition and underreliance on other sources of food, such as vegetables. Poor health also exacerbates food insecurity in sub-Saharan Africa, according to panels in the four countries we visited, through its adverse impact on the agricultural workforce. For example, HIV has taken a heavy toll on the population and agricultural production of sub-Saharan Africa, because two thirds of those in the world who have HIV live in that region. HIV is concentrated in the most economically productive groups, those aged 15 to 45 years, with slightly more women infected than men. UNDP noted that more than one quarter of Africans are directly affected by the HIV epidemic. HIV/acquired immunodeficiency syndrome (AIDS) has a profound impact on poverty by reducing adults’ capability to work and raising mortality among young adults. In addition, malaria kills over 1 million people each year, according to the World Health Organization (WHO), mostly in Africa. The World Bank notes that there is a two-way relationship between malaria and agriculture. Specifically, on one hand, when farmers become ill or die from malaria, agricultural production decreases because of lost labor, knowledge, and assets. On the other hand, some methods that farmers use to increase agricultural production, such as increased irrigation, can increase the risk of malaria by increasing the population of mosquitoes. Furthermore, WHO estimates that there were 14.4 million cases of tuberculosis worldwide in 2006, and that Africa has the highest incidence of the disease—363 cases per 100,000 people. Tuberculosis spreads particularly rapidly in areas with high concentrations of livestock. Global prices for fuel and agricultural commodities have been rising significantly due to various factors, further exacerbating food insecurity. 
From 2000 to 2008, oil prices are estimated to have increased by 238 percent, grain prices by 175 percent, and vegetable oil prices by 184 percent (see fig. 5). The growing use of agricultural products, such as soybeans and corn, for biofuels has raised the price of these commodities and reduced the amount of land available for production of other food commodities. (See app. VI for further discussion of biofuels and their impacts on food security.) Economic growth in large countries, such as China and India, has also raised demand for food—through both increased incomes and shifting dietary patterns. Droughts in major grain-producing countries, such as Australia, and record-low grain reserves have further constrained world supplies and increased the prices of agricultural goods. Experts suggest that rising fuel and commodity prices are negatively impacting African food security efforts through several channels, as follows: Higher fuel prices increase the prices of fertilizer and other inputs for farmers and make harvesting, storage, and transportation of agricultural production more expensive. Higher fuel import costs also limit available foreign exchange for imports of food. USDA reports that official development assistance has fallen well short of rising energy import bills. Twenty-two countries—15 of which are in sub-Saharan Africa—depend on imported fuel, import grain, and report a prevalence of undernourishment exceeding 30 percent, according to FAO. Higher agricultural prices hurt many of Africa's food-insecure, including low-income consumers who spend a large share of their income on grains and farmers who buy more food than they produce. Food-insecure populations are likely to be net buyers of food, and many sub-Saharan African countries are, in fact, net importers of food.
In February 2008, FAO announced that 21 African countries are in crisis as a result, in part, of higher food prices, while nutritional studies estimate that 16 million additional people would be affected by food insecurity for every 1 percent increase in staple food prices, with many of these people being in Africa. In the long term, while higher grain prices provide incentives to expand agricultural production, complementary policies and investments in technology and market development may be required. Higher fuel and commodity prices increase delivery costs for emergency food aid programs to Africa's most food-insecure. For the largest U.S. emergency food aid program, USAID has reported that commodity costs increased by 41 percent and transportation costs increased by 26 percent in the first half of fiscal year 2008. As a result, USAID projects a $265 million shortfall in this year's food aid budget. According to our estimates, that $265 million could provide enough food aid to reach about 4.5 million vulnerable people in sub-Saharan Africa during a typical peak hungry season lasting 3 months. Similarly, in March 2008, WFP appealed to the international community, including the United States, to compensate for the growing shortfall in its food aid budget. Climate change is also an important emerging challenge that is expected to worsen African food insecurity. Key climate change models conclude that global warming has occurred and, since the mid-twentieth century, has been largely attributable to human activities, such as the burning of fossil fuels and deforestation. Several models predict further global warming, changed precipitation patterns, and increased frequency and severity of damaging weather-related events for this century. IFPRI reports that sub-Saharan Africa may be hardest hit by climate change, with one estimate predicting that temperature increases for certain areas may double those of the global average.
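The food aid shortfall estimate earlier in this section implies a rough unit cost, which can be back-calculated as follows (a hypothetical sketch from the report's figures; USAID does not publish this per-person rate here):

```python
# Implied food aid cost per person per month, back-calculated from the
# projected shortfall (hypothetical; for illustration only).
shortfall = 265e6  # projected fiscal year 2008 shortfall, in dollars
people = 4.5e6     # vulnerable people the shortfall could otherwise reach
months = 3         # typical peak hungry season

cost_per_person_month = shortfall / (people * months)  # ~$19.6
```

On these figures, each dollar of shortfall translates fairly directly into people unreached: roughly $20 covers one person for one month of the peak hungry season.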
Since sub-Saharan African countries have a lower capacity to adapt to variable weather, models also predict that climate change will further reduce African agricultural yields and will increase the number of people at risk of hunger. Climate change affects agriculture in several ways: higher temperatures shorten the growing season and adversely affect grain formation; reduced precipitation levels limit the availability of water to grow rain-fed crops; variable climates shift production to marginal lands and intensify soil erosion; rising sea levels threaten coastal agricultural land; and climate extremes, such as floods and droughts, result in crop failure and livestock deaths. Accounting for these effects, numerous studies seek to estimate the impact of climate change on African agricultural yields. By 2060, for example, the United Nations Environment Program projects a 33 percent reduction in grain yield in sub-Saharan Africa, while FAO predicts that the number of Africans at risk of hunger will increase to 415 million. (For further discussion of climate change, see app. VI, which also includes a compendium of the results of several studies that project adverse impacts from climate change on African agriculture.) Despite their commitment to halve hunger in sub-Saharan Africa by 2015, efforts of host governments and donors, including the United States, to accelerate progress toward that goal have been insufficient. First, host governments have not prioritized food security as a development goal, and few have met their 2003 pledge to direct 10 percent of government spending to agriculture. Second, donors reduced the priority given to agriculture, and their efforts have been hampered by difficulties in coordination and deficiencies in estimates of undernourishment used to measure progress toward attaining the goals to halve hunger by 2015. 
Third, limited agricultural development resources, increased demand for emergency food aid, and a fragmented approach impair U.S. efforts to end hunger in sub-Saharan Africa. Host government efforts in sub-Saharan Africa have been hampered by limited prioritization of food security in poverty reduction strategies and slow follow-through on CAADP goals, low agricultural spending levels, and weak capacity of government institutions to sustain food security interventions and to report on progress toward goals to halve hunger by 2015. Despite their commitment in the November 1996 Rome Declaration on World Food Security and the World Food Summit Plan of Action to achieve food security for all, some host governments have not prioritized food security in their strategies and use of resources. An FAO-commissioned review of the Poverty Reduction Strategy Paper (PRSP) process found a lack of consistency among policies, strategies, and interventions for alleviating food insecurity and poverty. Developing countries prepare a PRSP every 3 to 5 years through a participatory process with civil society and donors. As country-owned documents that establish development priorities and serve as the basis for assistance from the World Bank and other donors, PRSPs are to include a country poverty assessment and clearly present the priorities for macroeconomic, structural, and social policies. Of 10 African PRSPs reviewed in the FAO-commissioned review, only half included policies to address food insecurity and less than half included interventions to address food insecurity. Furthermore, several delegates who attended the 2004 Committee on World Food Security meeting expressed concern that food security and rural development issues were not adequately reflected in the PRSPs of many countries.
Similarly, our analysis of World Bank and IMF joint assessments of current PRSPs for eight countries in East Africa and southern Africa found that food security and agricultural development require greater prioritization in more than half of the strategies examined. Although African leaders pledged their commitment to prioritize agricultural development in the CAADP framework, both the initial planning process and the actual implementation of the CAADP framework at the country level have been slow. According to a World Bank official, CAADP's initial planning process did not begin until 2005, 2 years after the framework was developed, because it involved (1) forming stakeholder groups at the regional and continental levels and (2) establishing credibility within the development community. Thus, country-level implementation did not start until 2007. Regional entities representing 40 countries in East Africa, West Africa, and southern Africa have continued to encourage the implementation and acceleration of CAADP. However, by the end of 2008, only 13 of the 40 countries are expected to have completed the initial planning process and organized a roundtable to formally adopt a CAADP compact. The remaining 27 countries are scheduled to complete the entire process by the summer of 2009. However, for those countries that will formally adopt a CAADP compact, it is unclear whether concrete results will follow. According to an IFPRI official, because CAADP is still in the early stages of implementation, it is difficult to demonstrate the impact of CAADP efforts to date. Although African leaders in 2003 pledged to devote 10 percent of government spending to agriculture, according to an IFPRI study issued in 2008, most countries in Africa—with the exception of Ethiopia, Malawi, Mali, and Burkina Faso—had not reached this goal as of 2005. Of the four countries we reviewed—Kenya, Mozambique, Tanzania, and Zambia—none had met the goal as of 2005.
Mozambique was close to reaching the goal, and government spending for agriculture in Zambia has shown an upward trend since 2002. However, as shown in figure 6, government spending for agriculture in Kenya and Tanzania from 2002 to 2005 was well below the CAADP goal. According to estimates by several research organizations, the total financial investment required to develop agriculture and halve hunger in sub-Saharan Africa by 2015 is significant, and experts conclude that the majority of African countries will need to substantially scale up spending for their agricultural sectors. IFPRI estimated that investments of $32 billion to $39 billion per year would be required for agriculture in sub-Saharan Africa, more than 3 to 4 times the level in 2004. Specifically, Kenya's spending would need to increase by up to 12 times its 2004 levels; Mozambique's spending would need to double; Tanzania would need to triple its 2004 spending levels; and Zambia would need to spend up to 9 times its 2004 total. (See fig. 6 for a comparison of actual 2004 agricultural sector spending and the annual agricultural sector spending required under different scenarios to halve hunger by 2015 in Kenya, Mozambique, Tanzania, and Zambia.) Host governments' institutional capacity affects whether they can eventually take over development activities at the conclusion of donor assistance, and some lack the capacity to sustain donor-assisted food security interventions over time. In a 2007 review of World Bank assistance to the agricultural sector in Africa, the World Bank Independent Evaluation Group reported that only 40 percent of the bank's agriculture-related projects in sub-Saharan Africa had been sustainable, compared with 53 percent for its projects in other sectors. 
For example, the World Bank found the expected sustainability of two agriculture projects in Tanzania to be unrealistic, given the government’s limited capacity to generate the projected public sector resources. Similarly, IFAD maintains that sustainability remains one of the most challenging areas that require priority attention. An annual report, issued by IFAD’s independent Office of Evaluation, on the results and impact of IFAD operations between 2002 and 2006 rated 45 percent of its agricultural development projects satisfactory for sustainability. Donors’ exit strategies vary depending on host governments’ capacity to continue their assistance activities. For some sub-Saharan African countries, the handover may be progressive—that is, a relevant government ministry gradually takes over the responsibilities of certain food security interventions in specific geographic regions as the government’s capacity improves. For example, because the government of Lesotho currently lacks the capacity to run the WFP-funded school-feeding program throughout the country, WFP has targeted schools in remote, inaccessible mountainous areas and expects to hand over full responsibility to the government by 2010. Political instability can also impact the sustainability of food security, even when the handover is expected to be successful. For example, although the director of the UN Millennium Village in Sauri, Kenya, has been relying on effective coordination with several Kenyan government ministries to enable the village to continue its operations after the UN’s departure, recent postelection turmoil in the country has raised uncertainties about the project’s long-term sustainability. All participating governments and international organizations agreed to submit a biannual national progress report to FAO’s Committee on World Food Security on the implementation of the WFS Plan of Action. 
However, many governments have not submitted reports, and the quality of the reports that have been submitted has varied. Successful reporting requires a lengthy consultation process with government officials and other stakeholders to answer several questions about indicators of progress that cover 7 commitments and 27 objectives. To make the process easier, FAO revised its reporting requirements in 2004, but the reporting rate has remained low. In 2006, the last time that the reports were due, only 79 member states and organizations, such as the World Bank and WFP, had submitted progress reports on the WFS Plan of Action to FAO’s Committee on World Food Security, according to FAO. Of these 79 member states and organizations, only 17 were from sub-Saharan Africa. FAO cited the limited capacity of government institutions as one of the main reasons for low reporting rates on progress toward hunger targets. According to FAO, government officials working within ministries of agriculture are responsible for reporting on their country’s national food security action plan. However, some government ministries that are responsible for reporting lack the capacity to prepare a comprehensive report on all seven commitments because they do not have the support they require from other domestic institutions and agencies. According to FAO, the poor quality and inconsistency of the national progress reports have not allowed FAO to draw general substantive conclusions. While most national progress reports provide information on policies, programs, and actions being taken to reduce undernourishment, few of the reports provide information on the actual results of actions taken to reduce the number of undernourished people. In addition, the content of the reports varies. 
Specifically, some countries either (1) provide only selective information on certain aspects of food security that they consider most relevant, such as food stocks or reserve policies; (2) provide variable emphasis on past, ongoing, and future food security plans and programs; (3) focus on irrelevant issues; or (4) provide more description than analysis. Despite these concerns, providing feedback or critical assessments on the submitted reports is beyond the mandate and the staff capacity of the Committee on World Food Security Secretariat, according to FAO officials. As a result, the usefulness of the information submitted and the potential to improve the quality of reporting are limited. FAO officials acknowledged these limitations on the usefulness of the information submitted for monitoring, and the agency is investigating ways to improve the WFS monitoring process. For some sub-Saharan African countries, a large portion of food security assistance comes from multilateral and bilateral donors through ODA provided to the country's agriculture sector. However, the share of multilateral and bilateral ODA provided to agriculture for Africa has declined steadily since peaking in the 1980s. Specifically, ODA data show that the worldwide share of ODA to the agricultural sector for Africa has significantly declined, from about 15 percent in the early 1980s to about 4 percent in 2006. According to a World Bank official, in the 1980s, the bank directed considerable funding toward agricultural development programs in sub-Saharan Africa that ultimately proved unsustainable. In the 1990s, the World Bank prioritized health and sanitation programs in the region over agricultural development programs. By 2005, the bank had started shifting its priorities back to African agricultural development, investing approximately $500 million per year in the sector. Bank officials expect that total to increase by 30 percent by the end of 2008. 
According to the UN, the international community needs to increase external financing for African agriculture from the current $1 billion to $2 billion per year to about $8 billion by 2010. Figure 7 shows the overall declining trend of multilateral and bilateral ODA to agriculture for Africa and the percentages of bilateral and multilateral donor contributions from 1974 to 2006. The decline of donor support to agriculture in Africa is due to competing priorities for funding and the poor results of past interventions. According to the 2008 World Development Report, many of the large-scale integrated rural development interventions promoted heavily by the World Bank suffered from mismanagement and weak governance and did not produce the claimed benefits. In the 1990s, donors started prioritizing social sectors, such as health and education, over agriculture. For example, one of the United States' top priorities for development assistance is the treatment, prevention, and care of HIV/AIDS through the President's Emergency Plan for AIDS Relief, which is receiving billions of dollars every year. The increasing number of emergencies, and the responses they require from international donors, have also diverted ODA that could have been spent on agricultural development. (See fig. 8 for the increasing trend of ODA to Africa for emergencies compared with ODA to agriculture for Africa.) Donor and NGO panels that we convened in the four countries we visited—Kenya, Mozambique, Tanzania, and Zambia—reported a general lack of donor coordination as a challenge, despite efforts to better align donor support with national development priorities, such as those that the international community agreed upon in the Paris Declaration on Aid Effectiveness in March 2005. Improved donor coordination was recommended seven times in the four panels that we convened during our fieldwork. 
Coordination of agricultural development programs has been difficult at the country level due, in part, to the large number of simultaneous agricultural development projects that have not been adequately aligned. According to the 2008 World Development Report, in Ethiopia, almost 20 donors were supporting more than 100 agriculture projects in 2005. Similarly, government efforts in Tanzania have been fragmented among some 17 multilateral and bilateral donors in agriculture. A study by the United Kingdom's National Audit Office reported that British country teams are not sure about specific activities, geographical focus, and donors' comparative advantage due, in part, to the large number of donors and projects ongoing at the country level. In addition, bilateral donor assistance is often not adequately aligned with the strategies and programs of international financial institutions and private foundations. Specifically, according to the UN Millennium Project, UN agencies are frequently not well-linked to the local activities of the large financial institutions and regional development banks, which tend to have the greatest access in advising a government because they provide the greatest resources. In its 2008 World Development Report, the World Bank criticized the lack of complementary investments made by other donors at different stages of the food production and supply process. In an attempt to address the inadequate division of labor among donors, the UN agencies have established new coordination mechanisms. In September 2007, the UN Secretary-General first convened the UN MDG Africa Steering Group to identify strategic ways in which the international community could better coordinate and support national governments' implementation of MDG programs, including agriculture and food security programs. 
The steering group met again in March 2008 and identified the unpredictability of aid, poor alignment with country systems, and inadequate division of labor among donors as major challenges to African food security. The group expects to publish its recommendations for achieving MDGs in Africa by the end of May 2008. In addition, the UN has recently established the One UN initiative at the country level to facilitate coordination. The purpose of this initiative is to shift from several individual agency programs to a single UN program in each country with specific focus areas, one of which could be food security. Two countries we visited—Tanzania and Mozambique—were among the eight countries worldwide to pilot the One UN initiative in 2007 and 2008. In addition, to accelerate progress toward MDGs—particularly MDG-1—WFP, FAO, and IFAD recently agreed to establish joint Food Security Theme Groups at the country level. The main purpose of these groups is to enhance interagency collaboration and coordination to support countries' development efforts in the areas of food security, agriculture, and rural development. A review of the status of the Food Security Theme Groups conducted between June 2007 and August 2007 showed that they were present in 55 countries (29 in sub-Saharan Africa). However, according to the UN Millennium Project, efforts through UN country teams serve more as a forum for dialogue than as a vehicle for real coordination. It is difficult to accurately assess progress toward the hunger goals because of deficiencies in FAO's estimates of undernourishment, which are considered the authoritative statistics on food security. These deficiencies stem from methodological weaknesses and poor data quality and reliability, as follows: Weaknesses in methodology: FAO's methodology has been criticized on several grounds. 
First, FAO relies on total calories available from food supplies and ignores dietary deficiencies that can occur due to the lack of adequate amounts of protein and essential micronutrients. Second, FAO underestimates per capita food availability in Africa, and, according to several FAO officials in Rome, coverage of noncereal crops, such as cassava—a main staple food for sub-Saharan Africa—has been inadequate. Third, FAO estimates are more subject to changes in the availability of food and less so to changes in the distribution of food, which leads to the underestimation of undernourishment in regions with relatively better food availability but relatively worse distribution of food, such as South Asia. Even when food is available, poor people may not have access to it, which leads to undernourishment. Lastly, FAO relies on food consumption data from outdated household surveys to measure inequality in food distribution. According to FAO, some of these surveys are over 10 years old. Poor data quality and reliability: According to FAO officials, the quality and reliability of food production, trade, and population data, which FAO relies on for its estimates of undernourishment, vary from country to country. For many developing countries, the data are either inaccurate or incomplete, which directly impacts FAO’s final estimate of undernourishment. For example, FAO officials told us that the estimated prevalence of undernourishment in Myanmar was 5 percent, but the officials questioned the reliability and accuracy of the data reported by the government of Myanmar. In addition, FAO lacks estimates of undernourishment for some countries to which a substantial amount of food aid has been delivered, such as Afghanistan, Iraq, and Somalia. 
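FAO's estimation approach, as described in its published methodology, models per-capita dietary energy consumption as a lognormal distribution whose mean comes from food balance sheets (total calories available) and whose coefficient of variation (CV) comes from household surveys; the prevalence of undernourishment is the share of that distribution falling below the minimum dietary energy requirement (MDER). This helps explain the critique above: the estimate moves readily with total availability (the mean) but reflects distribution only through the survey-derived CV, which may be based on data more than 10 years old. A minimal sketch of the calculation, in which all numeric inputs are hypothetical illustrations rather than FAO figures:

```python
import math

def undernourishment_prevalence(mean_kcal, cv, mder_kcal):
    """Share of the population consuming below the minimum dietary energy
    requirement (MDER), modeling per-capita dietary energy consumption as
    lognormal with the given mean and coefficient of variation (CV)."""
    sigma2 = math.log(1.0 + cv ** 2)          # lognormal shape implied by the CV
    mu = math.log(mean_kcal) - sigma2 / 2.0   # chosen so the mean equals mean_kcal
    z = (math.log(mder_kcal) - mu) / math.sqrt(sigma2)
    # Standard normal CDF evaluated at z, via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical inputs: 2,100 kcal/person/day available (food balance sheet),
# CV of 0.30 (household survey), MDER of 1,800 kcal/person/day.
p = undernourishment_prevalence(2100, 0.30, 1800)
```

Note how the model behaves: raising mean availability lowers the estimated prevalence, while a more unequal distribution (a higher CV) raises it even when total availability is unchanged, which is why an outdated CV can mask worsening distribution.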
Since data on production, trade, and consumption of food in some countries are not available, FAO makes one undernourishment estimate for these countries as a group and takes this estimate into account to determine total undernourishment worldwide. Furthermore, FAO’s undernourishment estimates are outdated, with its most recent published estimates covering the 3-year period of 2001 to 2003. In 2007, FAO suspended publication of The State of Food Insecurity in the World (SOFI) report, which it had been issuing annually since 1999. FAO also did not submit hunger data for the UN Millennium Development Report in 2006, and, according to an official from the UN Statistics Division, FAO is unlikely to do so for 2007 as well. FAO did not publish the 2007 SOFI report or contribute data for the Millennium Development Report because it is presently revising the minimum caloric requirements, a key component in FAO’s methodology for estimating undernourishment to measure progress toward the 2015 hunger goals. FAO has acknowledged that it needs to improve its methodology and consider other indicators to accurately portray progress toward hunger targets. As part of this effort, FAO sponsored an “International Scientific Symposium” in 2002 for scientists and practitioners to discuss various measures and assessment methods on food deprivation and undernourishment. According to FAO, efforts to improve food security and nutrition measures are a continuous activity of the agency, which has also been involved in strengthening data collection and reporting capacity at the regional and country levels. FAO is also developing a new set of indicators for measuring food security and nutrition status. In recent years, the levels of USAID funding for development in sub-Saharan Africa have not changed significantly compared with the substantial increase in funding for emergencies (see fig. 9). Funding for the emergency portion of Title II of Public Law 480—the largest U.S. 
food aid program—has increased from about 70 percent a decade ago to over 85 percent in recent years. After rising slightly from 2003 to 2005, the development portion of USAID's food aid funding fell below the 2003 level in 2006 and 2007. While emergency food aid has been crucial in helping to alleviate the growing number of food crises, it does not address the underlying factors that contributed to the recurrence and severity of these crises. Despite repeated attempts from 2003 to 2005, the former Administrator of USAID was unsuccessful in significantly increasing long-term agricultural development funding in the face of increased emergency needs and other priorities. Specifically, USAID and several other officials noted that budget restrictions and other priorities, such as health and education, have limited the U.S. government's ability to fund long-term agricultural development programs in sub-Saharan Africa. The United States, consistent with other multilateral and bilateral donors, has steadily reduced its ODA to agriculture for Africa since the late 1980s, from about $500 million in 1988 to less than $100 million in 2006 (see fig. 10). The U.S. Presidential Initiative to End Hunger in Africa (IEHA)—the principal U.S. strategy to meet its commitment toward halving hunger in sub-Saharan Africa—has undertaken a variety of efforts that, according to USAID officials, aim to increase rural income by improving agricultural productivity, increasing agricultural trade, and advancing a favorable policy environment, including building partnerships with donors and African leaders. However, USAID officials acknowledged that IEHA lacks a political mandate to align the U.S. government food aid, emergency, and development agendas to address the root causes of food insecurity. 
Despite purporting to be a governmentwide presidential strategy, IEHA is limited to only some of USAID’s agricultural development activities and does not integrate with other agencies in terms of plans, programs, resources, and activities to address food insecurity in Africa. For example, because only eight USAID missions have fully committed to IEHA and the rest of the missions have not attributed funding to the initiative, USAID has been unable to leverage all of the agricultural development funding it provides to end hunger in Africa. This lack of a comprehensive strategy has likely led to missed opportunities to leverage expertise and minimize overlap and duplication. Our meetings with officials of other agencies demonstrated that there was no significant effort to coordinate their food security programs. A U.S. interagency working group that had attempted to address food security issues since the mid-1990s disbanded in 2003. In April 2008, USAID established a new Food Security and Food Price Increase Task Force, but it is not a governmentwide interagency working group. Although both MCC and USDA are making efforts to address agriculture and food insecurity in sub-Saharan Africa, IEHA’s decision-making process does not take these efforts into consideration. In addition, IEHA does not leverage the full extent of the United States’ assistance to African agriculture through its contributions to multilateral organizations and international financial institutions, which are managed by State and Treasury. Some of the U.S. agencies’ plans and programs for addressing food insecurity in Africa involve significant amounts of assistance. For example, as of June 2007, MCC had committed $1.5 billion for multiyear compacts in sub-Saharan Africa, of which $605 million (39 percent) was for agriculture and rural development programs and another $575 million (37 percent) was for transportation and other infrastructure. 
Only recently has USAID begun providing MCC with assistance in the development and implementation of country compacts. USDA, which administers several food aid programs, also administers a wide range of agricultural technical assistance, training, and research programs in sub-Saharan Africa to support the African Growth and Opportunity Act, NEPAD/CAADP, and the regional economic organizations. However, according to USAID Mission officials in Zambia, coordination difficulties arise when U.S.-based officials from other government agencies, such as USDA, plan and implement food security projects at the country level with little or no consultation with the U.S. Mission staff. Most donors, including the United States, have committed to halving global hunger by 2015, but meeting this goal in sub-Saharan Africa is increasingly unlikely. Although host governments and donors share responsibility for this failure, especially with regard to devoting resources to support sub-Saharan Africa's agricultural sector, host governments play a primary role in reducing hunger in their own countries. Without adequate efforts by the host governments coupled with sufficient donor support, it is difficult to break the cycle of low agricultural productivity, high poverty, and food insecurity that has contributed to an increase in emergency needs. The United States' approach to addressing food insecurity has traditionally relied on the U.S. food aid programs. However, in recent years, the resources of these programs have focused on the rising number of acute food and humanitarian emergencies, to the detriment of actions designed to address the fundamental causes of these emergencies, such as low agricultural productivity. Moreover, IEHA does not comprehensively address the underlying causes of food insecurity, nor does it leverage the full extent of U.S. assistance to sub-Saharan Africa. Consequently, the U.S. approach does not constitute an integrated governmentwide food security strategy. 
In implementing its food security efforts, the United States has not adequately collaborated with host governments and other donors, which has contributed to further fragmentation of these efforts. Finally, without reliable data on the nature and extent of hunger, it is difficult to target appropriate interventions to the most vulnerable populations and to monitor and evaluate their effectiveness. Sustained progress in reducing sub-Saharan Africa's persistent food insecurity will require concerted efforts by host governments and donors, including the United States, in all of these areas. To enhance efforts to address global food insecurity and accelerate progress toward halving world hunger by 2015, particularly in sub-Saharan Africa, we recommend that the Administrator of USAID take the following two actions: (1) work in collaboration with the Secretaries of State, Agriculture, and the Treasury to develop an integrated governmentwide U.S. strategy that defines each agency's actions and resource commitments toward achieving food security in sub-Saharan Africa, including improving collaboration with host governments and other donors and developing improved measures to monitor and evaluate progress toward the implementation of this strategy; and (2) prepare and submit, as part of the annual U.S. International Food Assistance Report, an annual report to Congress on progress toward the implementation of the first recommendation. USAID and the Departments of Agriculture and State provided written comments on a draft of our report. We have reprinted these agencies' comments in appendixes VII, VIII, and IX, respectively, along with our responses to specific points. In addition to these agencies, several other entities—including MCC, Treasury, FAO, IFAD, IFPRI, UNDP, and WFP—provided technical comments on a draft of our report, which we have incorporated as appropriate. 
USAID concurred with our first recommendation—noting that the responsibility for halving hunger by 2015 lies with the respective countries, while mentioning activities that the United States, through efforts such as IEHA, and the international community are undertaking to address the issue of food security. However, USAID expressed concern with our conclusion that the shift in its focus from emergency food aid to long-term agricultural development has not been successful. We recognize the challenges of addressing an increasing number of emergencies within tight resource constraints. However, it is equally important to recognize that addressing emergencies—to the detriment of long-term agricultural development—does not break the cycle of low agricultural productivity, high poverty, and food insecurity that has persisted in many sub-Saharan African countries. Regarding our second recommendation, USAID asserted that the International Food Assistance Report (IFAR) is not the appropriate vehicle for reporting on progress on the implementation of our first recommendation. USAID suggested that a report such as the annual progress report on IEHA (which is not congressionally required) would be more appropriate. We disagree. We believe that the congressionally required annual IFAR, in fact, would be an appropriate vehicle for reporting on USAID's and other U.S. agencies' implementation of our first recommendation. Public Law 480, section 407(f) (codified at 7 U.S.C. 1736a(f)) requires that the President prepare an annual report that "shall include . . . an assessment of the progress toward achieving food security in each country receiving food assistance from the United States Government." This report is intended to contain a discussion of food security efforts by U.S. agencies. In addition, USDA stated that our report was timely and provided useful information and recommendations. 
Noting its participation in an interagency food aid policy coordinating process, USDA reaffirmed its commitment to using its full range of authorities and programs to address the need for and improve the effectiveness of global food assistance and development. Although we recognize that an interagency Food Assistance Policy Council provides a forum for the discussion and coordination of U.S. food aid programs, a similar forum to address food security issues had not been established until May 2008, following the release of a draft of this report. Finally, although USDA administers food assistance programs, including food aid programs for development, we note that these are not included in IEHA. State identified additional issues for consideration, which we have addressed as appropriate. Specifically, State disagreed with our statement that U.S. agencies had made no significant effort to coordinate their food security programs, citing its ongoing coordination with USAID and USDA on food security issues. For example, State indicated that several of its offices and bureaus—such as the Office of the Director of Foreign Assistance; the Bureaus of Population, Refugees, and Migration; Economic, Energy, and Business Affairs; African Affairs; International Organization Affairs; and others—work closely with USAID and USDA to coordinate food security issues. However, as we noted in this report, these efforts, to date, have been focused primarily on food aid, as opposed to food security, and there is no comprehensive U.S. governmentwide strategy for addressing food insecurity in sub-Saharan Africa. Treasury generally concurred with our findings and provided additional comments for consideration, which we have addressed as appropriate. We are sending copies of this report to interested Members of Congress; the Administrator of USAID; and the Secretaries of Agriculture, State, and the Treasury. We will also make copies available to others upon request. 
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. Our objectives were to examine (1) factors that contributed to persistent food insecurity in sub-Saharan Africa and (2) the extent to which host governments and donors, including the United States, are working toward halving hunger in the region by 2015. To examine factors that have contributed to continued food insecurity in sub-Saharan Africa, we relied on the United Nations (UN) Food and Agriculture Organization’s (FAO) estimates on the number of undernourished people, and the prevalence of undernourishment, which is one of two progress indicators in the Millennium Development Goals (MDG) target of halving hunger, to illustrate the lack of progress in reducing hunger in sub-Saharan Africa as compared with other parts of the developing world. Although we recognize the limitations of FAO’s estimates (such as the lack of up-to-date information), they are the official basis of the World Food Summit (WFS) and MDG targets and are largely consistent with the trends reported by other sources, such as the U.S. Department of Agriculture’s (USDA) estimates on global hunger. We discussed the reliability of FAO’s undernourishment data with several cognizant FAO officials and various U.S. government officials in Washington and in sub-Saharan Africa. We determined that these estimates are sufficiently reliable for our purpose, which is to show overall trends over time at the aggregate level. 
We also analyzed FAO’s data on input use, grain production, and grain planting areas to compare agricultural input use and productivity in sub-Saharan Africa with that of other parts of the world. We determined that these data are sufficiently reliable for our purposes. To assess the reliability of the International Monetary Fund (IMF) data on commodity prices, we reviewed (1) existing documentation related to the data sources and (2) documents from other agencies reporting on commodity prices and found collaborating support. Accordingly, we determined that the data were sufficiently reliable for the purposes of this report. We selected four countries for fieldwork—Kenya and Tanzania in East Africa, and Mozambique and Zambia in southern Africa—on the basis of geographic region, data on undernourished people, and U.S. Agency for International Development (USAID) programs in-country. We selected countries in east and southern Africa because those regions have high prevalence rates of undernourishment and excluded countries with current conflict. While this selection is not representative in any statistical sense, it ensured that we had variation in the key factors we considered. We do not generalize the results of our fieldwork beyond that selection, using fieldwork primarily to provide illustrative examples. In addition, we reviewed economic literature on the factors that influence food security and recent reports, studies, and papers issued by U.S. agencies, multilateral organizations, and bilateral donors. We reviewed the Rome Declaration on World Food Security and the World Food Summit Plan of Action, which included 7 commitments, 27 objectives, and 181 specific actions. We recognize the multifaceted nature of factors affecting food security, but some of them, such as conflict and trade reforms, were beyond the scope of our study. We reviewed economic studies and recent reports on the factors that influence food security. 
These included articles from leading authors published in established journals, such as World Development. We also included studies by such organizations as the International Food Policy Research Institute (IFPRI), FAO, IMF, USDA’s Economic Research Service, the World Food Program (WFP), and the World Bank. These sources were chosen because they represent a wide cross section of the discussion on food security and are written by the leading authorities and institutions working in the field. To meaningfully summarize and organize the many factors that affect global food security, and the interventions that can address them, we created a framework. To ensure that the framework was comprehensive and rigorous, we based it on relevant literature and the input of practitioners and experts. Specifically, our first step was to review relevant research on global food security from multilateral institutions and academia and to consider key policy documents, such as the Rome Declaration. We presented the first draft of the framework to a panel of nongovernmental organization (NGO) and government representatives in Washington, D.C., and subsequently used the framework during our panels in the four African countries to help stimulate discussion. We refined the framework on the basis of preliminary analysis of the panel results and finalized it on the basis of the input of a roundtable of food security experts in Washington, D.C. In the four African countries that we selected for fieldwork, we conducted structured discussions with groups of NGOs and donors, organizing them into 9 panels with about 80 participants representing more than 60 entities. To identify the panelists’ views on key recommendations for improvement and lessons learned, we posed the same questions to each of the 9 panels and recorded their answers. Subsequently, we coded their recommendations and lessons according to the factors that were further refined and are shown in figure 3. 
We also coded some recommendations and lessons according to a few additional topics that occurred with some frequency in the panels but that fell outside the scope of our framework, such as donor coordination and the targeting of U.S. food aid. Two staff members performed the initial coding independently and then met to reconcile any differences in their coding. The lessons and recommendations that we coded represent the most frequently expressed views and perspectives of the in-country NGOs, donors, and regional representatives that we met with and cannot be generalized beyond that population. To examine the extent to which host governments and donors, including the United States, are working toward halving hunger by 2015, we analyzed data on official development assistance (ODA) to developing countries published by the Organization for Economic Cooperation and Development’s (OECD) Development Assistance Committee (DAC). Specifically, we analyzed the trends in the share of ODA going to agriculture and to emergencies from multilateral and bilateral donors from 1974 to 2006. The DAC Secretariat assesses the quality of aid activity data each year by verifying both the coverage (completeness) of each donor’s reporting and the conformity of reporting with DAC’s definitions to ensure the comparability of data among donors. These data are widely used by researchers and institutions in studying development assistance resource flows. However, OECD’s classification of agriculture may underreport funding to the sector because its ODA to agriculture excludes rural development and development food aid. For example, the International Fund for Agricultural Development (IFAD) believes that some of its multisectoral lending may not have counted as ODA to agriculture. Nevertheless, since OECD has consistently used the same classification, we determined that the data are sufficiently reliable for our purpose, which is to track trends over time. 
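The tallying step behind the panel coding described above — counting, for each factor, how many panels raised it and how many times it was mentioned overall — can be sketched in a few lines. The panel numbers and factor labels below are invented for illustration; they are not GAO's actual coding data.

```python
from collections import defaultdict

# Hypothetical reconciled coding output: (panel_id, factor_code) pairs.
coded_mentions = [
    (1, "improve marketing"), (1, "improve marketing"),
    (2, "improve marketing"), (3, "improve marketing"),
    (2, "improve infrastructure"),
]

panels_mentioning = defaultdict(set)   # which panels raised each factor
total_mentions = defaultdict(int)      # how often each factor was raised overall

for panel, factor in coded_mentions:
    panels_mentioning[factor].add(panel)
    total_mentions[factor] += 1

for factor in sorted(total_mentions):
    print(f"{factor}: panels={len(panels_mentioning[factor])}, "
          f"mentions={total_mentions[factor]}")
```

A real content analysis would, of course, start from the reconciled coding sheets of the two independent coders rather than a hard-coded list, but the per-factor tally is the same.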
To determine whether African governments have fulfilled their pledge to devote 10 percent of their budgets to agriculture, we relied on the government expenditure data provided by IFPRI, which is the same data source on which USAID relies. We determined that these data are sufficiently reliable for the purposes of a broad comparison of countries’ agricultural spending to the Comprehensive Africa Agriculture Development Program (CAADP) targets in the aggregate. IFPRI recognizes that data on government sectoral spending are weak in many developing countries and is working with some of these countries to improve data quality. We also analyzed USAID’s budget for the Presidential Initiative to End Hunger in Africa (IEHA). We determined that these data are sufficiently reliable for our purposes. The information on foreign law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. In Washington, D.C., we interviewed officials from U.S. agencies, including USAID, USDA, the Departments of State and the Treasury, and the Millennium Challenge Corporation (MCC). We also met with IFPRI and the World Bank. In New York, we met with the United Nations Development Program (UNDP), the Rockefeller Foundation, the Alliance for a Green Revolution in Africa (AGRA), and Columbia University; and in Seattle, Washington, we met with the Bill and Melinda Gates Foundation. In Rome, we met with FAO, WFP, IFAD, and the Consultative Group on International Agricultural Research (CGIAR). We also met with the U.S. Mission to the United Nations in Rome and several bilateral donors’ permanent representatives to the Rome-based UN food and agriculture agencies. 
In addition, in Washington, D.C., we convened a roundtable of 12 experts and practitioners—including representatives from academia, research organizations, multilateral organizations, NGOs, and others—to further delineate, on the basis of our initial work, some of the factors that have contributed to food insecurity in sub-Saharan Africa and challenges that hamper accelerating progress toward food security. We conducted this performance audit from April 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As a major participant in the 1996 WFS, the United States supported the summit’s goal of halving the number of undernourished people in the world by 2015. During the summit and over the last decade, the U.S. position on global food security has been predicated on a strong belief that the primary responsibility for reducing food insecurity rests with each country, and that it is critical that all countries adopt policies that promote self-reliance and facilitate food security at all levels, including food availability, access, and utilization. U.S. policy as represented at the summit advocated the following national policies and actions to improve food security: Governments should act as facilitators rather than intervenors. National policies that facilitate the development of markets and expand the individual’s freedom of action are the best guarantor of food security. Emphasis is placed on democratic institutions, transparency in government, opposition to graft and corruption, and full participation by the private sector. 
All countries should work to promote liberalized trade to maximize the potential for economic growth (within the context of sustainable development) and realize the benefits of comparative advantage. Governments should invest in a public goods infrastructure that includes transportation, communication, education, and social safety nets; governments should also provide basic health and sanitary services, maintain basic levels of nutrition, and facilitate the stabilization of vulnerable populations. Governments should ensure a political system that does not discriminate against women. All countries must recognize the essential role of women, who produce more than half of the food in developing countries. Governments should establish a general development policy that (1) discriminates against neither the agricultural and fisheries sectors nor rural and coastal areas and (2) recognizes that poverty alleviation requires an integrated approach to rural development. All countries should promote the critical role of sustainable development in the agriculture, forestry, and fisheries sectors, and these policies must be environmentally sound. Greater emphasis needs to be placed on agricultural research and extension services, and governments should emphasize investment in agricultural research and technical education. During negotiations on the summit policy statement and Plan of Action, the United States opposed any agreement that supported additional resource pledges by the developed countries or the creation of new financial mechanisms, institutions, or bureaucracies. Although the United States was not prepared to commit increased resources for food security, U.S. government representatives at the summit indicated that the United States intended to play a major role in promoting food security around the world. According to a U.S. position paper, the United States planned to accomplish this objective by enhancing U.S. 
government support for research and technology development in agriculture and related sectors; employing an integrated approach to sustainable development, with a strong emphasis on those countries that show a good-faith willingness to address policy reforms; continuing support for food security through the use of agriculture programs, development assistance, and food aid; continuing support for international efforts to respond to and prevent humanitarian crises that create a need for emergency food; continuing efforts to encourage and facilitate implementation of food security-related actions adopted at international conferences or in agreed-to conventions; working within the multilateral system and with all countries to achieve freer trade and ensure that its benefits are equitably realized; and urging all countries to open their markets in the interest of achieving greater stability and participation in the world market. An interagency, governmentwide Working Group on Food Security that was established to prepare for the 1996 summit continued to operate until 2003, issuing two annual reports on a U.S. Food Security Plan of Action, in 1999 and 2000. This group was assisted by a Food Security Advisory Committee composed of representatives from the private agribusiness sector, NGOs, and educational institutions. (Both groups were disbanded in 2003.) These reports indicated some limited progress in addressing food security, primarily through the use of existing U.S. food aid and limited agricultural development and trade initiatives. The establishment of the African Food Security Initiative in 1998, the Greater Horn of Africa Initiative, the Africa Seeds of Hope Act in 1998, and the African Growth and Opportunity Act of 2000 all reflected some limited U.S. government initiative to improve a deteriorating food security situation in sub-Saharan Africa. 
This appendix provides greater detail on, and explains the importance of, the factors we used to develop a framework for evaluating the findings from our in-country interviews in Kenya, Tanzania, Mozambique, and Zambia and from the literature on food security, including the World Bank’s 2008 World Development Report and the Rome Declaration. The factors listed in the framework shown in table 2 are areas on which development efforts can be focused. They include such areas as agricultural productivity and development; rural development; governance; and health, education, and social welfare. All of these factors contribute to food security. For example, actions to improve agricultural productivity are most effective in conjunction with rural development, good governance, and good health and welfare. The framework also identifies actions or interventions that can be taken to address these development factors. They include such actions or interventions as increasing access to inputs, improving infrastructure, and strengthening rural communities. Successful agricultural development requires coordination of these interventions across a range of activities. For example, farmers cannot buy inputs unless there are functioning credit institutions, and they cannot access markets if there are no roads. Given that achieving food security is an extremely difficult and complex process and that there are many different ways in which to categorize these factors, this list should not be construed as exhaustive. Nonetheless, this categorization provides a framework with which to identify the issues on which to base discussion on food security and to summarize the range of programs implemented in various African countries. On the basis of a content analysis of the results from our nine structured panel discussions in Kenya, Mozambique, Tanzania, and Zambia, we identified key recommendations for improving food security (see table 3). 
For example, the first row of this table indicates that all 9 panels mentioned the recommendation to improve marketing and that the recommendation was mentioned 35 times across the 9 panels. The next several sections of this appendix provide some examples of interventions that governments, research organizations, NGOs, private foundations, and other donors have undertaken to address the factors underlying food insecurity. Our panelists noted that improving markets and farmers’ access to them is key to improving their food security. Well-functioning markets at all levels of the marketing chain, among other things, provide accurate price information, buyer contacts, distribution channels, and buyer and producer trends. They can be facilitated by encouraging private investment, establishing public/private partnerships, and developing the capacity of agribusiness and processing focused on value-added production. As an early action under CAADP, an Alliance for Commodity Trade in East and Southern Africa is being developed to open up national and regional market opportunities for staple foods produced by millions of smallholder farmers. Agribusiness, in particular, has an economic interest in a vibrant agricultural sector. For this reason, USAID supports private agribusiness development in Africa; in 2006, it worked directly with about 900 public/private partnerships to build capacity and leverage additional resources. These include producers, exporters, and their associations, such as the East African Fine Coffees Association, which is linking buyers from companies like Starbucks in the United States with producers and exporters of high-value coffee, and the African Cotton and Textile Industry Federation, which is improving the links of African farmers to the U.S. market through the African Growth and Opportunity Act. To facilitate market access in arid and semi-arid areas, USAID’s Famine Fund has been supporting a pastoral livelihood program. 
Weak rural development contributes to food insecurity throughout sub-Saharan Africa. Agricultural productivity growth requires fostering linkages between the agricultural and nonagricultural sectors. Growth in agriculture is more effective if the proper infrastructure is in place, rural communities are strong and effective, and financial systems are able to provide credit to producers to buy, among other things, inputs for production. The experts we interviewed noted that efforts to strengthen rural communities and economies are essential to increasing food security. Interventions that help to increase rural farmers’ incomes help to strengthen rural economies. We observed the UN Millennium Villages helping farmers increase their incomes by using the value chain approach to link farmers to markets. For example, in Kenya, a local business called HoneyCare Africa trained farmers in beekeeping. The farmers were financed to start beekeeping, provide honey, and ensure quality control and collection. Beekeepers bring their honey to the company’s collection center, where the honey is weighed and prepared for shipment from Nairobi. After being processed and packaged in a Nairobi facility, HoneyCare Africa products are sold in Kenyan and overseas retail outlets. The program trained 44 farmers, who produced an average of 800 kilograms of honey, generating $1,500 per farmer per year. The focus of U.S. assistance on commodities creates some problems for NGOs and donors that would like to see U.S. Title II assistance better managed. The panelists noted that this food aid can be better managed by targeting those communities that can absorb the commodities that are provided by the United States, so that the commodities do not distort markets. Despite the inherent inefficiency of monetization, there are some examples of the successful use of monetized Title II funding for food security. 
An external evaluation of IEHA’s use of food aid noted that Title II monetization proceeds have a large realm of possible uses, including financing small business start-ups; paying the costs of training programs; locally purchasing commodities, rather than using imported food, in situations where there is a particularly high potential for disincentives for local producers; and providing start-up capital for initiating farmer association-based thrift and savings societies. As we have previously noted, improving infrastructure, such as roads and power, is key to helping rural farmers. Investment in infrastructure links the local economy to broader markets. Infrastructure, particularly roads, is important in making technology available to farmers and is key to getting commodities to markets. Good roads and port facilities reduce the costs of moving products to markets. Telecommunications bring consumers and farmers into contact and transmit market signals on prices, helping markets operate efficiently. MCC provides funding to African countries to improve their infrastructure. As of February 2008, MCC had signed 16 compacts totaling $5.5 billion. Nine of the 16 compacts were with African countries, and about 70 percent of MCC compact funding ($3.8 billion) went to projects in Africa. This includes two of the four countries that we reviewed—Tanzania and Mozambique. MCC signed a compact with Tanzania in 2008 that will provide $698 million in funding for infrastructure investments in energy, water, and transportation, with the largest portion (about half) dedicated to transportation. In Mozambique, the MCC compact signed in July 2007 includes funds to improve water systems, sanitation, agribusiness, roads, land tenure, and agriculture. 
In addition, according to State, while the short-term goal of a WFP road-building operation was to facilitate food aid delivery in southern Sudan, it also helped contribute to long-term food security by reducing the cost of access to food and markets. Sustainable production increases require resource management. Soil fertility, water management, and water use efficiency are important for raising agricultural productivity in a sustainable manner. Natural resource management, particularly of water resources, is key to helping farmers maintain productivity, even during times of drought and flood. The Ethiopian government’s Productive Safety Net Program (PSNP) provided food and cash assistance to 7.2 million people in 2006 and includes water resources development projects. In Tigray, Ethiopia, we visited a program focusing on the construction of deep hand-dug wells that provide accessible and safe water for rural communities. An irrigation program also focuses on harvesting methods and irrigation development activities. An IFPRI evaluation of PSNP found that while there were some delays in payments made to beneficiaries, the well construction and soil and water conservation projects were valuable. Increasing access to inputs, such as improved seed and fertilizer, helps farmers boost their productivity, which is essential for food security. A number of research organizations support African agricultural development, including CGIAR, which was established in 1971 to help achieve sustainable worldwide food security by promoting agricultural science and research-related activities. CGIAR has 15 research centers under its umbrella, including IFPRI, the International Livestock Research Institute, and the International Institute of Tropical Agriculture (IITA). IITA and 40 NGO partners, including Catholic Relief Services, worked on a U.S. government-funded $4.5 million, 19-month project in 6 countries called the Crop Crisis Control Project (C3P). 
Officials from this program said that they have introduced 1,400 varieties of cassava and provided 5,000 farmers with seeds for growing banana trees. In Kenya, beneficiaries of the C3P project, especially women, said that the project has directly led to more profitable cassava growth and increased banana production. In addition, USAID, USDA, and other donors have also been providing direct support to African Research Institutions at both the national and regional levels, promoting collective action on problems that cut across borders, like pests and diseases. In addition to the efforts of host governments, multilateral organizations, and bilateral donors, NGOs and private foundations play an active role in advancing food security in sub-Saharan Africa. Nongovernmental organizations. NGOs or not-for-profit organizations may design and implement development-related projects. They are particularly engaged in community mobilization activities and extension support services. NGOs include community-based self-help groups, research institutes, churches, and professional associations. Examples include implementing partners for USAID and USDA, such as Cooperative for Assistance and Relief Everywhere, Inc.; Catholic Relief Services; and Land O’Lakes International Development. Additional examples also include advocacy groups such as the International Alliance Against Hunger, founded by the Rome-based food and agriculture agencies and international NGOs in 2003 to advocate for the elimination of hunger, malnutrition, and poverty; the National Alliances Against Hunger, including a U.S. alliance, which brings together civil society and governments in developed and developing countries to raise the level of political commitment to end hunger and malnutrition; and the Partnership to Cut Hunger and Poverty in Africa, which is a coalition of U.S. and African organizations formed in 2000 to advocate support for efforts to end hunger and poverty in Africa. Private foundations. 
A number of philanthropic private organizations, such as the Rockefeller Foundation and the Bill and Melinda Gates Foundation, provide support for African agricultural development. The Gates Foundation recently became one of the largest funding sources for agriculture in Africa, announcing in January 2008 a $306 million package of agricultural development grants to boost the productivity and incomes of farmers in Africa and developing countries in other parts of the world. Among the most prominent efforts funded by philanthropic private organizations is AGRA, headquartered in Nairobi, Kenya, and established in 2007 with an initial grant of $150 million from the Gates Foundation and the Rockefeller Foundation to help small-scale farmers lift themselves out of hunger and poverty through increased farm productivity and incomes. Rising global commodity prices and climate change are emerging challenges that will likely exacerbate food insecurity in sub-Saharan Africa. Rising commodity prices are in part due to the growing global demand for biofuels, and this appendix provides further information on how biofuels affect food security. This appendix also provides further information on how climate change is predicted to affect food security in sub-Saharan Africa, primarily through its impact on agricultural yields. Driven by environmental concerns and the high price of oil, global demand for biofuels is rapidly rising. Total biofuel production has recently been growing at a rate of about 15 percent per year, such that, between 2000 and 2005, production more than doubled, to nearly 650,000 barrels per day, or about 1 percent of global transportation fuel use. In the United States, ethanol production will consume more than one-third of the country’s corn crop in 2009, according to USDA. The United States and other key producers of biofuels have pledged to pursue further growth in production. 
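As a rough consistency check on the growth figures above: 15 percent annual growth compounded over the 5 years from 2000 to 2005 implies roughly a doubling, which matches the reported trend. The sketch below simply applies the compound-growth formula; actual annual rates varied year to year, so this is an approximation, not the underlying production data.

```python
# Compound growth: about 15 percent per year over 2000-2005 (5 years).
annual_growth = 0.15
years = 5
multiple = (1 + annual_growth) ** years
print(f"Growth multiple over {years} years: {multiple:.2f}")  # about 2.01

# If 2005 output was roughly 650,000 barrels per day, the implied 2000
# level is the 2005 figure divided by that multiple.
production_2005_bpd = 650_000
production_2000_bpd = production_2005_bpd / multiple
print(f"Implied 2000 production: {production_2000_bpd:,.0f} barrels per day")
```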
In the Energy Independence and Security Act of 2007, the United States pledged to increase ethanol production nearly five-fold over current levels by 2022. Similarly, the European Commission has announced its intention to expand biofuel production to 10 percent of its transportation fuel use by 2020. Although potential growth in biofuel production is uncertain, various estimates suggest that global biofuel production could grow to supply over 5 percent of the world’s transportation energy needs. Growth in biofuel demand potentially creates both positive and negative impacts for African agriculture and food security. For example: Rural development opportunities could exist for African communities that are able to produce biofuels. Countries with biofuel production could also qualify for emission-reduction credits through the international market for greenhouse gas emission reductions under the Kyoto Protocol. Such credits would allow these countries to attract additional investment through the Clean Development Mechanism that could assist them in further developing their biofuel industries. However, while several African countries are pursuing biofuel production, commercial production is not yet widely developed, and experts suggest that such production risks excluding smallholder farmers. African biofuel production may compete with food production through competition for land, water, and other agricultural inputs. The UN reports concern that commercial biofuel production in sub-Saharan Africa will target high-quality lands and push food production to less productive lands. The World Bank reports that 75 percent of the farmland in sub-Saharan Africa is already characterized by soils that are degraded and lack nutrients. Rapid growth in demand for grains to produce biofuels has contributed to rising agricultural prices. Between 2005 and 2007 alone, world prices of grains rose 43 percent. 
Biofuel growth has also triggered increases in the prices of other agricultural commodities, as the use of land to grow biofuel crops has decreased the land available for other crops. Higher grain prices reduce resources for low-income consumers who spend a large share of their income on food, for farmers who buy more food than they produce, and for food aid programs. In the long term, while higher grain prices provide incentives to expand agricultural production, complementary policies and investments in technology and market development may be required. On a net basis, IFPRI has concluded that current growth in biofuels will result in an increase in African food insecurity. Using its IMPACT model, IFPRI projects that world prices for maize will rise 26 percent and world prices for oilseeds will rise 18 percent by 2020 under the assumption that current biofuel investment plans are realized. In this case, total net calorie availability in sub-Saharan Africa would decline by about 4 percent. Worldwide, FAO projects a 15 percent net increase in the 2007 grain import bills of developing countries, partly as a result of growing biofuel demand. Concern over the negative impacts of biofuels has also been widely noted by organizations such as FAO; the World Bank; and the UN Special Rapporteur on the Right to Food, who has called for a 5-year moratorium on the production of biofuels. Although global temperatures have varied throughout history, key scientific studies have found that higher temperatures during the past century are largely attributable to human activities and that, as such, temperatures are likely to rise further during this century. The National Academy of Sciences has found that global temperatures were warmer during the last few decades of the twentieth century than during any comparable period of the preceding 400 years. 
These assessments also predict rising global temperatures for this century, resulting in changed precipitation patterns and increased frequency and severity of damaging weather-related events. The Intergovernmental Panel on Climate Change (IPCC), for example, has predicted a rise in global mean temperatures of between 1.8 and 4.0 degrees Celsius, depending upon human and economic behavior. Assuming no fundamental change in that behavior, a comprehensive review of climate change models finds a 77 to 99 percent likelihood that global average temperatures will rise in excess of 2 degrees Celsius. Regarding climates in Africa, key studies also conclude that warming has taken place. For example, according to the IPCC, southern Africa has had higher minimum temperatures and more frequent warm spells since the 1960s, as well as increased interannual precipitation variability since the 1970s. The IPCC also reports that both East Africa and southern Africa have had more intense and widespread droughts. Looking ahead, IFPRI reports that Africa may be the continent hardest hit by climate change, with one estimate predicting temperature increases for certain areas in Africa that are double those of the global average. One climate study predicts future annual warming across the continent ranging from 0.2 to 0.5 degrees Celsius per decade. Climate is an important factor affecting agricultural productivity, and experts report that Africa’s agricultural sector is particularly sensitive to climate change due, in part, to its low adaptive capacity. Experts find that climate change will likely significantly limit agricultural production in sub-Saharan Africa in various ways: Higher temperatures shorten the growing season and adversely affect grain formation at night. 
As a result of climate change, FAO states that the quantity of African land with a growing season of less than 120 days could increase by 5 to 8 percent, and the World Resources Institute describes projected future declines in the length of the growing season of 50 to 113 days in certain areas of Africa. Reduced precipitation limits the availability of water to grow crops. The World Wildlife Fund reports that water constraints have already reduced agricultural productivity, as 95 percent of cropland in sub-Saharan Africa is used for low-input, rain-fed agriculture rather than for irrigated production. Models referenced by the United Nations Framework Convention on Climate Change (UNFCCC) estimate that more than an additional 600,000 square kilometers of agricultural land in sub-Saharan Africa will become severely water-constrained with global climate change. Variable climates lead farmers to shift agricultural production sites, often onto marginal lands, exacerbating soil erosion. According to the World Bank’s 2008 World Development Report, soil erosion can result in agricultural productivity losses for the east African highlands of 2 to 3 percent a year. Rising sea levels threaten coastal agricultural land. In its national communication to the UNFCCC, for example, Kenya predicted losses of more than $470 million from damage to crops from a 1-meter rise in sea levels. Climate extremes aggravate crop diseases and result in crop failures and livestock deaths. FAO reports that both floods and droughts have increased the incidence of food emergencies in sub-Saharan Africa. To quantify expected climate change impacts on African agricultural production and food security, a number of studies employ climate models that estimate changes in temperature, precipitation, and agricultural yields. Results vary widely due to the large degree of uncertainty entailed in climate modeling, as well as differences in assumptions about adaptive capacity. 
Despite the wide variation in results, these studies generally conclude that climate change will increase African food insecurity in both the short and long term. For example, one study predicts that agricultural revenues in Kenya could decline between 27 and 34 percent by 2030. FAO reports a projected increase in the number of Africans at risk of hunger from 116 million in 1980 to 415 million in 2060. To illustrate potential food security impacts from climate change, results from several studies are shown in table 4. (The full citations of the sources in table 4 follow the table.)

Agoumi, Ali. Vulnerability of North African Countries to Climatic Changes: Adaptation and Implementation Strategies for Climate Change. International Institute for Sustainable Development, 2003.

Arnell, N.W., M.G.R. Cannell, M. Hulme, R.S. Kovats, J.F.B. Mitchell, R.J. Nicholls, M.L. Parry, M.T.J. Livermore, and A. White. "The Consequences of CO2 Stabilisation for the Impacts of Climate Change." Climatic Change, vol. 53, 2002.

Maddison, David, Marita Manley, and Pradeep Kurukulasuriya. The Impact of Climate Change on African Agriculture: A Ricardian Approach. CEEPA Discussion Paper No. 15, Centre for Environmental Economics and Policy in Africa, University of Pretoria, July 2006.

Tubiello, Francesco N., and Günther Fischer. "Reducing Climate Change Impacts on Agriculture: Global and Regional Effects of Mitigation, 2000-2080." Technological Forecasting and Social Change, vol. 74, 2007.

United Nations Environment Programme. African Regional Implementation Review for the 14th Session of the Commission on Sustainable Development: Report on Climate Change. Nairobi, Kenya, 2006.

Warren, Rachel, Nigel Arnell, Robert Nicholls, Peter Levy, and Jeff Price. Understanding the Regional Impacts of Climate Change: Research Report Prepared for the Stern Review on the Economics of Climate Change. Tyndall Center for Climate Change Research Working Paper 90, September 2006.

Following are GAO's comments on the U.S. Agency for International Development letter dated May 16, 2008.

1.
Although some African countries have had robust economic growth in recent years, to achieve the WFS and MDG-1 goals, the growth, especially in agriculture, needs to be sustained. As we note in our report, concerted efforts and sustained growth are needed for many years to overcome the numerous challenges facing host governments and donors to halve hunger in sub-Saharan Africa by 2015.

2. While GAO recognizes the various ongoing coordination efforts at the international and U.S. government levels, our work revealed that coordination on improving food security in sub-Saharan Africa has thus far been insufficient. In May 2008, following the release of a draft of this report, USAID initiated the creation of a sub-Principals Coordinating Committee on Food Price Increases and Global Food Security to help facilitate interagency coordination. In addition to USAID, USDA, State, and Treasury, participating agencies include the Central Intelligence Agency, the Department of Commerce, MCC, the National Security Council, the Office of Management and Budget, the Peace Corps, the U.S. Trade and Development Agency, and the U.S. Trade Representative.

3. As we note in our report, while IEHA has undertaken a variety of efforts to address food insecurity in Africa, these efforts have thus far been limited in scale and scope. IEHA is not integrated with other agencies' plans, programs, resources, and activities. In addition, many IEHA projects are limited in their impact because they may not necessarily address the root causes of food insecurity. For example, projects distributing treadle pumps benefit only the farmers who receive them but do not address the larger issue of the underdevelopment of agricultural input markets.

4. While we recognize that clean water and sanitation are important to nutrition and food utilization, these issues were outside the scope of our study.

5. We recognize the importance of emergency assistance.
However, to break the cycle of poverty, food insecurity, and emergencies, agricultural development needs to be given higher priority. We agree with USAID that a shift in focus from relief to development should not translate into reduced emergency food aid in the short term.

6. We disagree with USAID's suggestion that a report such as the annual progress report on IEHA (which is not congressionally required) be used, instead of the congressionally required International Food Assistance Report (IFAR), to report on USAID's and other agencies' implementation of our first recommendation. Public Law 480, section 407(f) (codified at 7 U.S.C. § 1736a(f)) requires that the President prepare an annual report that "shall include…an assessment of the progress toward achieving food security in each country receiving food assistance from the United States Government." Expanding the scope of current reporting to include progress on achieving food security would enhance the usefulness of IFAR, while making it unnecessary to recommend the promulgation of a separate report.

Following is GAO's comment on the U.S. Department of Agriculture letter dated May 14, 2008.

1. We acknowledge the role that USDA plays in meeting short- and long-term food needs in sub-Saharan Africa. Although an interagency Food Assistance Policy Council provides a forum for the discussion and coordination of U.S. food aid programs, a similar forum to address food security issues was not established until May 2008, after the issuance of a draft of this report. Finally, although USDA administers food assistance programs, including food aid programs for development, we note in this report that these are not included in IEHA.

Following are GAO's comments on the Department of State letter dated May 16, 2008.

1. We maintain that U.S. agencies' efforts to coordinate food security programs have thus far been insufficient.
Efforts to date are focused primarily on food aid, as opposed to food security, and there is no comprehensive U.S. governmentwide strategy for addressing food insecurity in sub-Saharan Africa.

2. A major reason for food spoilage and poor market delivery is poor infrastructure, as we note in our discussion of rural development.

3. As we note in our discussion of our objectives, scope, and methodology (see app. I), although we recognize the multifaceted nature of factors affecting food security, we excluded some factors, such as international trade, from the scope of our study. While international trade is important to global food security, its relative importance to sub-Saharan Africa is considerably lower. Many smallholder farmers in sub-Saharan Africa are not in a position to benefit from international trade due to high transaction costs, and they generally produce products, such as cassava, that are not traded internationally.

4. We did not generate data from FAO's original estimates of undernourishment. We relied on FAO's estimates to assess progress toward the WFS and MDG goals. As we note in our previously mentioned objectives, scope, and methodology, we discussed the reliability of FAO's undernourishment estimates with cognizant FAO and U.S. government officials in Washington and in sub-Saharan Africa, and we determined that these estimates are sufficiently reliable for our purpose, which is to show overall trends over time at the aggregate level.

5. FAO's estimates are the official indicators used to track progress toward the WFS and MDG-1 goals. In addition, they are the only estimates available to assess undernourishment at the global level. Other UN agencies, such as WFP, conduct assessments and collect other data on food supply and nutrition for their respective missions. However, they do not do so at the global level, and their data cannot replace FAO's estimates on undernourishment to track long-term progress toward the WFS and MDG-1 goals.

6.
We added language in appendix IV to reflect the recent experiences in southern Sudan.

7. As we previously mentioned in our objectives, scope, and methodology, although we recognize the multifaceted nature of factors affecting food security, some factors, such as conflicts, were excluded from the scope of our study. We disagree with State's assertion that we did not adequately address host government issues. Our report points out that host government policy disincentives are a main factor in food insecurity. We also note that the lack of sufficient investment in agriculture by host governments is one of the challenges hindering progress toward halving hunger by 2015.

8. In May 2008, the President announced a $770 million initiative that aims to (1) increase food assistance to meet the immediate needs of the most vulnerable ($620 million); (2) augment agricultural productivity programs, especially in Africa and other key agricultural regions, to boost food staple supplies ($150 million); and (3) promote an international policy environment that addresses the systemic causes of the food crisis. However, as of the time of this report, Congress had not passed legislation implementing this proposal.

In addition to the person named above, Phillip J. Thomas (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Leah DeWolf, Mark Dowling, Etana Finkler, Melinda Hudson, Joy Labez, Julia A. Roberts, Kendall Schaefer, and Elizabeth Singer made key contributions to this report.

Somalia: Several Challenges Limit U.S. and International Stabilization, Humanitarian, and Development Efforts. GAO-08-351. Washington, D.C.: February 19, 2008.

The Democratic Republic of the Congo: Systematic Assessment Is Needed to Determine Agencies' Progress Toward U.S. Policy Objectives. GAO-08-188. Washington, D.C.: December 14, 2007.

Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: May 24, 2007.
Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 13, 2007.

Foreign Assistance: U.S. Agencies Face Challenges to Improving the Efficiency and Effectiveness of Food Aid. GAO-07-616T. Washington, D.C.: March 21, 2007.

Darfur Crisis: Progress in Aid and Peace Monitoring Threatened by Ongoing Violence and Operational Challenges. GAO-07-9. Washington, D.C.: November 9, 2006.

Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan's Stability. GAO-03-607. Washington, D.C.: June 30, 2003.

Foreign Assistance: Sustained Efforts Needed to Help Southern Africa Recover from Food Crisis. GAO-03-644. Washington, D.C.: June 25, 2003.

Food Aid: Experience of U.S. Programs Suggest Opportunities for Improvement. GAO-02-801T. Washington, D.C.: June 4, 2002.

Foreign Assistance: Global Food for Education Initiative Faces Challenges for Successful Implementation. GAO-02-328. Washington, D.C.: February 28, 2002.

Foreign Assistance: U.S. Bilateral Food Assistance to North Korea Had Mixed Results. GAO/NSIAD-00-175. Washington, D.C.: June 15, 2000.

Foreign Assistance: Donation of U.S. Planting Seed to Russia in 1999 Had Weaknesses. GAO/NSIAD-00-91. Washington, D.C.: March 9, 2000.

Food Security: Factors That Could Affect Progress toward Meeting World Food Summit Goals. GAO/NSIAD-99-15. Washington, D.C.: March 22, 1999.

Food Security: Preparations for the 1996 World Food Summit. GAO/NSIAD-97-44. Washington, D.C.: November 7, 1996.

International Relations: Food Security in Africa. GAO-T-NSIAD-96-217. Washington, D.C.: July 31, 1996.
In 1996, the United States and more than 180 world leaders pledged to halve the number of undernourished people globally by 2015 from the 1990 level. The global number has not decreased significantly--remaining at about 850 million in 2001-2003--and the number in sub-Saharan Africa has increased from about 170 million in 1990-1992 to over 200 million in 2001-2003. On the basis of analyses of U.S. and international agency documents, structured panel discussions with experts and practitioners, and fieldwork in four African countries, GAO was asked to examine (1) factors that contribute to persistent food insecurity in sub-Saharan Africa and (2) the extent to which host governments and donors, including the United States, are working toward halving hunger in the region by 2015. Chronic undernourishment (food insecurity) in sub-Saharan Africa persists primarily due to low agricultural productivity, limited rural development, government policy disincentives, and the impact of poor health on the agricultural workforce. Additional factors, including rising global commodity prices and climate change, will likely further exacerbate food insecurity in the region. Agricultural productivity in sub-Saharan Africa, as measured by grain yield, is only about 40 percent of that of the rest of the world's developing countries, and the gap has widened over the years. Low agricultural productivity in sub-Saharan Africa is due, in part, to the limited use of agricultural inputs, such as fertilizer and improved seed varieties, and the lack of modern farming practices. The efforts of host governments and donors, including the United States, to achieve the goal of halving hunger in sub-Saharan Africa by 2015 have thus far been insufficient. 
First, some host governments have not prioritized food security as a development goal, and, according to a 2008 report of the International Food Policy Research Institute, as of 2005, only a few countries had fulfilled a 2003 pledge to direct 10 percent of government spending to agriculture. Second, donors have reduced the priority given to agriculture, and their efforts have been further hampered by difficulties in coordination and deficiencies in measuring and monitoring progress. Third, limited agricultural development resources and a fragmented approach have impaired U.S. efforts to reduce hunger in Africa. Funding from the U.S. Agency for International Development (USAID) to address food insecurity in Africa has been primarily for emergency food aid, which has been crucial in helping to alleviate food crises but has not addressed the underlying factors that contributed to the recurrence and severity of these crises. Also, the United States' principal strategy for meeting its commitment to halve hunger in Africa is limited to some of USAID's agricultural development activities and does not integrate other U.S. agencies' agricultural development assistance to the region.
Federal employees are routinely surveyed through the FEVS, which OPM administers to collect data on federal employees' perceptions about how effectively agencies are managing their workforces. The FEVS is a tool that measures employees' perceptions of whether, and to what extent, conditions that characterize successful organizations are present in their agencies, according to OPM. The survey was administered for the first time in 2002 and then repeated in 2004, 2006, 2008, 2010, 2011, and April through June 2012. The survey provides general indicators of how well the federal government is managing its human resources management systems. It also serves as a tool for OPM to assess individual agencies and their progress on strategic management of human capital, and it gives senior managers employee perspectives on agency management. Specifically, the survey includes categories of questions asking employees for their perspectives on their work experience, work unit, agency, supervisor, leadership, and satisfaction. OPM intends for agency managers to use the findings to develop policies and action plans for improving agency performance. In 2011, OPM provided a summary of FEVS findings to DHS. In that report, OPM summarized DHS's survey results relative to governmentwide averages and provided positive and negative response levels for each survey question. Also included in the report was action planning guidance for using FEVS results to improve human capital management. OPM is required by statute to design systems, including appropriate metrics, for assessing agencies' management of human capital (Pub. L. No. 107-296, § 1304, 116 Stat. 2135, 2287 (2002) (codified at 5 U.S.C. § 1103(c))); OPM implemented this requirement through the Human Capital Assessment and Accountability Framework (HCAAF). One HCAAF system is talent management, which is focused on agencies having quality people with the appropriate competencies in mission-critical activities. The FEVS job satisfaction index is one of the metrics used by OPM to assess whether agencies are effectively managing the talent management system.
The FEVS provides one source of information for evaluating success on other HCAAF standards as well by measuring responses to groups of FEVS questions for four indices. The four index measures are: Leadership and Knowledge Management; Results-Oriented Performance Culture; Talent Management; and Job Satisfaction. In addition, in 2011, OPM added an index to measure employee engagement, which OPM defines as the extent to which an employee is immersed in the content of the job and energized to spend extra effort in job performance. DHS's OCHCO is responsible for implementing policies and programs to recruit, hire, train, and retain DHS's workforce. As the department-wide unit responsible for human capital issues within DHS, OCHCO provides OPM with a DHS-wide action plan every other year, with the next plan due in January 2013. OCHCO also provides guidance and oversight to the DHS components related to morale issues. For example, OCHCO provides a survey analysis and action planning tool that the components must use in response to FEVS results to develop action plans for improving employees' positive scores. These plans are to state objectives and identify actions to be taken in response to survey results. OCHCO also has provided oversight by reviewing and providing feedback on component action plans. Data from the 2011 FEVS show that DHS employees have lower average levels of job satisfaction and engagement overall and across most demographic groups available for comparison, such as pay grade, when compared with the average for the rest of the federal government. Levels of satisfaction and engagement vary across components, with some components reporting satisfaction or engagement above the average for the rest of the government. Similarly, these measures of morale vary within components as well, with some employee groups reporting higher morale than other groups within the same component.
As shown in figure 1, DHS employees generally reported improvements in job satisfaction index levels since 2006 that narrowed the gap between DHS and the governmentwide average. However, employees continue to indicate less satisfaction than the governmentwide average. Partnership analysis of FEVS data also indicates consistently low employee satisfaction relative to other federal agencies. Similar to its 2011 ranking (31st of 33 federal agencies), the Partnership ranked DHS 28th of 32 in 2010, 28th of 30 in 2009, and 29th of 30 in 2007 in the Best Places to Work ranking on overall scores for employee satisfaction and commitment. Our analyses of 2011 FEVS results also indicate that average DHS-wide employee satisfaction and engagement scores were consistently lower when compared with average non-DHS employee scores in the same demographic groups. As shown in figure 2, comparisons of DHS with non-DHS employees by supervisory status, pay group, and tenure indicate that satisfaction and engagement are lower across many of the groups where statistically significant differences are evident. For example, across pay categories, DHS satisfaction and engagement were lower than the scores for the same non-DHS employee pay groups, with the exception of senior executives, senior leaders, employees with less than 1 year of tenure, and General Schedule pay grades 1-6. Job satisfaction and engagement scores for DHS management and non-management employees were also lower than for the same non-DHS employee groups. DHS and the selected components have taken steps to understand morale problems, such as holding focus groups, implementing an exit survey, and routinely analyzing FEVS results. On the basis of FEVS results, DHS and the selected components planned actions to improve FEVS scores. However, we found that DHS could enhance its survey analysis and monitoring of action plan results.
In addition, according to DHS’s Integrated Strategy for addressing the implementing and transforming high risk area, DHS has begun implementing activities to address morale but has not yet improved DHS’s scores on OPM’s job satisfaction index or its ranking on the Partnership’s Best Places to Work in the Federal Government. DHS’s OCHCO has taken several steps to understand morale problems DHS-wide. Specifically, since 2007, OCHCO: Conducted focus groups DHS-wide in 2007 to determine employee concerns related to morale, which identified employee concerns in areas of leadership, communication, empowerment, and resources. Performed statistical analysis in 2008 to identify workplace factors that drove employee job satisfaction, finding that the DHS mission and supervisor support, among other things, drove employee job satisfaction. Initiated an exit survey, first administered DHS-wide in 2011, to understand why employees chose to leave their position. The survey found lack of quality supervision and advancement opportunities were the top reasons for leaving. Analyzed 2011 FEVS results, among other things, showing where lower scores on HCAAF indices were concentrated among several components—Intelligence and Analysis, TSA, ICE, National Protection and Programs Directorate, and the Federal Emergency Management Agency (FEMA). Launched an Employee Engagement Executive Steering Committee (EEESC) in January 2012 that will identify action items for improving employee engagement by September 2012, according to OCHCO officials. The selected components also evaluated FEVS results to identify morale problems and considered additional information sources. 
For example:

- TSA convened a corporate action planning team in March 2011, as part of its response to FEVS results, which relied on data sources such as the TSA-administered exit survey, employee advisory groups, and an online employee suggestion tool to gain perspectives on systemic challenge areas and to develop plans to address morale, according to TSA officials. TSA's action plan for improving morale, based on these sources, was completed in July 2012.

- ICE considered results of a Federal Organizational Climate Survey (FOCS), last completed in March 2012, and held focus groups to gauge the extent to which employees view ICE as having an organizational culture that promotes diversity.

- CBP launched a quarterly online employee survey in 2009 to solicit opinions on one specific topic per quarter, such as use of career development resources and how the resources contributed to employees' professional growth at CBP.

- The Coast Guard relied on an Organizational Assessment Survey (OAS), last administered by OPM in 2010, to understand employee morale. The OAS solicits opinions on a range of topics, including job satisfaction, leadership, training, innovation, and use of resources. It included civilian and military Coast Guard personnel but is not administered governmentwide, so comparisons between the Coast Guard and other federal employees are limited to organizations that may use the OAS, according to Coast Guard officials.

Appendix III provides more detailed descriptions of DHS's steps to address morale problems and selected components' 2011 FEVS analysis methods and findings. Appendix IV provides additional information on the selected components' data sources beyond FEVS for evaluating root causes of morale, including a summary of results and how the information was used by the components. For the 2011 FEVS, DHS and the selected components completed varying levels of analyses to determine the root causes of low morale.
However, DHS and the selected components conducted limited analysis in several areas that is not consistent with OPM and Partnership guidance, which lays out useful factors for evaluating root causes of morale problems through FEVS analysis, as shown in figure 4. Usage of the three factors described in figure 4 varied across the DHS-wide and component-level 2011 FEVS analyses we reviewed. In some instances, the factors were partially used or not used at all. For example:

Demographic group comparisons. According to our reviews of OCHCO's analyses, OCHCO's DHS-wide analyses did not include evaluations of demographic group differences on morale-related issues for the 2011 FEVS. According to OCHCO officials, DHS's Office of Civil Rights and Civil Liberties reviews survey results to identify diversity issues that may be reflected in the survey, and OCHCO officials considered these results when developing one of the current (as of August 2012) DHS action plans to create policies that identify barriers to diversity. In 2007 and 2009, years in which DHS administered the Annual Employee Survey (AES), demographic comparisons were made. For example, on the basis of 2009 AES data, DHS found no significant demographic differences, other than that supervisors' positive responses to questions were generally higher than those of non-supervisors and that differences existed among pay grade levels. Because OPM now administers the survey each year, DHS is not able to make significant demographic group comparisons, given the format of the data provided by OPM, according to OCHCO officials. However, we obtained FEVS data from OPM that allowed us to make demographic group comparisons. For example, we compared DHS and non-DHS employee satisfaction and engagement scores across available demographic groups and found that both satisfaction and engagement were generally lower for DHS employees, which is summarized in appendix I, table 5.
For the DHS component analyses we reviewed, TSA and CBP conducted some demographic analysis. For example, TSA compared screeners, Federal Security Director staff, Federal Air Marshals, and headquarters staff on each FEVS dimension (e.g., work experiences, supervisor/leader, satisfaction, and work/life). As a result, TSA was able to identify screeners as having survey scores below those of other TSA employee groups. CBP also compared race, ethnicity, gender, and program office scores. CBP found that no significant differences were present in the positive responses to the 2011 FEVS core questions when comparing race, ethnicity, and gender, and found that Border Patrol employees reported higher job satisfaction than field operations employees (74 versus 66 percent on the job satisfaction index). In contrast, the Coast Guard did not conduct analysis beyond the data provided by DHS OCHCO. Because OCHCO's data did not include demographic information for the 2011 FEVS, the Coast Guard did not make demographic group comparisons. ICE and CBP officials stated that they did not have access to the 2011 FEVS data files necessary to conduct more detailed demographic comparisons. However, as shown in appendix I, we were able to make various demographic comparisons based on a more detailed data file provided by OPM, which is similar to a file that OPM makes available to agencies and the public.

Benchmarking against similar organizations. TSA benchmarked its FEVS results against results from similar organizations by comparing results with CBP, and OCHCO's DHS-wide analysis highlighted Partnership rankings data, showing DHS's position relative to the positions of other federal agencies as a Best Place to Work. Similarly, ICE benchmarked its FEVS results, overall and for program offices such as homeland security investigators, against other DHS components, including the U.S. Secret Service and CBP.
For the 2011 FEVS, CBP performed more limited benchmarking, comparing FEVS results with governmentwide averages. According to CBP officials, when analyzing annual employee surveys prior to 2011, CBP benchmarked its results against agencies with high positive FEVS scores, such as the Social Security Administration, the Federal Bureau of Investigation, the Internal Revenue Service, and the Nuclear Regulatory Commission. CBP is in the initial planning phase of a larger benchmarking project that would benchmark CBP against foreign immigration, customs, and agriculture inspection agencies, such as the Canadian Border Services Agency and the Australian Customs and Border Protection Service. If approved, this benchmarking project is expected to occur in fiscal year 2013, according to CBP officials. The Coast Guard did not perform FEVS benchmarking analysis, according to the documentation we reviewed, but did make OAS-based comparisons between the Coast Guard and other organizations that use the OAS, according to Coast Guard officials.

Linkage of root causes with action plans. For both DHS-wide and selected component action plans, FEVS questions with low scores were linked with action plan areas. For example, in the DHS-wide action plan, low scores on employee satisfaction with opportunities to get a better job in the organization were linked to action plan items for enhancing employee retention. However, the extent to which DHS and the components used root causes found through other analyses, such as quarterly exit survey results or additional internal component surveys, to inform their action plans was not evident in action plan documentation (see appendix IV for a description of these additional root cause analyses). For example, OCHCO's DHS-wide action plan was last updated based on 2010 FEVS data and therefore did not rely on data from the DHS 2011 exit survey, since those results were not published until January 2012.
Similarly, the EEESC was launched in January 2012, and therefore its efforts are not yet documented in DHS-wide action planning documents. According to OCHCO officials, the 2010 DHS-wide action plan includes consideration of results from OCHCO's 2008 statistical analysis identifying key drivers of job satisfaction and results from the 2007 focus groups. However, the linkage of items in the DHS-wide action plan to these results is not clearly identified because the new action plan template OPM introduced in 2010 did not provide an area to identify the linkage between each action and its driver, according to OCHCO officials. In addition, DHS's September 2009 action plan indicates consideration of the 2008 key driver analysis and the 2007 focus group effort that led to a focus on leadership effectiveness initiatives. According to CBP and TSA officials, data from other root cause analysis efforts are not explicitly documented in action plans developed in response to FEVS results because DHS has not included linkage of other root cause analysis efforts to action items in the FEVS action planning templates used by the components. TSA officials also stated that other root cause efforts (see appendix IV) were used to develop TSA's July 2012 action plan update. However, the July 2012 plan did not link root cause findings other than FEVS results, such as exit survey results, to action plan items. ICE officials stated that results from other root cause efforts, such as its FOCS, have not yet been considered in FEVS-based action planning but that ICE plans to do so in future efforts to address morale. The Coast Guard uses information from its OAS as part of a process separate from FEVS-based action planning for addressing morale, so OAS results are not linked to FEVS-based action plans. OCHCO and component human capital officials described several reasons for the variation in root cause analysis of FEVS results.
OCHCO officials described resource constraints and turnover in the OCHCO leadership position as resulting in a lack of continuity in root cause analysis efforts. For example, one OCHCO official stated that because of resource constraints, OCHCO has focused more effort on workforce planning than on morale problem analysis since 2009. ICE human capital officials stated that ICE's human capital services were provided via a contract with CBP until 2010, when the human capital function became an independently funded part of the ICE organization. Only since moving to its current position within ICE has the human capital office been able to devote more resources to addressing morale issues, according to the officials. CBP human capital officials stated that for assessing morale issues, CBP uses both quantitative and qualitative information. However, according to the officials, qualitative evidence is preferable to quantitative survey analysis because focus groups and open-ended surveys, such as the Most Valuable Perspective online survey, allow CBP to better understand the issues affecting employees. Because of CBP human capital officials' preference for qualitative information, CBP has not emphasized extensive quantitative analysis of survey results, such as statistical analysis that may determine underlying causes of morale problems. Without a complete understanding of which issues are driving low employee morale, DHS risks not being able to effectively address the underlying concerns of its varied employee population. Emphasis on survey analysis that includes demographic group comparisons, benchmarking against similar organizations, and linkage of other analysis efforts outside of FEVS within action plan documentation could assist DHS in better addressing its employee morale problems.
DHS and the selected components routinely update their action plans to address employee survey results in accordance with the Office of Management and Budget's budget guidance; the DHS-wide plan is updated every two years, and components update their plans at least annually. According to OPM's guide for using FEVS results, action planning involves, among other things, identifying goals and actions for improving low-scoring FEVS satisfaction topics, such as reviewing survey results to determine steps to be taken to improve how the agency manages its workforce. DHS-wide and component action plan goals and examples of low-scoring FEVS satisfaction topics are listed in table 2. As part of DHS's efforts to address our high-risk designation of implementing and transforming DHS, DHS described a plan for improving employee morale in its Integrated Strategy for High Risk Management (Integrated Strategy). In June 2012, DHS provided us with its updated Integrated Strategy, which summarized the status of the department's activities for addressing its implementation and transformation high-risk designation. In the Integrated Strategy, DHS identified activities to improve employee job satisfaction scores, among other things. The status of the activities included ongoing analysis of the 2011 FEVS results, launch of the EEESC to address DHS scores on the HCAAF indexes, ongoing coordination between the OCHCO and components to develop action plans in response to the 2011 FEVS results, and launch of an online employee survey in the first quarter of fiscal year 2013. Within the Integrated Strategy action plan for improving job satisfaction scores, DHS reported that three of six efforts were hindered by a lack of resources. For example, resources are a constraining factor for DHS's Office of the Chief Human Capital Officer to consult with components in developing action plans in response to 2011 FEVS results.
Similarly, resources are a constraining factor to deploy online focus discussions on job satisfaction-related issues. According to our review of the action plans created in response to the FEVS and interviews with agency officials, DHS and the selected components generally incorporated the six action planning steps suggested by OPM, but the agency does not have effective metrics to support its efforts related to monitoring. (See figure 5.) We found that, in general, DHS and its components are implementing the six steps for action planning as demonstrated in table 3 below. Three attributes are relevant to this assessment: linkage—determines whether there is a relationship between the performance measure and the goals; clarity—determines whether the performance measures are clearly stated; and measurable target—determines whether performance measures have quantifiable, numerical targets or other measurable values, where appropriate. In general, DHS and component measures satisfied the linkage attribute but did not address the clarity and measurable target attributes. We compared DHS's and the four components' measures of success to the three attributes and found that all 54 measures of success incorporated the linkage attribute, 12 of the 54 measures of success did not address the clarity attribute, and 29 of the 54 measures of success did not address the measurable target attribute. As shown in table 4 below, we found that these measures demonstrate linkage because they align with the action plan goals. However, we determined that the measures demonstrate neither clarity nor a measurable target. Specifically, the measures do not demonstrate clarity because they do not provide enough detail to clearly state the metric used to measure success.
They also do not demonstrate a measurable target because they do not list quantitative goals or provide a qualitative predictor of a desired outcome, which would allow the agency to better determine the extent to which it was making progress toward achieving its goals. Officials provided several reasons why their measures of success may fall short of the attributes for successful metrics. According to OCHCO officials, OCHCO considers accomplishment of an action item step as a success and relies on the measures of success listed in its action plan as a metric for whether the action plan items were implemented. OCHCO considers whether positive responses to survey questions noted in the action plan improve over time as the outcome measure for whether action plans are effective. However, as part of its oversight and feedback on component action plans, OCHCO does not monitor or evaluate measures of success for action planning and therefore is not in a position to determine whether the measures reflect improvement. CBP officials stated that they monitor the change in FEVS results overall, as the intent of the action planning is to improve their scores on the HCAAF indexes. Coast Guard officials stated that they rely on qualitative feedback from employees on action plan items, such as improved training and website updates, to measure action plan performance. TSA officials stated they assess action plan results by tracking completion dates for action items and updating OCHCO on results at least semiannually, and ICE officials stated they have not yet fully developed monitoring efforts to evaluate job satisfaction action planning because the human capital office received funding in the summer of 2011 to implement human capital programs. We acknowledge that positive responses in survey results and positive employee feedback are good indicators that action planning is working.
However, until DHS and its components begin to see positive results, it is important for them to (1) understand whether they are successfully implementing the individual steps of their action plans and (2) make any necessary changes to improve on them. Without specific metrics within the action plans that are clear and measurable, DHS will find it more difficult to assess its efforts to address employee morale problems, as well as to determine whether changes should be made to ensure progress toward achieving its goals. Furthermore, effective measures are key to DHS's action plan because the plan is part of a process that informs the Office of Management and Budget and OPM of DHS efforts to address survey results. According to an OPM official responsible for federal action planning to improve morale, DHS should carefully consider, for each action step, what success means to the agency, such as increased employee engagement targets. The official said that when success is defined, it should not only be clear and measurable, but should also take into account as many of the different demographic groups evaluated as possible. DHS and the selected components have initiated efforts to determine how other entities approach employee morale issues. DHS officials stated they have started to review and implement what they consider to be best practices for improving employee morale, such as the following: DHS working group—OCHCO leads a survey engagement team that holds monthly meetings during which action planning efforts from across the different components are shared and discussed. Representatives from other federal agencies, such as the National Aeronautics and Space Administration and the Federal Aviation Administration, have also attended these meetings and presented their action plans for addressing survey results. Idea Factory—a TSA web-based tool adopted by DHS that empowers employees to develop, rate, and improve innovative ideas for programs, processes, and technologies.
According to a DHS assessment, the Under Secretary for Management plans to use this tool for internal DHS employee communication so as to promote greater job satisfaction and enhance organizational effectiveness. Component officials we interviewed also stated they have started to review, implement, and share what they consider to be best practices for improving morale. For example: ICE officials stated they consult with other agencies and DHS components, such as the U.S. Marshals Service, when addressing morale challenges and developing policies and programs. For example, the U.S. Marshals Service has a critical incident response program for employees encountering a traumatic event, and ICE is exploring adopting a similar program. TSA officials stated that they reached out to Marriott Corporation, CBP, and the National Aeronautics and Space Administration to identify actions for increasing employee rewards and employee confidence in leadership. CBP officials stated they have established several ongoing working groups that routinely meet and share human capital best practices within the agency. One of these working groups has conducted benchmarking work with high-FEVS-scoring federal agencies such as the Social Security Administration, the U.S. Secret Service, the Federal Bureau of Investigation, the Internal Revenue Service, and the Nuclear Regulatory Commission. Coast Guard officials stated they share human capital best practices that may improve job satisfaction with other DHS components, such as (1) their performance appraisal system, which was adopted, in part, DHS-wide; (2) their automated cash award process, shared with FEMA; and (3) Coast Guard supervisor training, shared with both DHS headquarters officials and FEMA. Given the critical nature of DHS's mission to protect the security and economy of our nation, it is important that DHS employees are satisfied with their jobs so that DHS can retain and attract the talent required to complete its work.
Employee survey data indicate that when compared to other federal employees, many DHS employees report being dissatisfied and not engaged with their jobs. It is imperative that DHS understand what is driving employee morale problems and address those problems through targeted actions that address employees’ underlying concerns. DHS has made efforts to understand morale issues across the department, but those efforts could be improved. Specifically, given the annual employee survey data available through the FEVS, DHS and its components could improve their efforts to determine root causes of morale problems by comparing demographic groups, benchmarking against similar organizations, and linking root cause findings to action plans. Uncovering root causes of morale problems could help identify appropriate actions to take in efforts to improve morale. In addition, DHS has established performance measures for its action plans to improve morale, but incorporating attributes such as improved clarity and measurable targets could better position DHS to determine whether its action plans are effective. Without doing so, DHS will have a more difficult time determining whether it is achieving its goals. To strengthen DHS’s evaluation and planning process for addressing employee morale, we recommend that the Secretary of Homeland Security direct OCHCO and component human capital officials to take the following two actions: examine their root cause analysis efforts and, where absent, add the following: comparisons of demographic groups, benchmarking against similar organizations, and linkage of root cause findings to action plans; and establish metrics of success within the action plans that are clear and measurable. We requested comments on a draft of this report from DHS. On September 25, 2012, DHS provided written comments, which are reprinted in appendix V, and provided technical comments, which we incorporated as appropriate. 
DHS concurred with our two recommendations and described actions planned to address them. Specifically: DHS stated that it will ensure that department-wide and component action plans are tied to root causes and that the department will conduct benchmarking against other organizations. DHS also stated that its ability to conduct demographic analysis is limited due to the data set OPM makes available to federal agencies. However, according to OPM, DHS has access to the data necessary for conducting analysis similar to our comparison of demographic groups. DHS stated it will review action plans to ensure that each action is clear and measurable. We also requested comments on a draft of this report from OPM. On September 18, 2012, OPM provided a written response, which is reprinted in appendix VI. OPM’s letter indicated that it reviewed the draft report and had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, the U.S. Office of Personnel Management, and interested congressional committees. The report also will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. We conducted a statistical analysis of the 2011 Federal Employee Viewpoint Survey (FEVS) to assess employee morale at the Department of Homeland Security (DHS). Our analysis addressed two specific questions. First, how does morale at DHS and its components compare with morale at other agencies, holding constant demographic differences among employees? 
Second, to what extent is the morale gap between DHS and other agencies explained by differences in the demographic composition of the DHS workforce versus other unique characteristics of the agency or unmeasured demographic factors? This appendix explains the value of statistical analysis for understanding the employee morale gap, describes the data and methods we used, and provides additional details about our findings, which are summarized in the body of the report. In sum, DHS employees with the same demographic profiles (measured by FEVS) were about 7 percentage points less engaged and 6 points less satisfied than non-DHS employees. Demographic differences (measured by FEVS) between DHS and other agencies are unlikely to explain the overall morale gap. Unique features of DHS (or unmeasured demographics) are more likely to be responsible. DHS middle managers and employees with 1 to 10 years of tenure at their components—those hired after the department's creation—have lower morale than similar employees at other departments. Morale varies widely across DHS components, and some have morale similar to that of non-DHS agencies. Individual offices can strongly influence the morale gap at the component level. The morale gap is smaller for DHS components that existed before the department was created. The morale gap between DHS and other agencies may be due to unique issues within DHS or common issues faced by all agencies in similar circumstances. Unique issues might include developing an agency-wide culture, the decisions and composition of senior leaders, and the inherent uniqueness of homeland security programs. Common characteristics might include having many law enforcement and front-line customer service occupations, and having employees dispersed among many headquarters and field offices. Determining whether unique or shared issues account for the overall morale gap is important for understanding the cause of the problem.
If morale at DHS was not uniquely low, compared with morale at agencies with similar demographics and programs, the agency might learn from peer agencies facing similar challenges. Alternatively, if morale was lower at DHS for reasons unique to the agency, DHS might put more emphasis on understanding its own particular challenges. Distinguishing among these possible explanations can help develop a solution that is narrowly tailored to the problem. Our analysis focused on one group of shared circumstances that might explain the morale gap: employee demographics. If DHS were more likely to employ the types of workers who tend to have lower morale across all agencies of the government, the composition of the workforce might account for the gap to a greater extent than factors specific to DHS. In other words, morale at DHS may be no worse than at other agencies among demographically equivalent employees. Our analysis focused on a limited number of demographic differences, such as location and age, but attitudinal differences about pay, benefits, supervision, training, mentoring, and other human capital issues could be assessed in a similar way. We also considered how large a morale gap there was between employees in various DHS components and work groups and non-DHS employees. The gap at the department level can mask groups of employees with higher or lower morale. Disaggregating morale into small work groups identifies areas of DHS in which morale may be high or low, and thus provides sufficiently detailed data for focused solutions to the problem. Any analysis of morale in employee surveys is limited by the fact that associations among the variables of interest may not represent cause-and-effect relationships. Nevertheless, a limited observational analysis remains useful for evaluating human capital programs.
Since federal agencies cannot easily conduct high-quality randomized controlled trials of various approaches to managing their employees, the use of observational methods is common, often in the form of quantitative survey analyses or qualitative interviews and focus groups. We have previously found that a pragmatic approach to answering necessary policy questions, using the best methods and data that are feasible, is widely supported by academic experts and practitioners in policy analysis. Moreover, statistical theory has shown that observational methods can estimate cause-and-effect relationships in certain conditions. Associations between morale and demographic characteristics are useful for understanding the operation of human capital programs, when interpreted cautiously and in the context of all the available evidence. Our analysis here describes patterns across the demographic groups identified in the 2011 FEVS and determines whether the aggregate differences between DHS and other agencies persist among demographically similar employees. We make no causal interpretations of these relationships, and our approach is only one of several that might be valid and useful. The Office of Personnel Management (OPM) provided us with a version of the 2011 FEVS that included more detailed demographic and organizational data than the file it released to the public. Specifically, our file contained the same variables as the public file but identified more detailed groups of employees. The 2011 survey included responses from 266,376 full-time, permanent federal employees, working for agencies that, according to OPM, constituted 97 percent of the executive branch workforce.
OPM sampled employees within strata formed by supervisory status and organizational subgroup (e.g., component and work group). This produced generally large sample sizes even for many small work groups within components, which allowed us to analyze morale among small groups of employees with an acceptable degree of precision. We focused on two types of variables in the FEVS: (1) employee demographics and (2) OPM's Employee Engagement and Job Satisfaction indexes. The demographic data were collected through a series of questions at the end of the survey, rather than drawn from preexisting administrative records. OPM reported independently developing and validating the engagement indexes using factor-analytic procedures, which are common psychometric statistical methods. The survey items that made up each index used five-point, Likert-type scales, with "agree/disagree," "satisfied/dissatisfied," or "good/poor" response options. We used weights provided by OPM to calculate estimates and sampling variances for all analyses. The weights were the product of the unequal sampling probabilities across strata and nonresponse and post-stratification adjustments. Because some strata had relatively small population sizes—one-quarter with 18 employees or fewer—we corrected for finite populations. One explanation for lower morale at DHS is that its employees could be members of demographic groups that typically have lower morale across all agencies. If this is true, the cause of morale problems and their solutions might focus less on factors that are unique to DHS and more on approaches that apply to any agency with a similar workforce. Table 5 provides basic evidence to help assess the demographic explanation. The table presents the average OPM Engagement Index for several demographic groups in the 2011 FEVS. If engagement problems at DHS were isolated to particular subgroups of employees, the morale gap should vary widely across those subgroups.
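The stratified weighting and finite population correction described above can be illustrated with a simplified sketch. This is not OPM's or GAO's actual estimation code; the stratum, weights, and population count below are invented for demonstration, and the variance formula is a simple approximation.

```python
# Illustrative sketch of weighted estimation with a finite population
# correction (FPC) within one sampling stratum. All values are invented.

def weighted_proportion(responses, weights):
    """Weighted proportion of positive (1) responses."""
    total = sum(weights)
    return sum(r * w for r, w in zip(responses, weights)) / total

def fpc_variance(responses, weights, population_size):
    """Approximate variance of the weighted proportion in one stratum,
    scaled by the finite population correction (1 - n/N)."""
    n = len(responses)
    p = weighted_proportion(responses, weights)
    fpc = 1 - n / population_size  # shrinks variance for small populations
    # Simple with-replacement variance estimate, then apply the FPC.
    return fpc * p * (1 - p) / n

# One hypothetical stratum: 6 sampled employees from a population of 18,
# matching the small stratum sizes noted above.
engaged = [1, 0, 1, 1, 0, 1]
wts = [3.0] * 6                     # equal weights: N/n = 18/6 = 3
p = weighted_proportion(engaged, wts)
v = fpc_variance(engaged, wts, population_size=18)
```

Because the stratum covers a third of its population, the FPC cuts the naive variance estimate by one third, which is why small-population strata required this correction.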
In fact, engagement at DHS is lower than at other agencies (or the difference is statistically indistinguishable from zero) in each demographic subgroup we analyzed, and the gap between DHS and other agencies does not vary by large amounts across most subgroups. However, the gap is somewhat larger among employees who were in certain subgroups, such as those who had 4 to 10 years of experience with their components and who worked outside of headquarters. We developed several statistical models to further assess the demographic explanation. These models held constant the demographic profiles of DHS and non-DHS employees, in order to isolate the portion of the morale gap that was specifically due to non-demographic factors. The models allowed us to compare morale at DHS and other agencies among employees who were in the same demographic groups, as measured by the FEVS. To avoid methodological complications with modeling latent variables, we created a binary measure that identified whether a respondent was engaged or satisfied on each item in the respective scales. Our measure equaled 1 if the respondent gave positive answers (4 or 5) to each item in the index and 0 if the respondent gave neutral or negative responses (1, 2, or 3) to at least one item. Collapsing the scale loses some information, since morale and satisfaction are continuous, latent variables. However, a collapsed measure provides some degree of comparability between OPM's aggregate indexes and our individual-level analysis, since OPM's indexes also collapse the scale. The differences among agencies and subgroups of employees are generally similar using either our measure or OPM's. We focused on the associations between broad measures of morale and fixed demographic characteristics available in the 2011 FEVS. Fixed demographics and broad measures of satisfaction are not subject to the artificially high correlations that a survey's design can produce among attitudinal measures.
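The binary measure described above (coded 1 only if every item in the index received a positive response of 4 or 5) can be sketched as follows. The respondents and the three-item index are hypothetical; the actual indexes contain more items.

```python
# Minimal sketch of the binary morale measure: an employee counts as
# "engaged" only if every item in the index received a positive response
# (4 or 5 on the five-point Likert scale).

def is_engaged(item_responses):
    """Return 1 if all Likert items are positive (4 or 5), else 0."""
    return 1 if all(r >= 4 for r in item_responses) else 0

# Hypothetical respondents answering a three-item index.
respondent_a = [5, 4, 4]   # positive on every item: engaged
respondent_b = [5, 5, 3]   # one neutral answer: not engaged
```

The measure is deliberately strict: a single neutral or negative answer flips the indicator to 0, which is what makes it comparable to OPM's collapsed indexes.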
Pr(Morale_ij = 1) = Λ(α + δ DHS_j + Demog_ij β)  (1)

Pr(Morale_ij = 1) = Λ(α + δ DHS_j + Demog_ij β_G + (DHS_j × Demog_ij) β_D)  (2)

Morale_ij indicates whether employee i at agency j was engaged or satisfied, using the binary measure we calculated from the survey items that make up the OPM indexes (see above). DHS_j indicates whether the employee worked for DHS, Demog_ij is a vector of demographic indicators (listed in table 6), Λ is the logistic function, and α, δ, and the β terms are coefficients that estimate how morale varied among employees in different demographic groups. We included all demographic factors measured by the FEVS that plausibly could have predicted morale and were clearly causally prior to morale. We excluded pay group, however, because of its high correlation with supervisory status. Model 2 allows DHS and non-DHS employees in the same demographic groups to have different levels of morale, as described by β_D and β_G. We estimated each model using cluster-robust maximum likelihood methods, with 365 agency clusters (e.g., the Transportation Security Administration). Our multivariate analysis found that DHS employees remained an average of 6.4 percentage points less engaged (+/- 3.2) (see table 6) and 5.5 points less satisfied (+/- 2.2) (not shown) on our scales than employees at other agencies who had the same age, office location, race, sex, supervisory status, and tenure. This suggests that measured demographic differences between employees at DHS and other agencies do not fully explain the morale gap. Instead, factors that are intrinsic to DHS, such as culture or management practices, or demographic factors not measured by FEVS, such as education or occupation, are likely to be responsible. We can further explore the roles of demographics and unique DHS characteristics by performing an Oaxaca decomposition of the results of model 2, in order to compare DHS with other agencies.
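A stripped-down version of model 1 can illustrate how the logistic specification separates a DHS-specific shift from demographic effects. This is a sketch, not the estimated model: the coefficient values and the two demographic indicators (supervisor, headquarters) are invented for illustration, not estimates from the FEVS data.

```python
import math

# Sketch of model (1): Pr(engaged) = Λ(α + δ·DHS + Demog·β), where Λ is
# the logistic function. All coefficient values below are hypothetical.

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def predicted_engagement(is_dhs, demog, alpha, delta, beta):
    """Predicted probability of engagement for one employee."""
    linear = alpha + delta * is_dhs + sum(b * d for b, d in zip(beta, demog))
    return logistic(linear)

# Two employees with identical demographics (a headquarters supervisor),
# differing only in whether they work at DHS.
alpha, delta = -0.2, -0.3          # delta < 0 encodes a DHS-specific gap
beta = [0.4, 0.1]                  # supervisor, headquarters effects
demog = [1, 1]
gap = (predicted_engagement(0, demog, alpha, delta, beta)
       - predicted_engagement(1, demog, alpha, delta, beta))
```

With these invented coefficients the predicted gap is about 7 percentage points for demographically identical employees, which is the kind of comparison the model supports: because demographics are held fixed, any remaining gap is attributed to the DHS indicator rather than workforce composition.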
Oaxaca decomposition can assess whether the overall morale gap is explained by the demographic characteristics of DHS employees, or whether it is explained by lower morale among DHS employees in the same demographic groups. In other words, does DHS employ an unusually large number of workers who tend to have low morale across all agencies, or do workers with the same backgrounds have uniquely lower morale at DHS? As shown in table 6, the model suggests that the demographic profile of DHS employees (measured by FEVS) tends to slightly increase their engagement and reduce the gap compared with employees at other agencies. The demographic characteristics we can observe in FEVS reduce the overall gaps in the proportion engaged and satisfied on our scales by 0.1 and 1.0 percentage points, respectively. Instead, the morale gap is better explained by unique differences in morale between DHS and other agencies among demographically similar employees. Such intrinsic differences increase the gaps in the proportion engaged and satisfied by 6.4 and 5.5 percentage points, respectively. If the demographic profile of the DHS workforce did not change, but DHS could achieve the same levels of morale as other agencies from the same types of employees, our model predicts that DHS employees would not have lower morale than employees at other agencies. DHS employees with lower-level positions and component tenure were among those with lower morale, relative to employees in other agencies. As shown in figures 6 and 7, our measures of engagement and satisfaction generally increased with seniority and decreased with tenure, among employees at DHS and other agencies. At DHS, however, morale increased more slowly as employees gained more seniority, and it declined more quickly as they spent more time at the agency. For example, the average newly hired employee at DHS and similar employees at other agencies had statistically indistinguishable levels of engagement. 
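The logic of the Oaxaca decomposition can be sketched in a simplified linear form (the actual analysis applied it to the nonlinear logit model, which requires additional steps). The demographic means and coefficients below are hypothetical, and the intercept term is omitted for brevity.

```python
# Simplified linear sketch of a two-fold Oaxaca decomposition. The morale
# gap splits into an "explained" part (differences in demographic
# composition, valued at the reference-group coefficients) and an
# "unexplained" part (different returns to the same demographics).

def oaxaca(mean_x_dhs, mean_x_other, beta_dhs, beta_other):
    """Two-fold decomposition using non-DHS coefficients as the reference."""
    explained = sum((xd - xo) * bo
                    for xd, xo, bo in zip(mean_x_dhs, mean_x_other, beta_other))
    unexplained = sum(xd * (bd - bo)
                      for xd, bd, bo in zip(mean_x_dhs, beta_dhs, beta_other))
    return explained, unexplained

# Hypothetical means of two demographic indicators and their coefficients.
x_dhs, x_other = [0.30, 0.20], [0.25, 0.25]
b_dhs, b_other = [0.10, 0.05], [0.20, 0.15]
explained, unexplained = oaxaca(x_dhs, x_other, b_dhs, b_other)
total_gap = explained + unexplained
```

The identity total_gap = X_dhs·β_dhs - X_other·β_other holds exactly in the linear case, so the two components always sum to the raw gap; a small explained term alongside a large unexplained term is the pattern reported in table 6.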
By their sixth year, however, satisfaction for the DHS employee declined to an average of 18 percentage points, whereas satisfaction for the non-DHS employees declined to an average of only 26 percentage points. A similar pattern exists with respect to supervisory status (see figures 6 and 7). These patterns are particularly important for explaining the overall morale gap, because DHS had about 30 percent more supervisors and about twice as many people with 6 to 10 years of component tenure (as a share of all employees), compared with people at other agencies (according to FEVS). Low employee morale is not a uniform problem throughout DHS. As shown in table 7, engagement varies widely across components within the department, with engagement among employees in some components not significantly different from that of the average employee at non-DHS agencies. These components include the U.S. Coast Guard (Coast Guard), Federal Law Enforcement Training Center (FLETC), Management Directorate (MGMT), and U.S. Secret Service (USSS). Job satisfaction at these components also matches or exceeds that found at other agencies (not shown in table 7). DHS has a number of components whose employees have substantially lower morale than employees at other agencies and elsewhere in the department. The large share of DHS employees working in these components accounts for the overall morale gap between DHS and other agencies. Components with lower morale include the Federal Emergency Management Agency (FEMA), Immigration and Customs Enforcement (ICE), Intelligence and Analysis (IA), National Protection and Programs Directorate (NPPD), Science and Technology (ST), and TSA. The engagement scores of these components range from 9.1 to 13.9 percentage points lower than the average score for non-DHS agencies (see table 7). As a group, these components make up 46 percent of the employees interviewed for the FEVS.
Consequently, the components with substantially lower morale have a large influence on the gap relative to the rest of the government, despite the fact that morale at many smaller DHS components is no worse. Morale at some of the less engaged and satisfied components is, in turn, strongly influenced by particular employee workgroups (see table 7). For example, the average engagement at TSA is 12.8 percentage points (apart from rounding) lower than at non-DHS agencies. Within TSA, however, the collectively large groups of air marshal, law enforcement, and screening workers account for much of the overall difference. A similar pattern applies to the enforcement, removal, and homeland security investigation staffs at ICE, the field operations staff at CBP, and the Federal Protective Service. Such variation within components further suggests that the morale gap is isolated to particular areas within DHS that account for a large proportion of its workforce. At other components, morale is more uniformly lower across most offices. Average engagement at all work groups within FEMA is 5.8 to 17.7 percentage points lower than the non-DHS average, with the exception of two regional offices and the offices of the Administrator and Chief of Staff. The components of ST and IA also have more consistently low morale across work groups. One explanation for why morale varies across components focuses on the length of time each organization has existed. Components that existed prior to the creation of DHS may have had more time to develop successful cultures and management practices than components that policymakers created with the department in 2003. As a result, the preexisting components may have better morale today than components with less mature cultures and practices. To assess this explanation, we analyzed morale among two groups of components, divided according to whether the component was established with the creation of DHS or existed previously (see table 8). 
We considered three components to be preexisting—FLETC, USSS, and the Coast Guard—and the rest to be newly created. Because TSA was created about 2 years before DHS, we included it with components that were created with DHS. Our analysis shows that employees at the more recently created components were less engaged and satisfied on average than employees at the preexisting components and at non-DHS agencies. For the preexisting components, engagement was about 2.2 percentage points higher than at the rest of the government, and the difference in satisfaction was small (less than 1.4 percentage points). In contrast, engagement and satisfaction at the more recently created components were about 8 and 5.1 percentage points lower than at the rest of the government, respectively. We developed a statistical model to confirm whether the differences among components persist, holding constant demographic differences among their employees. In an alternative version of model 1 above, we replaced DHS with a vector of variables indicating whether the employee worked for DHS components or at an agency other than DHS. All other parts of the model were identical. The model estimates generally confirmed the differences in engagement between non-DHS and DHS component employees in the raw data (see table 9), with two exceptions. The model estimated that, holding constant demographic differences, employees in the Management Directorate and Office of the Secretary were 6.9 and 7.7 percentage points less engaged on average than employees in non-DHS agencies. This suggests that the engagement gap for employees in these offices is more similar to the gap at other offices, holding constant the demographic differences among offices measured by FEVS. The model estimated that differences in satisfaction between the components and non-DHS agencies were generally similar to such differences in engagement (see table 9). 
The fact that differences among components remained, even among demographically equivalent employees, suggests that either unmeasured demographic variables or intrinsic characteristics of the components are responsible for the differences in morale. Our analysis discussed in this appendix has a narrow scope: assessing whether demographic differences among employees explain the morale differences across DHS and non-DHS employees. Consequently, DHS or others could expand and improve upon our findings. Future work could examine whether attitudinal differences among employees at DHS and other agencies explain the overall morale gap, in addition to demographic differences. The 2011 FEVS measures employee attitudes about pay, benefits, health and safety hazards, training, supervisors, and other issues that could vary meaningfully between employees at DHS and other agencies and, therefore, explain why DHS has lower morale. One might include these factors in a decomposition similar to the one we performed in this appendix. This could further assess how factors unique to DHS and factors that are common across all agencies explain the overall morale gap. A broader attitudinal analysis likely would require the use of more sophisticated statistical methods for estimating the values of and relationships among latent variables. The broad measures of morale we analyze in this appendix, such as the OPM Employee Engagement index, are made up of responses to questions on smaller dimensions, such as leadership and supervision. To avoid simply replicating the correlations that were used to create the indexes, latent variable models could be useful to examine the relationships among these concepts and compare morale on latent scales between DHS and non-DHS agencies. This was beyond the scope of our work. 
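The decomposition approach discussed above can be illustrated with a minimal sketch. All figures below are hypothetical, invented for illustration, and the single demographic factor (supervisory status) stands in for the full set of demographic variables measured by FEVS; this is not GAO's actual model or data.

```python
# Hypothetical sketch of an Oaxaca-Blinder-style decomposition of a morale
# gap into a part explained by demographic composition and a remainder.
# All numbers are invented; supervisory status stands in for the full set
# of demographic variables.

dhs = {"share_supervisor": 0.20,
       "mean_by_group": {"supervisor": 72.0, "nonsupervisor": 58.0}}
non_dhs = {"share_supervisor": 0.30,
           "mean_by_group": {"supervisor": 75.0, "nonsupervisor": 64.0}}

def overall_mean(g):
    """Overall engagement as a share-weighted average of group means."""
    s = g["share_supervisor"]
    m = g["mean_by_group"]
    return s * m["supervisor"] + (1 - s) * m["nonsupervisor"]

gap = overall_mean(non_dhs) - overall_mean(dhs)

# Portion of the gap attributable to the two workforces having different
# demographic mixes, evaluated at non-DHS group means (one common choice
# of reference weights).
explained = (non_dhs["share_supervisor"] - dhs["share_supervisor"]) * (
    non_dhs["mean_by_group"]["supervisor"]
    - non_dhs["mean_by_group"]["nonsupervisor"])

# Remainder: the gap among demographically equivalent employees.
unexplained = gap - explained
```

With these invented inputs, most of the gap is left unexplained by composition, mirroring the finding that the DHS morale gap persists even among demographically equivalent employees.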
The objectives for this report were to evaluate (1) how DHS employee morale compares with that of other federal government employees and (2) to what extent DHS and its selected components determined the root causes of employee morale and developed action plans to improve morale. To address our objectives, we evaluated both DHS-wide efforts and efforts at four selected components to address employee morale—CBP, ICE, TSA, and the Coast Guard. We selected the four DHS components based on their workforce size and how their 2011 job satisfaction and engagement index scores compared with the non-DHS average. The components selected had scores above, below, and similar to the average: TSA—below average on both indexes, constituting 25 percent of the DHS workforce; ICE—below average on both indexes, accounting for 9 percent of the DHS workforce; CBP—at the non-DHS average for satisfaction and below on engagement, representing 27 percent of the DHS workforce; and the civilian portion of the Coast Guard—at the non-DHS average for satisfaction and above on engagement, composing 4 percent of the DHS workforce. Together these components represent 65 percent of DHS’s workforce. To evaluate how DHS’s employee morale compares with that of other federal government employees, we analyzed employee responses to the 2011 FEVS. We determined that the 2011 FEVS data were reliable for the purposes of our report, based on interviews with OPM staff, review and analysis of technical documentation of its design and administration, and electronic testing. We used two measures created by OPM—the employee job satisfaction and engagement indexes—to describe morale across the federal government and within DHS. We calculated these measures for various demographic groups, DHS components, and work groups, in order to compare morale at DHS and other agencies among employees who were demographically similar, in part using statistical models. 
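The within-group comparison described above—contrasting morale among demographically similar employees—can be sketched as follows. The respondent records and index scores are invented for illustration and are not FEVS data.

```python
# Hypothetical sketch: comparing average engagement-index scores between
# DHS and non-DHS respondents within the same demographic category, so the
# comparison holds that factor constant. All records are invented.

respondents = [
    {"agency": "DHS",   "pay_group": "GS 5-8",  "engagement": 55.0},
    {"agency": "DHS",   "pay_group": "GS 5-8",  "engagement": 61.0},
    {"agency": "other", "pay_group": "GS 5-8",  "engagement": 66.0},
    {"agency": "other", "pay_group": "GS 5-8",  "engagement": 70.0},
    {"agency": "DHS",   "pay_group": "GS 9-12", "engagement": 63.0},
    {"agency": "other", "pay_group": "GS 9-12", "engagement": 68.0},
]

def mean_engagement(agency, pay_group):
    """Average engagement for one agency group within one pay category."""
    scores = [r["engagement"] for r in respondents
              if r["agency"] == agency and r["pay_group"] == pay_group]
    return sum(scores) / len(scores)

# Gap within a single pay category rather than overall, so pay level is
# held constant in the comparison.
gap_gs5_8 = (mean_engagement("other", "GS 5-8")
             - mean_engagement("DHS", "GS 5-8"))
```

Repeating the same calculation for each demographic category available in the survey is what allows a comparison of morale among demographically similar employees, before turning to regression models that hold several factors constant at once.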
Appendix I describes our methods and findings in more detail. In addition, we interviewed employee groups about morale to identify examples of what issues may drive high and low morale within DHS. We selected the employee groups based on the size of the employee group within each selected component, ensuring we met with employees from employee groups that composed significant proportions of FEVS respondents, such as screeners from TSA (61 percent of TSA respondents) and homeland security investigators from ICE (33 percent of ICE respondents). The comments received from these interviews are not generalizable to entire groups of component employees, but provide insights into the differing issues that can drive morale. To determine the extent to which DHS and the selected components identified the root causes of employee morale and developed action plans for improvements, we reviewed analysis results, interviewed agency human capital officials and representatives of employee groups, and evaluated action plans for improving morale. To identify criteria for determining effective root cause analysis using survey data, we reviewed both OPM and Partnership for Public Service guidance for action planning based on annual employee survey results. On the basis of these guidance documents, we identified factors that should be considered in employee survey analysis that attempts to understand morale problems, such as use of demographic group comparisons, benchmarking results against those of similar organizations, and linking the results of root cause analyses to action planning efforts. We evaluated documents summarizing DHS-wide and selected component root cause analyses of the 2011 FEVS to determine whether the factors we identified were included in the analyses. In addition, we interviewed DHS officials who conducted the analyses in order to fully understand root cause analysis efforts. 
To identify criteria for determining agency action plans, we reviewed OPM guidance for using FEVS results and previous GAO work on agencies’ success in measuring performance. On the basis of these guidance documents, we identified OPM’s six steps that should be considered in developing action plans and three attributes that were relevant for measuring action plan performance—linkage, clarity, and measurable target. We compared the action plans with these criteria to determine whether these items were included in the action plans. In addition, we interviewed DHS and component officials to identify efforts to leverage best practices for improving morale. We conducted this performance audit from October 2011 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 2007 DHS’s Office of the Chief Human Capital Officer (OCHCO) has completed several efforts to determine root causes of morale DHS-wide. Focus groups. In 2007 OCHCO conducted focus groups to determine employee concerns related to employee morale. DHS’s focus group effort probed for insights into four areas—(1) leadership, (2) communication, (3) empowerment, and (4) resources—and highlighted concerns raised by focus group participants in each of those areas. For example, within the leadership area, OCHCO’s focus group analysis found that the Customs and Immigration reorganization was a topic discussed by many of the U.S. Customs and Border Protection (CBP), U.S. 
Immigration and Customs Enforcement (ICE), and Citizenship and Immigration Services (CIS) personnel, especially what they felt was a lack of mission understanding on the part of their managers. According to the analysis, non-supervisory participants expressed dissatisfaction with the combination of three types of inspection functions to present “one face at the border.”

One Face at the Border

For operations at ports of entry, in September 2003 CBP issued its plan for consolidating the inspection functions formerly performed by separate inspectors from the three legacy agencies—customs inspectors from U.S. Customs, immigration inspectors and Border Patrol from the former Immigration and Naturalization Service, and the agriculture border inspectors from the Department of Agriculture’s Animal and Plant Health Inspection Service. The plan, referred to as “One Face at the Border,” called for unifying and integrating the legacy inspectors into two new positions—a CBP officer and a CBP agricultural specialist. The new CBP officer would serve as the frontline officer responsible for carrying out the priority anti-terrorism mission as well as the traditional customs and immigration inspection functions while also identifying and referring goods in need of a more extensive agricultural inspection to the agricultural specialist. CBP anticipated that having a well-trained and well-integrated workforce that could carry out the complete range of inspection functions involving the processing of individuals and goods would allow it to utilize its inspection resources more effectively and enable it to better target potentially high-risk travelers. Together, CBP envisioned the result to be more effective inspections and enhanced security at ports of entry while also accelerating the processing of legitimate trade and travel.

Focus group results were distributed to DHS components for consideration in action planning efforts, according to OCHCO officials. 
CBP, CIS, TSA, the Federal Emergency Management Agency (FEMA), and the Federal Law Enforcement Training Center each addressed at least one of the focus group results relating to leadership, communication, empowerment, or resources in subsequent action plans, according to OCHCO officials. Statistical analysis. In 2008 OCHCO performed statistical analysis of Federal Employee Viewpoint Survey (FEVS) data, beyond examining high- and low-scoring questions, in an effort to determine what workplace factors drove employee job satisfaction. Specifically, the analysis involved isolating which sets of FEVS questions most affect employee job satisfaction. The analysis found that five work areas identified in FEVS questions drive employee job satisfaction: (1) performance and rewards, (2) supervisor support, (3) physical conditions and safety, (4) senior leadership effectiveness, and (5) the DHS mission. According to OCHCO officials, DHS components were encouraged to conduct follow-up discussions at the lowest possible organizational level based on component survey scores in each of the five work areas. However, OCHCO officials stated that they are not aware of any results of this effort because OCHCO did not track or follow up with the components on the effect of key driver discussions that may have occurred. In addition, greater emphasis was placed on supervisor performance management training as a result of the analysis, according to OCHCO officials. Exit survey. In 2011, DHS began administering an exit survey to understand why employees choose to leave their DHS positions. Specifically, according to OCHCO officials, the DHS exit survey was designed to determine where departing employees were moving both inside and outside of DHS, to identify barriers related to diversity, to identify reasons that veterans may be leaving DHS, and to capture feedback from interns. 
The 2011 exit survey found, among other things, that 27 percent of departing employees who responded to the exit survey were staying within DHS or moving to a different position, and an additional 12 percent of respondents were retiring. Lack of quality supervision and advancement opportunities were the top reasons responding employees indicated for leaving their positions. Exit survey results are shared with DHS components on a quarterly and annual basis. 2011 FEVS analysis. For the 2011 FEVS, DHS’s OCHCO evaluated the results by comparing Human Capital Assessment and Accountability Framework (HCAAF) index results by component. The analysis showed where the lowest index scores were concentrated. As shown in figure 8, lower scores across the indexes were concentrated among several components, including Intelligence and Analysis, the Transportation Security Administration (TSA), ICE, the National Protection and Programs Directorate, and FEMA. The analysis also determined how DHS’s scores on the four indexes trended over time and compared with governmentwide averages. As shown in figure 9, DHS-wide scores have generally trended upward over time, but continue to lag behind governmentwide averages for each index. Employee Engagement Executive Steering Committee (EEESC). In January 2012 the DHS Secretary directed all component heads to take steps to improve employee engagement through the launch of the EEESC. According to OCHCO officials, the EEESC was launched in response to congressional concerns about DHS employee morale and the Partnership for Public Service results showing DHS’s low placement on the list of Best Places to Work. 
The EEESC is charged with serving as the DHS corporate body responsible for identifying DHS-wide initiatives to improve employee engagement, overseeing the efforts of each DHS component to address employee engagement, and providing periodic reports to the Under Secretary for Management, Deputy Secretary, and Secretary on DHS-wide efforts to improve employee morale and engagement. Specifically, the Secretary directed component heads to develop and assume responsibility for employee engagement improvement plans, identify and assign specific responsibilities for improved employee engagement to component senior executive performance objectives, identify and assign a senior accountable official to serve on the EEESC, conduct town hall meetings with employees, attend a Labor-Management Forum meeting, and provide monthly reports on actions planned and progress made to the Office of the Chief Human Capital Officer. As of August 2012, each of the Secretary’s directives had been completed, with the exception of assigning responsibilities for improved employee engagement to senior executive performance objectives, which DHS plans to implement in October 2012 as part of the next senior executive performance period. The EEESC met in February 2012, and component representatives shared their latest action plans and discussed issues of joint concern. In preparation for the 2012 FEVS, the EEESC released a memorandum from the Secretary describing the responsibilities of the EEESC, highlighting department actions, and encouraging employee participation in the FEVS, which began in April 2012. The EEESC also agreed that a corresponding message should be released from component heads outlining specific component actions taken in response to past survey results and encouraging participation in the next survey. 
In an April 2012 EEESC meeting, the Partnership for Public Service provided a briefing describing the Best Places to Work in the Federal Government rankings and best practices across the government for improving morale scores. The EEESC members also discussed methods for improving the response rates for the upcoming survey and engaged in an action planning exercise designed to help identify actions for department-wide deployment, according to OCHCO officials. As of August 2012, EEESC action items were in development and had not been finalized. According to OCHCO officials, the EEESC plans to decide on action items by September 2012, but a projected date for full implementation has yet to be established because the actions have not been decided upon. In addition to the DHS-wide efforts, the components we selected for review—ICE, TSA, the U.S. Coast Guard (Coast Guard), and CBP—conducted varying levels of analyses regarding the root causes of morale issues to inform agency action planning efforts. The selected components each analyzed FEVS data to understand leading issues that may relate to morale, but the results indicated only where job satisfaction problem areas may exist and did not identify the causes of dissatisfaction within employee groups. The four selected components’ 2011 FEVS analyses and results are described below. TSA. In its analysis of the 2011 FEVS, TSA focused on areas of concern across groups, such as pay and performance appraisal concerns, and also looked for insight on which employee groups within TSA may be more dissatisfied with their jobs than others by comparing employee group scores on satisfaction-related questions. TSA compared its results with CBP results, as well as against DHS and governmentwide results. When comparing CBP and TSA scores, TSA found that the greatest differences in scores were on questions related to satisfaction with pay and whether performance appraisals were a fair reflection of performance. 
TSA scored 40 percentage points lower on pay satisfaction and 25 percentage points lower on performance appraisal satisfaction. In comparing TSA results with DHS and governmentwide results, TSA found that it was below the averages for all FEVS dimensions. TSA also evaluated FEVS results across employee groups by comparing dimension scores for headquarters staff, the Federal Air Marshals, Federal Security Director staff, and the screening workforce. TSA found that the screening workforce scored at or below the scores for all other groups across all of the dimensions. ICE. In its analysis of the 2011 FEVS, ICE identified its FEVS questions with the top positive and negative responses. ICE found that its top strength was employees’ willingness to put in extra effort to get a job done. ICE’s top negative result was employees’ perceptions that pay raises did not depend on how well employees perform their jobs. ICE also sorted the primary low-scoring results into action planning themes, such as leadership, empowerment, and work-life balance. ICE found, among other things, that employee views on the fairness of its performance appraisals were above DHS’s average but that views on employee preparation for potential security threats were lower. When comparing ICE’s results with average governmentwide figures, ICE found, among other things, that it was lower on all of the HCAAF indexes, including job satisfaction. According to ICE human capital officials, future root cause analysis plans for the 2012 FEVS are to benchmark FEVS scores against those of similar law enforcement agencies, such as the Drug Enforcement Administration; the Federal Bureau of Investigation; the Federal Law Enforcement Training Center; the U.S. Secret Service; the Bureau of Alcohol, Tobacco, Firearms and Explosives; and the U.S. Marshals Service. CBP. In its analysis of the 2011 FEVS, CBP focused on trends since 2006. 
For example, the analysis showed that CBP increased its scores by 5 or more percentage points for 36 of the 39 core FEVS questions. CBP highlighted its greatest increases in HCAAF areas, such as results-oriented performance, which showed a 21 percent improvement over 2006 responses to the question “My performance appraisal is a fair reflection of my performance.” The analysis also identified areas in greatest need of improvement, which showed progress since 2006 but continued low scores, such as questions on dealing with poor performers who cannot or will not improve (28 percent positive), promotions based on merit (28 percent positive), and differences in performance being recognized (34 percent positive). Coast Guard. In its review of high and low 2011 FEVS responses, the Coast Guard identified employee responses to two questions that warranted action planning items—(1) How satisfied are you with the information you receive from management on what’s going on in your organization (53 percent positive) and (2) My training needs are assessed (51 percent positive). Coast Guard officials did not identify additional FEVS analyses that were used to inform action planning.

Appendix IV: Selected Components’ Data Sources for Evaluating Morale, Other than the Federal Employee Viewpoint Survey

Purpose: Identify why employees leave the agency and where they are going. Summary of results and how used: The number of exit survey respondents from ICE was too low to identify any results, and the results have not been used to address morale as of June 2012, according to ICE officials.

Last conducted in March 2012, the FOCS is a data-gathering tool for addressing the extent to which employees perceive their organizational culture as one that incorporates mutual respect, acceptance, teamwork, and productivity among individuals who are diverse in the dimensions of human differences. 
Additionally, ICE conducts focus groups and individual one-on-one interview sessions to obtain clarifying information pertaining to the FOCS results and written comments. The survey showed low employee perceptions of ICE as an organization where people trust and care for each other, relative to the federal average, according to ICE officials. The results from the FOCS and feedback from the focus groups and one-on-one interview sessions are provided to ICE program offices with recommended strategies to improve each office’s organizational climate.

Conducted in 2007, focus groups were launched in response to the 2006 annual employee survey results, which showed CBP below DHS and governmentwide averages. The focus groups identified employees’ perceived problems in specific work environment areas, such as leaders lacking supervisory or communication skills. Among other things, the issues identified by focus group participants allowed CBP to develop action plans that addressed these issues, according to CBP officials.

Most Valuable Perspective online survey (MVP): Launched in 2009, this survey was implemented to solicit employee opinions on one topic per quarter as a mechanism for gathering further insights on FEVS results. The MVP was implemented as a continuation of the CBP focus groups completed in 2007. In the July 2012 MVP, which solicited employee preferences for future CBP webcasts to employees, employees suggested retirement planning and financial management as their top two preferences. CBP’s action planning process in response to FEVS results includes consideration of MVP results, according to CBP officials.

Data source: U.S. Office of Personnel Management Organizational Assessment Survey (OAS). Purpose: Beginning in 2002, in order to provide the granularity, detail, and reliability needed to ensure the best organizational value, the Coast Guard adopted the OAS as its primary personnel attitude survey, according to Coast Guard officials. 
The OAS is administered to military (active and reserve) and civilian personnel biennially. Summary of results and how used: OPM’s report to the Coast Guard on the 2010 OAS results identified seven strong organizational areas (diversity, teamwork, work environment, leadership and quality, communication, employee involvement, and supervision) and three areas for improvement (innovation, use of resources, and rewards/recognition). Coast Guard unit commanders and headquarters program managers use the OAS to support overall Coast Guard improvement, using OAS results in conjunction with other information as part of routine unit and program leadership and management.

Purpose: Identify why employees leave the agency; launched in 2005. Summary of results and how used: Top reasons for leaving overall were personal reasons, career advancement, management, schedule, and pay. Each quarterly report includes actions managers should take to reduce turnover. A real-time reporting system is also available for each airport and office within TSA so managers can gain access to their results and use them to reduce turnover and make improvements, according to DHS officials. Results from the exit survey were also used by TSA officials in updating TSA’s action plan, according to TSA officials. However, the July 2012 action plan did not link exit survey findings to action items.

Purpose: An online tool for gathering employee suggestions for agency improvement. Each week, approximately 4,000 TSA employees log on to rate, comment, or search, or to submit ideas of their own. The Idea Factory team reviews all submissions and uses Idea Factory challenges to implement solutions to issues. Summary of results and how used: Results were not available for our evaluation. 
Purpose: Provides informal problem resolution services with the mission of promoting fair and equitable treatment in matters involving TSA, according to TSA officials. The Ombudsman assists customers by identifying options, making referrals, explaining policies and procedures, coaching individuals on how to constructively deal with problems, facilitating dialogue, and mediating disputes. Summary of results and how used: Results were not available for our evaluation.

Purpose: Each airport and TSA headquarters has an employee advisory council made up of elected members who work on understanding and addressing a variety of workplace issues. Summary of results and how used: Results were not available for our evaluation.

In addition to the contact named above, Dawn Locke (Assistant Director), Sandra Burrell (Assistant Director), Lydia Araya, Ben Atwater, Tracey King, Kirsten Lauber, Jean Orland, Jessica Orr, and Jeff Tessin made key contributions to this report.
DHS is the third largest cabinet-level department in the federal government, employing more than 200,000 staff in a broad range of jobs. Since it began operations in 2003, DHS employees have reported having low job satisfaction. DHS employee concerns about job satisfaction are one example of the challenges the department faces in implementing its missions. GAO has designated the implementation and transformation of DHS as a high-risk area, including its management of human capital, because it represents an enormous and complex undertaking that will require time to achieve in an effective and efficient manner. GAO was asked to examine (1) how DHS's employee morale compared with that of other federal employees and (2) the extent to which DHS and selected components have determined the root causes of employee morale and developed action plans to improve morale. To address these objectives, GAO analyzed survey evaluations, focus group reports, and DHS and component action planning documents, and interviewed officials from DHS and four components, selected based on workforce size, among other things. Department of Homeland Security (DHS) employees reported having lower average morale than the average for the rest of the federal government, but morale varied across components and employee groups within the department. Data from the 2011 Office of Personnel Management (OPM) Federal Employee Viewpoint Survey (FEVS)--a tool that measures employees' perceptions of whether and to what extent conditions characterizing successful organizations are present in their agencies--showed that DHS employees had 4.5 percentage points lower job satisfaction and 7.0 percentage points lower engagement in their work overall. Engagement is the extent to which employees are immersed in their work and spending extra effort on job performance. 
Moreover, within most demographic groups available for comparison, DHS employees scored lower on average satisfaction and engagement than the average for the rest of the federal government. For example, within most pay categories DHS employees reported lower satisfaction and engagement than non-DHS employees in the same pay groups. Levels of satisfaction and engagement varied across components, with some components reporting scores above the non-DHS averages. Several components with lower morale, such as the Transportation Security Administration (TSA) and Immigration and Customs Enforcement (ICE), made up a substantial share of FEVS respondents at DHS and accounted for a significant portion of the overall difference between the department and other agencies. In addition, components that were created with the department or shortly thereafter tended to have lower morale than components that previously existed. Job satisfaction and engagement varied within components as well. For example, employees in TSA's Federal Security Director staff reported higher satisfaction (by 13 percentage points) and engagement (by 14 percentage points) than TSA's airport security screeners. DHS has taken steps to determine the root causes of employee morale problems and implemented corrective actions, but it could strengthen its survey analyses and metrics for action plan success. To understand morale problems, DHS and selected components took steps such as implementing an exit survey and routinely analyzing FEVS results. Components GAO selected for review--ICE, TSA, the Coast Guard, and Customs and Border Protection--conducted varying levels of analyses to understand the leading issues related to the root causes of morale problems. DHS and the selected components planned actions to improve FEVS scores based on analyses of survey results, but GAO found that these efforts could be enhanced. 
Specifically, 2011 DHS-wide survey analyses did not include evaluations of demographic group differences on morale-related issues, the Coast Guard did not perform benchmarking analyses, and the extent to which DHS and its components used root cause analyses in their action planning was not evident from documentation. Without these elements, DHS risks not being able to address the underlying concerns of its varied employee population. In addition, GAO found that despite having broad performance metrics in place to track and assess DHS employee morale on an agency-wide level, DHS does not have specific metrics within the action plans that are consistently clear and measurable. As a result, DHS's ability to assess its efforts to address employee morale problems and determine if changes should be made to ensure progress toward achieving its goals is limited. GAO recommends that DHS examine its root cause analysis efforts and add the following, where absent: comparisons of demographic groups, benchmarking, and linkage of root cause findings to action plans; and establish clear and measurable metrics of action plan success. DHS concurred with the recommendations.
The FTR, issued by GSA, implements statutory and OMB requirements and policies for most federal civilian employees and others authorized to travel at government expense. The purpose of the FTR is to ensure that official travel is conducted responsibly and at minimal administrative expense. Unless exempt by specific legislation, executive agencies, wholly owned government corporations, and independent establishments are expected to follow the FTR, including its provisions related to premium class travel. DOD’s uniformed servicemembers and State employees exempt from the FTR are covered by their agencies’ travel regulations. OMB’s general policy related to travel is that the taxpayers should pay no more than necessary to transport government officials. Consistent with this principle, the FTR states that with limited exceptions, travelers must use coach class accommodations for both domestic and international travel. Premium class travel can occur only when the traveler’s agency specifically authorizes the use of such accommodations (authorization) and only under specific circumstances (justification). Specifically, the FTR states that first class accommodations are authorized only when at least one of the following conditions exists: coach class airline accommodations or premium class other than first class airline accommodations are not reasonably available; use of first class is necessary to accommodate a disability or other special need that is substantiated in writing by a competent medical authority; exceptional security circumstances require first class travel; or first class travel is required because of agency mission. 
The FTR authorizes premium class accommodations other than first class (business class) when at least one of the following conditions exists: regularly scheduled flights between origin/destination points provide only premium class, and this is certified on the travel voucher; coach class is not available in time to accomplish the mission, which is urgent and cannot be postponed; premium class travel is necessary to accommodate the traveler’s disability or other physical impairment, and the condition is substantiated in writing by a competent medical authority; premium class travel is needed for security purposes or because exceptional circumstances make its use essential to the successful performance of the mission; coach class accommodations on authorized/approved foreign carriers do not provide adequate sanitation or meet health standards; premium class accommodations would result in overall savings to the government because of subsistence costs, overtime, or lost productive time that would be incurred while awaiting coach class accommodations; transportation is paid in full by a nonfederal source; travel is to or from a destination outside the continental United States, and the scheduled flight time (including stopovers) is in excess of 14 hours (however, a rest stop en route or a rest period upon arrival is prohibited when travel is authorized by premium class accommodations); or premium class travel is required because of agency mission. As specified above, employees traveling in premium class have to meet both authorization and justification requirements to qualify, meaning that employees who, for example, traveled premium class on a trip exceeding 14 hours would violate the FTR if they traveled premium class without receiving specific authorization to do so. Agencies subject to the FTR have generally issued internal policies and procedures to clarify the premium class travel provisions of the FTR, implement these provisions, or both. 
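The two-prong test described above—specific agency authorization plus a qualifying justification—can be sketched as a simple check. The condition labels below are invented shorthand for illustration, not FTR citations, and the list is abbreviated rather than exhaustive.

```python
# Hypothetical sketch of the FTR's two-part test for premium class travel:
# the trip must be specifically authorized by the agency AND justified by
# a qualifying condition. Condition names are invented labels, and this
# abbreviated set does not enumerate every FTR condition.

QUALIFYING_JUSTIFICATIONS = {
    "coach_not_reasonably_available",
    "disability_certified_by_medical_authority",
    "exceptional_security_circumstances",
    "required_by_agency_mission",
    "flight_over_14_hours_outside_conus",
}

def premium_travel_permitted(authorized_by_agency: bool,
                             justification: str) -> bool:
    """Both prongs must hold; failing either would violate the FTR."""
    return authorized_by_agency and justification in QUALIFYING_JUSTIFICATIONS
```

For example, a 15-hour international flight alone is not sufficient: with `authorized_by_agency=False`, the check fails even though the 14-hour condition is met, mirroring the point that such travel without specific authorization would violate the FTR.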
When issuing implementing policy, agencies must follow executive branch policy, which specifies that a subordinate organization seeking to establish implementing regulations or guidance may make the regulations more stringent but may not relax the rules established by higher-level guidance. For example, an agency's implementing policy related to premium class travel because of disability can require that the traveler provide medical certification that is updated annually, but cannot waive the requirement that a certification by a competent medical authority be provided. DOD and State have also issued their own detailed implementing policies and procedures that cover all aspects of travel, from authorization to reimbursement to regulations for premium class. DOD issues the Joint Federal Travel Regulations (JFTR) for uniformed servicemembers not covered by the FTR and also updates the Joint Travel Regulations (JTR), which implement the FTR for DOD civilian employees. Similarly, State's Foreign Affairs Handbook and Foreign Affairs Manual (FAM) represent major sets of policies and procedures related to travel reimbursements for Foreign Service employees pursuant to the Foreign Service Act of 1980. With respect to premium class travel, the regulations contained in the JTR, the JFTR, and the FAM are generally consistent with the FTR. For 12 months of travel from July 1, 2005, through June 30, 2006, the government spent more than $230 million on over 53,000 airline tickets that contained at least one leg of premium class travel. Using statistical sampling, we estimated that at least $146 million of this premium class travel was unauthorized or unjustified. In addition, our statistical sample population contained a number of flights taken by government executives. 
Specifically, we found that senior executives (senior-level executives and presidential appointees with Senate confirmation), who constituted about one-half of 1 percent of the federal workforce, accounted for 15 percent of premium class travel. The government bought more than 53,000 premium class tickets totaling over $230 million during the 12-month period from July 1, 2005, to June 30, 2006. We identified premium class tickets as any ticket that contained at least one leg of travel in first or business class. Because the government did not maintain centralized data on premium class travel, we extracted ticket information from the government credit card banks' databases of government individually and centrally billed account travel, which included over 6.1 million transactions for airline tickets valued at almost $3.4 billion. Although premium class travel represented less than 1 percent of the total flights taken governmentwide, the high cost of premium class tickets meant that premium class travel accounted for nearly 7 percent of total dollars spent on government airline travel. In some instances, the price difference between a business class ticket and a comparable coach class ticket may be negligible, particularly for travel within Europe. However, on routes where GSA had awarded a government fare for both economy and business classes, an average business class ticket in fiscal year 2006 cost more than 5 times as much as a coach class ticket. For example, a traveler's one-way business class ticket from Madrid to Washington, D.C., was over 7 times the price of a coach class ticket. First class tickets can be even more costly. For example, a traveler from the Federal Reserve Board (FRB) flew first class between Washington, D.C., and London for more than $12,000, or more than 16 times the price of a coach class flight. 
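The spending shares cited above can be checked directly from the totals given in the text:

```python
# Reproducing the spending shares in the text from its own totals.
premium_dollars = 230_000_000      # "more than $230 million" on premium tickets
total_air_dollars = 3_400_000_000  # "almost $3.4 billion" in all airline tickets
premium_tickets = 53_000           # "over 53,000" premium class tickets
total_tickets = 6_100_000          # "over 6.1 million" ticket transactions

dollar_share = premium_dollars / total_air_dollars  # ~6.8%: "nearly 7 percent"
ticket_share = premium_tickets / total_tickets      # ~0.9%: "less than 1 percent"
```

The roughly eightfold gap between the ticket share and the dollar share reflects the cost multiples (5x to 16x coach fares) discussed above.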
To put the total amount spent on premium class travel into perspective, the over $230 million the government spent on premium class travel during these 12 months exceeded the travel expenses on the government travel cards of most individual government agencies, including major executive agencies such as the Departments of Agriculture, Energy, Health and Human Services, Labor, Transportation, and the Treasury. Premium class travel usage also varied significantly across federal agencies, both in the total amount and the frequency with which premium class travel was used. Some agencies spent only a fraction of total airline expenditures on premium class, while premium class travel at other agencies was substantial. Data provided by the travel card banks showed that while DOD was the second largest user of premium class travel based on total dollars, it had substantially reduced its use of premium class travel charged to the government credit cards since 2004, following our DOD premium class audit. Specifically, DOD's premium class charges decreased from more than $124 million over fiscal years 2001 and 2002, or more than $60 million annually, to slightly over $23 million in the 12-month period ending June 30, 2006—about 1 percent of its air travel expenditures. In contrast, tickets bought by State for foreign affairs agency travelers continued to account for the largest portion of governmentwide premium class travel. Our data showed that the over $140 million that State spent on nearly 30,000 premium class tickets for foreign affairs agency travelers represented over 60 percent of its total air expenditures during this period. This amount, which is comparable to our previous finding at State, could decrease in the future based on actions State took in response to our previous audit findings and subsequent to the data period we audited. For a detailed breakdown of agencies' overall use of premium class travel, see appendix II. 
Table 1 shows the results of our analysis of the frequency at which selected agencies purchased premium class tickets for flights involving airports in the United States and locations in Africa, the Middle East, and parts of Europe that likely lasted more than 14 hours. As shown, large differences existed in agencies' use of premium class flights to these locations. For example, 3 percent of DOD and Department of Homeland Security travelers flying to these locations flew premium class. In contrast, 72 percent of State's foreign affairs agency travelers and 83 percent of MCC's travelers flying to these same locations flew premium class. Of the over $230 million in governmentwide premium class travel, we estimated that 67 percent of trips were not properly authorized, not properly justified, or both. In all, the government likely spent at least $146 million on premium class travel that was improper. As shown in table 2, we selected two key transaction-level controls (proper authorization and proper justification) for statistical sampling. Using these attribute tests, we estimated that 28 percent of governmentwide premium class travel was not properly authorized. Because premium class travel must first be authorized before it can be justified, transactions that failed authorization also failed justification. We also estimated, based on statistical sampling, that another 38 percent (a total of 67 percent) of premium class travel was not properly justified. As shown in table 2, 28 percent of governmentwide premium class travel was not properly authorized. Authorization failures fell into the following categories: Blanket travel authorizations. According to the FTR, premium class travel has to be specifically authorized. Consequently, blanket premium class authorizations did not pass the specific authorization test. Subordinate authorizations. The FTR does not forbid subordinates from approving their superior's premium class travel. 
However, applying the criteria set forth in our internal control standards and sensitive payments guidelines, premium class transactions that were approved by subordinates reduced scrutiny of premium class travel and amounted to self-approval, and thus failed the control test. No travel authorization. In a number of instances, agencies were not able to provide a travel authorization corresponding to a trip in our sample. Table 2 also shows that an estimated 67 percent of transactions failed the justification test. Because the FTR requires specific authorization for all premium class travel, the 28 percent of transactions that failed the authorization test automatically failed the justification test. Thirty-six additional transactions (38 percent) failed justification, mostly because of improper use of the 14-hour rule. The failures are as follows: In four instances, travelers used the 14-hour rule to justify premium class travel, even though supporting documentation showed that flight time was less than 14 hours. In 29 instances, the traveler had a rest stop en route or a rest period upon arrival at the destination city, upon returning home, or both. Travelers with rest stops en route or at the destination do not qualify for premium class travel. Despite our request, agencies did not provide supporting documentation in these instances to indicate that the travelers reported to work and thus met the 14-hour criterion. In three instances, the premium class flights failed justification because the agency did not provide us with supporting documentation required by the agency's own premium class policy as justification for premium class travel. 
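The governmentwide figures above come from projecting attribute-test failure rates in a statistical sample onto total premium class spending. As a rough illustration of how a lower-bound projection of that kind ("at least $146 million") can be computed, the sketch below uses a one-sided normal approximation with hypothetical sample numbers; it is not GAO's actual sampling methodology.

```python
import math

def lower_bound_improper_dollars(sample_size, failures, population_dollars,
                                 z=1.645):
    """One-sided lower bound (normal approximation, roughly 95 percent
    confidence) on the dollars failing an attribute test, projected
    from the sample failure rate onto the population total."""
    p = failures / sample_size                      # sample failure rate
    se = math.sqrt(p * (1 - p) / sample_size)       # standard error of p
    return population_dollars * max(0.0, p - z * se)

# Illustrative numbers only: a 96-transaction sample with a ~67 percent
# failure rate, projected onto $230 million of premium class spending.
estimate = lower_bound_improper_dollars(96, 64, 230_000_000)
```

Projecting the lower bound of the failure rate, rather than the point estimate, is why such results are stated as "at least" a given dollar amount.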
Our Sensitive Payments Guide states that senior government executives are subject to intense scrutiny in the event of "any impropriety or conflict of interest, real or perceived, regardless of how much money, if any, is involved." According to the guide, travel by high-ranking officials, for example, members of the Senior Executive Service (SES), generals, admirals, and political appointees, is a sensitive payment area because it poses a high risk of impropriety. However, our sample indicated that high-ranking officials, including SES (career senior executives) and presidential appointees, were using premium class travel at a higher rate than other federal employees. These high-ranking officials made up about one-half of 1 percent of the federal workforce yet accounted for 15 percent of premium class travel in our sample population. As stated previously, we consider premium class authorizations that were signed by subordinates to be tantamount to self-authorization. This is particularly true when travel by government executives is authorized by subordinates. Nevertheless, we found that some premium class flights taken by the executives we looked at were approved by subordinates of the travelers. For example, a presidential appointee at the Department of the Treasury (Treasury) took 12 trips during the audit period that were authorized by a subordinate. Our data mining also found instances in which senior executives used mission critical as justification for trips that did not qualify for premium class travel under the 14-hour rule. Often, those trips were authorized by subordinates, and the frequency of such travel indicates that in these cases premium class was used as a perquisite for certain senior executives. For example, a senior executive at USDA took 25 premium class trips totaling $163,000 from July 1, 2005, through September 30, 2006. 
Fifteen of the 25 trips, taken to destinations in Asia and Africa, were justified using the 14-hour rule. The remaining 10 trips, to Western Europe, were justified using mission critical as the criterion. None of this executive's premium class trips were properly authorized because the authorizations were signed by a subordinate. A weak control environment further exacerbated breakdowns in specific controls that led to at least $146 million in estimated improper premium class travel. Many agencies did not capture data related to business class travel and therefore did not know the extent of their premium class travel. Further, some premium class policies and procedures allowed potential abuse of premium class travel. We also found that several government entities, such as USPS, the Federal Deposit Insurance Corporation (FDIC), and FRB, which have their own pay structures and are exempt from the FTR, issued premium class travel guidance that was less restrictive. Consequently, while premium class travel at these agencies was properly authorized and justified according to the agencies' own policies, many of the trips were taken at increased cost to the taxpayers. The FTR requires all executive branch agencies to provide GSA annual reports listing all instances in which the organizations approved the use of first class transportation accommodations, which GSA then forwards to OMB. However, agencies are not required to report on the use of premium class other than first class, despite the fact that business class travel accounted for nearly 96 percent of premium class travel governmentwide during the 12-month period under audit. We also found that OMB, GSA, and many agencies did not collect data on, and therefore were not aware of, the extent of governmentwide use of premium class travel prior to our audit. We found that business class travel is not tracked at the agency level even though it accounts for almost all premium class travel. 
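Because the audit identified premium class tickets as any ticket with at least one first or business class leg, an agency could begin tracking its own business class travel with a simple scan of per-leg booking classes. The record layout below is hypothetical, and the letter codes are common industry booking-class conventions that vary by carrier:

```python
# Hypothetical ticket records: each ticket is a list of per-leg booking
# class codes plus a dollar amount. Booking codes vary by carrier, so a
# real implementation would use each carrier's published cabin mapping.
FIRST = {"F", "A", "P"}
BUSINESS = {"J", "C", "D", "I", "Z"}

def is_premium_ticket(legs):
    """True if any leg was booked in a first or business class of
    service, matching the audit's definition of a premium ticket."""
    return any(code in FIRST or code in BUSINESS for code in legs)

def premium_summary(tickets):
    """tickets: iterable of (legs, dollars) pairs. Returns the count of,
    and dollars spent on, tickets with at least one premium leg."""
    flagged = [dollars for legs, dollars in tickets if is_premium_ticket(legs)]
    return len(flagged), sum(flagged)
```

A summary like this, run against an agency's own charge card data, would supply the business class totals that the text notes neither the agencies nor GSA collected.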
Officials at several agencies we interviewed generally informed us that they did not track, and thus were not aware of, the extent of their business class travel. Agency officials also cited the lack of a business class reporting requirement as one of the reasons why they did not track business class travel. For example, officials at USDA and Treasury were not aware of the extent of business class travel at their respective agencies. Without knowing how much they spend on premium class travel, agencies cannot effectively manage their travel budgets to prudently safeguard taxpayers' dollars. We also found that neither OMB nor GSA obtained the data needed to track premium class travel other than first class governmentwide. As a result, the government did not have adequate data with which to identify the extent of any abusive travel in the federal government. Further, without all premium class data, OMB and GSA did not have the means to determine whether agencies were adhering to OMB's requirement that the taxpayers pay no more than necessary to transport government officials. In response to similar GAO findings and recommendations, in early 2006 State issued a directive requiring officials at the department to track business class travel. GSA officials informed us that they did not know of any legislative impediment to requiring reporting on business class travel, though they expected that the amount of data would be much more extensive than for first class. GSA officials pointed to a decline in the use of first class travel since OMB started requiring reporting for this class of travel. They told us that the scrutiny associated with reporting requirements may have caused some agencies to restrict first class travel. We found that no single agency, neither GSA nor OMB, has central responsibility for oversight of premium class policies across federal agencies. 
Neither GSA nor OMB currently reviews agency policies regarding premium class travel. GSA officials informed us that agencies are expected to manage premium class travel and should be given flexibility to do so. GSA officials saw their role as advisory; that is, they would generally advise, when asked, as to whether a particular agency was required to follow the FTR or whether premium class travel could be authorized in specific situations. Officials at GSA informed us that they did not make determinations as to whether an agency's implementing guidance adhered to the spirit of the FTR. Similarly, OMB does not oversee agencies' implementing guidance on premium class travel. Without central oversight, it is not surprising that we found differing interpretations and implementations of premium class policies governmentwide. Some agencies, such as DOD, have made policy changes designed to limit the use of premium class travel consistent with the spirit of the FTR. Other agencies, however, have implemented the travel regulations in ways that allow more frequent use of premium class travel. For example, FAS's and Treasury's policies allowed employees to use "mission critical" or "exceptional circumstances" as criteria for premium class travel on flights of less than 14 hours. In one instance, in December 2005, a FAS executive traveled from Washington, D.C., to Hong Kong and back in business class, a ticket that cost the government over $6,900. However, 11 other FAS employees traveled in coach class at a cost of less than $1,400 per ticket, despite the flight lasting over 14 hours. Data mining we performed at these agencies found that the mission critical criterion was typically used by senior executives to justify trips of less than 10 hours to Western Europe. Allowing senior officials to define their travel as mission critical can have a substantial effect on overall travel costs. 
A department travel policy allowing officials to justify their travel as mission critical contributed to FAS spending nearly $2 million (about 30 percent) of its total air dollars on premium class tickets, with a large proportion going to fund executive premium class travel. We also found that while the FTR requires a physician's certification for premium class travel based on disability, it did not require annual recertification. Consequently, we found an instance where the doctor's note for a non-life-changing illness was dated 3 years prior to the authorization for premium class travel. The variance in implementing guidance we observed was an important factor in explaining the variances in the use of premium class travel governmentwide, as shown in table 1. Specifically, we found that premium class travel was taken less frequently at agencies where existing policies and procedures emphasized the importance of minimizing excess travel costs. For example, DOD's travel policy states that premium class flights over 14 hours would be approved only if the travel is so urgent that it cannot be postponed or if alternatives do not exist. In contrast, MCC officials informed us that its procedures permitted automatically providing travelers with premium class for trips over 14 hours, without necessarily requiring specific authorization. A comparison of these two agencies' use of premium class travel to the same locations found that MCC travelers flew to these locations in premium class 83 percent of the time, compared to DOD's 3 percent.

Agencies Exempt from the FTR Incurred Costly Premium Class Travel

Our audit of premium class travel by selected agencies that are exempt from the FTR found premium class policies that allowed more permissive use of premium class travel, resulting in higher travel costs to the government. 
For example, we found that some of these agencies' policies allowed business or first class travel for flights of less than 14 hours, and other agencies' policies allowed premium class travel based on an individual's position in the organization. For example: At USPS, members of the Board of Governors are allowed to travel first class whenever they fly. For example, a member of the Board of Governors flew first class from Baltimore to San Francisco and back at a cost of $1,900 when a coach class ticket would have cost $500. USPS also allows all other officers to travel in business class overseas, regardless of the length of the flight. At FRB, all members of the board are allowed to travel business class for all international flights and all domestic flights exceeding 5 hours. In addition, there are limited instances in which FRB permits the use of first class. For example, a member of the Board of Governors of the Federal Reserve System and another FRB employee flew first class from Washington, D.C., to London and back at a cost of $25,000. Comparable business class tickets would have cost $12,000, and coach class tickets would have cost $1,500. At FDIC, employees are allowed to travel premium class for international flights over 6 hours. For example, a deputy director of FDIC flew business class from Washington, D.C., to London and back at a cost of $7,200, while a coach class ticket would have cost $800. To illustrate the effects of control breakdowns, we also data mined premium class travel data provided by the banks. Based on these techniques and our statistical sampling, we found numerous examples of premium class travel without authorization or adequate justification. Further, we used data mining to identify the most frequent users of premium class travel. Our analysis of these cases showed that almost all were senior-level employees whose travel, even when properly authorized, generally was not adequately justified. 
We also identified cases where groups of individuals traveled in premium class together to a single location. However, in the instances we examined, we found no justification showing that all members of the group needed to travel in premium class. Given the high cost of premium class tickets, unnecessary premium class group trips can be very costly to the government.

Examples of Improper and Abusive Use of Premium Class Travel

Table 3 contains specific examples of abusive travel from both our statistical sample and data mining, all of which were unauthorized, unjustified, or both. These cases illustrate the improper and abusive use of premium class travel. Following the table is more detailed information on some of these cases. Traveler #1 is a special agent with State who flew premium class from Washington, D.C., to Sydney, Australia, and back at a cost of more than $12,000, more than five times as much as a comparable coach class ticket costing $2,200. The authorization provided as part of the travel order applied to a different trip. Despite repeated requests, State did not provide us with the proper support for the premium class travel. Consequently, the trip failed both authorization and justification. Traveler #3 is a member of the SES at USDA's FAS, who flew business class from Washington, D.C., to Zurich and back. The total cost of the business class ticket was $7,500, compared to $900 in coach. The travel orders authorizing premium class travel were signed by the traveler's subordinate and thus failed the authorization criterion. Further, despite the flight taking less than 14 hours, including a layover, the traveler used the exceptional circumstances criterion, permitted under FAS policy to enable "a senior policy/program official to more effectively carry out the agency mission involving critical trade negotiations, market development, and sales efforts, or sensitive meetings," to justify the premium class travel. 
However, FAS policy specifically prohibits the use of business class for travel to destinations in Western Europe. In addition, on the return trip, the traveler took a one-night stopover in London on a Saturday after flying in premium class and then proceeded to Washington the next day. Our data mining of premium class travel from July 1, 2005, through September 30, 2006, found additional examples of abusive premium class travel taken by frequent premium class travelers, often executives. As mentioned previously, some trips taken by executives were approved by their subordinates and were therefore improperly authorized. In addition, trips taken by frequent travelers that were unauthorized and unjustified cost the government $100,000 or more per traveler during the 15-month period we audited. More detailed information about some of the cases follows table 4. Traveler #2 was a presidential appointee from Treasury who bought 21 tickets in premium class at a total cost to the government of $129,000. Seven premium class tickets were not specifically authorized, and 12 were authorized by a GS-12 subordinate and were therefore improper. Further, the traveler took three trips in first class despite being specifically authorized only for business class travel. Treasury's implementing guidance provides for premium class travel on trips that are mission critical. We found that on trips of less than 10 hours, the traveler claimed to be preparing briefing materials or reviewing materials en route to justify the use of the mission critical criterion. Traveler #3 bought 15 premium class tickets costing the government over $100,000 from July 1, 2005, through September 30, 2006. According to the travel orders for these trips, the official had a medical condition that justified the majority of the trips. However, the only documentation of the traveler's medical condition was a note signed by a peer of the traveler at DOD. 
According to DOD regulations, flying premium class based on a medical condition requires a physician's certification. However, DOD could not produce a physician's statement documenting the traveler's need to fly premium class. We also found that some agencies' policies and procedures allowed abusive travel by groups of employees, sometimes numbering 20 or more. In particular, we found several instances where groups of employees traveled to overseas destinations to attend meetings, conferences, or trade negotiations. In one instance, a group trip in premium class resulted in about $200,000 in increased costs to the American taxpayers. We also found instances where State and the Department of Justice (Justice) authorized employees and their families to travel premium class in permanent change of station (PCS) moves. As reported previously, while State believed such a practice to be necessary because it improves employee morale, we question the need to provide premium class travel for PCS moves. In particular, we note that although federal and State regulations allow premium class travel if the flight is over 14 hours without a rest stop, DOD has issued regulations prohibiting premium class travel for PCS moves, except for physical handicap or medical reasons. Specifically, DOD determined that premium class travel is permitted for flights over 14 hours only if the Temporary Duty Travel (TDY) purpose/mission is so unexpected and urgent that it cannot be delayed or postponed, and a rest period cannot be scheduled en route or at the TDY site before starting work. This decision is consistent with the prudent traveler principle and with DOD's new guidelines on the 14-hour rule, issued in early 2006. Table 5 contains specific examples of premium class travel by group travelers. More detailed information about some of the cases follows the table. 
Case study 1 relates to a group of 32 agents who took 40 premium class trips from Washington, D.C., to Liberia to provide security protection to a foreign head of state from January 1, 2006, through June 15, 2006. We found five trips that had no authorization for the travelers to fly in premium class and three trips that had duplicate tickets covering at least part of the traveler's itinerary. In addition, we found that 17 travelers arrived back in the United States on a Saturday afternoon after flying in business class. There was no evidence that any of the travelers went to work before taking a weekend rest period. Case study 2 involves a group of 21 travelers from the Office of the United States Trade Representative within the Executive Office of the President. The travelers each flew from Washington, D.C., to Hong Kong in December 2005 in business class to attend a World Trade Organization meeting, at a total cost to the government of nearly $100,000. The travelers ranged in grade from GS-9 to SES. None of the travelers were authorized to fly premium class, and therefore the use of premium class was improper. According to GSA's city pair contract for this itinerary, the tickets would have cost about $31,000 in coach class. As shown in table 5, the cost difference between premium class and coach class travel becomes even more striking as the size of the group approved to travel premium class grows. The high cost of premium class travel by groups necessitates close scrutiny of whether it is truly necessary for the whole group to travel in premium class. Even a mix of premium class and coach class accommodations would represent significant savings to the government compared with everyone traveling premium class. 
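The group-trip arithmetic above can be made concrete. The per-ticket figures below are derived from the Hong Kong example in the text (about $100,000 for 21 business class tickets versus about $31,000 in coach); the mixed scenario, with only three travelers in business class, is purely illustrative:

```python
# Worked arithmetic for a 21-person delegation, using the approximate
# totals given in the text. The 3-business/18-coach mix is hypothetical.
GROUP = 21
business_each = 100_000 / GROUP  # roughly $4,762 per business class ticket
coach_each = 31_000 / GROUP      # roughly $1,476 per coach ticket

def group_cost(n_business, n_coach):
    return n_business * business_each + n_coach * coach_each

all_business = group_cost(21, 0)  # roughly $100,000
mixed = group_cost(3, 18)         # roughly $40,900
savings = all_business - mixed    # roughly $59,000 saved by the mix
```

Even this partial shift to coach recovers most of the difference, which is the point of the mixed-accommodations suggestion above.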
As such, we are referring all cases of improper and abusive travel we identified to the respective agencies' management and inspector general offices for possible administrative action against employees who abuse premium class travel and for repayment of the difference between premium class and coach class fares. With the serious fiscal challenges facing the federal government, agencies must maximize their ability to manage and safeguard valuable taxpayers' dollars. Recognizing the high cost of premium class travel, GSA and federal agencies have issued a series of policies providing that such travel should be taken only as a last resort. However, our audit shows that some federal agencies and other federally related entities did not adhere to this policy. In fact, some entities appeared to provide premium class travel as a perquisite to senior-level executives. Individuals who abuse premium class travel at taxpayers' expense should be held accountable for the taxpayer dollars they waste. We are encouraged that DOD has taken steps to significantly curtail unnecessary use of premium class travel, potentially saving millions of dollars. Going forward, it will be important for other agencies to follow DOD's lead and take steps to restrict the use of premium class travel to truly exceptional circumstances and to strengthen monitoring and oversight of premium class travel as part of an overall effort to reduce improper and abusive premium class travel and related government travel costs. We will also issue separate letters to USDA, MCC, Treasury, and FRB on actions needed to address specific control weaknesses we identified through our audit. We recommend that the Director of OMB: Instruct agencies that premium class travel requests for their senior-level executives must be approved by someone at least at the same level as the traveler or by an office designated to approve premium class travel for all senior-level executives. 
Establish policies and procedures to initially require all federal agencies to collect data on the use of all premium class travel, including business class, and submit the information to GSA annually until a risk-based framework is developed. Using the premium class data collected by GSA, consider developing a risk-based framework containing requirements for: Reporting business class travel to GSA. For example, OMB might want to consider requiring entities to report business class travel only when it exceeds a percentage of total travel. Performing audits of premium class travel programs, including a review of executive travel. We recommend that the Administrator of GSA take the actions necessary to help agencies comply with the FTR governing the use and reporting of premium class travel, including the following: Require that agencies develop and issue internal guidance that explains when mission criteria and the intent of that mission call for premium class accommodations. Require agencies to define what constitutes a rest period upon arrival. Require that the physician's certification related to medical requirements for premium class travel be updated annually unless the physical impairment is a lifelong impairment. Establish an office for travel management within GSA to review agency policies and procedures, identify areas where agency policies and procedures do not adhere to federal regulations, and issue recommendations to agencies to bring their policies and procedures into compliance. Based on the premium class data collected from agencies, determine whether to clarify guidance to authorize premium class travel only when less costly means of transportation are not practical and to limit the use of premium class travel for PCS moves to those instances necessary as a result of physical handicap, medical reasons, or security reasons, or where the trip is taken at no additional cost to the government. 
In written comments on a draft of this report, OMB concurred with our recommendations and stated that it is important to educate and remind federal travelers of the policies and regulations that govern federal travel to ensure that the most economic means of travel are used when conducting the government’s business. OMB stated that it is working with GSA to require that premium class travel for senior-level executives be approved by someone at the same level as the traveler, or by an office specifically authorized to approve premium class travel. OMB further stated that GSA is preparing agency guidance for collecting and reporting premium class travel, and that OMB will begin working with GSA and agencies to develop a risk-based reporting and review framework consistent with Appendix A to OMB Circular A-123. In written comments on a draft of this report, GSA concurred with many of our findings and recommendations and said that it will use a number of the report’s findings to improve the FTR to ensure enhanced accountability and control of the use of premium class travel accommodations by federal employees. GSA said that these improvements will include requiring agencies to designate premium class approving officials, requiring agencies to develop internal definitions of mission critical and rest periods, and requiring physician’s notes to be updated unless the physical impairment is a lifelong impairment. GSA will also be collecting business class travel data from agencies starting in fiscal year 2008. GSA did suggest that one of our recommendations could be addressed in a different way than contemplated in the draft report. GSA pointed out that it does not have clear statutory authority to establish central oversight offices for travel management. To address the intent of the recommendation, GSA informed us that with OMB support, it created the Center for Policy Evaluation and Compliance. 
GSA stated that the Center for Policy Evaluation and Compliance will seek to identify areas within agencies’ policies and procedures that are not consistent with governmentwide standards. The center will be responsible for suggesting improvements to agencies. We have modified the language of this recommendation to adhere to GSA’s current statutory authority and keep the intent of our original recommendation, which is that GSA take a proactive role in reviewing agency policies and procedures for possible discrepancies with the FTR. GSA’s and OMB’s comments are reprinted in appendixes III and IV. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies of this report to the Director of OMB and the Administrator of GSA. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To assess controls over the authorization and issuance of governmentwide premium class travel, we used premium class travel transactions charged to the federal government’s centrally billed and individually billed accounts during the 12-month period ending June 30, 2006. To assess the magnitude of use of premium class travel, we obtained from Bank of America, Citibank, JP Morgan Chase, and U.S. Bank government travel charge card databases that contained travel transactions charged to the federal government for the 12 months ending June 30, 2006. 
The databases contained airline transactions and nonairline transactions charged to both the centrally and individually billed travel card accounts. We queried the databases to identify transactions specifically related to travel. The databases also contained transaction-specific information, including the passenger name, the ticket price, and the fare and service codes used to price the tickets purchased. We identified the fare basis codes that corresponded to the issuance of first, business, and coach class travel. Using these codes, we selected all airline transactions that contained at least one leg in which the federal government paid for premium class travel accommodations. We excluded from our audit premium class travel accommodations obtained as a result of upgrades, as these tickets did not result in costs to the federal government. As you requested, our audit covered premium class usage at executive federal agencies and federally related entities. The population under audit consists of transactions by travelers approved to use the government travel card, except for employees and individuals whose travel was approved by legislative or judicial entities and entities covered by treaty with the U.S. government. Entities included in the audit are executive agencies as described in the Federal Travel Regulation (FTR), including Chief Financial Officers Act agencies, other major executive agencies, independent federally related establishments, and wholly owned government corporations. As further detailed below, we performed statistical sampling on these entities to assess their internal controls and adherence to the FTR. 
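The fare-code screen described above can be sketched as follows. This is an illustrative sketch only: the fare-class letter groupings, field names, and sample records are assumptions for demonstration, not the banks' actual data layout or the exact codes used in the audit.

```python
# Hypothetical fare-class letter groupings -- first letters of fare basis
# codes that commonly indicate first or business class cabins.
FIRST_CLASS = {"F", "A", "P"}
BUSINESS_CLASS = {"J", "C", "D", "Z", "I"}

def ticket_class(fare_basis_codes):
    """Classify a ticket from the fare basis code of each flight leg."""
    legs = [code[0].upper() for code in fare_basis_codes if code]
    if any(c in FIRST_CLASS for c in legs):
        return "first"
    if any(c in BUSINESS_CLASS for c in legs):
        return "business"
    return "coach"

def select_premium(transactions):
    """Keep tickets with at least one premium (first or business) class leg."""
    return [t for t in transactions
            if ticket_class(t["fare_basis_codes"]) in ("first", "business")]

# Hypothetical transaction records (field names are illustrative).
transactions = [
    {"ticket": 1, "fare_basis_codes": ["Y26", "Y26"]},  # all coach legs
    {"ticket": 2, "fare_basis_codes": ["Y26", "C4"]},   # one business leg
    {"ticket": 3, "fare_basis_codes": ["F1"]},          # first class
]
premium = select_premium(transactions)
print([t["ticket"] for t in premium])  # -> [2, 3]
```

Note how a ticket qualifies if any single leg is premium, matching the report's selection of "transactions that contained at least one leg" in premium class.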
However, to determine whether instances of costly premium class travel occurred at other federally related entities, we expanded our data mining to premium class transactions of mixed corporations, such as the Federal Deposit Insurance Corporation, and other establishments specifically exempt from the FTR, such as the United States Postal Service. We tested a statistical sample of premium class transactions to assess whether premium class travel was properly authorized and properly justified, and to project the results of these tests to the population of governmentwide premium class travel. The population from which we selected our transactions for testing was the set of debit transactions for both first and business class travel that were charged during the 12 months ending June 30, 2006. Because our objective was to test controls over travel card expenses, we excluded transactions where half or more of the ticket had been refunded. While these trips may not have been properly authorized or justified, the amounts credited back to the government may have been for the premium class portion of the ticket. We also excluded refunded ticket transactions and miscellaneous debits (such as fees) that would not have been for ticket purchases from the population of transactions we reviewed. We further limited the business class transactions to those costing $750 or more because many intra-European flight business class tickets cost less than $750 and are for flights for which there is only a single premium class cabin. By eliminating from our sample business class transactions less than $750, we avoided the possibility of selecting a large number of transactions in which the difference in cost was not significant enough to raise concerns about the effectiveness of the internal controls. 
While we excluded business class transactions costing less than $750, we (1) did not exclude all intra-European single cabin flights and (2) potentially excluded unauthorized business class flights costing less than $750. Limitations of the database, specifically a lack of visibility between single- and multicabin aircraft, prevented a more precise methodology of excluding lower-cost business class tickets. For security reasons, we did not include in our projection or data mining selections premium class transactions related to agency-identified sensitive assignments and secretive details. To test the implementation of key control activities over the issuance of premium class travel transactions, we selected a random probability sample from the subset of centrally billed and individually billed account transactions containing at least one premium class segment and for which the business class ticket cost at least $750. We initially selected 192 premium class travel transactions. Seventy-nine transactions were excluded because they were out of the scope of the sample. The final sample size of reviewed, in-scope transactions was 96, totaling about $391,000. We overselected initially because of the difficulty of perfectly extracting transactions from all government corporations and establishments that should be excluded from the sample population. For each sample transaction, we requested that the entities provide the travel authorization, travel voucher, travel itinerary, and other related supporting documentation demonstrating justification for premium travel arrangements. We also requested information on the rank or grade of the traveler. Based on the information provided, we assessed whether premium class travel was properly authorized and whether the premium class travel was justified in accordance with the FTR or other applicable travel regulations. 
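The population screens described above (dropping credits, mostly refunded tickets, and business class tickets under $750) can be sketched as a simple filter. The field names and sample records below are hypothetical illustrations; only the dollar threshold and the half-refunded rule come from the report.

```python
def in_sample_population(txn):
    """Return True if a premium class transaction stays in the sample frame.

    Applies the screens described in the methodology: no credits or
    refund entries, no tickets half or more refunded, and no business
    class tickets under $750.
    """
    if txn["amount"] <= 0:
        return False                                  # credits and refunds
    if txn["refunded"] >= 0.5 * txn["amount"]:
        return False                                  # half or more refunded
    if txn["travel_class"] == "business" and txn["amount"] < 750:
        return False                                  # low-cost business fares
    return True

# Hypothetical transactions (amounts in dollars).
txns = [
    {"amount": 4200, "refunded": 0,    "travel_class": "business"},  # kept
    {"amount": 4200, "refunded": 2500, "travel_class": "business"},  # mostly refunded
    {"amount": 600,  "refunded": 0,    "travel_class": "business"},  # under $750
    {"amount": 900,  "refunded": 0,    "travel_class": "first"},     # kept
]
kept = [t for t in txns if in_sample_population(t)]
print(len(kept))  # -> 2
```

Note that the $750 floor applies only to business class; first class tickets stay in the frame at any price, consistent with the report's description.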
If, after repeated requests, the entities did not provide us with the supporting documentation, we concluded that the premium class travel was improper. The results of the samples of these control attributes can be projected to the population of transactions governmentwide, not to any particular individual executive agency, federal corporation, or independent federally related entity. Based on the sampled transactions, we also estimated the percentage of premium class travel taken by federal executives, that is, presidential appointees or members of the Senior Executive Service. With this statistically valid probability sample, each transaction in the population had a probability of being included, and that probability could be computed for any transaction. Each sample element was subsequently weighted in the analysis to account statistically for all the transactions in the population, including those that were not selected. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as 95 percent confidence intervals (i.e., plus or minus 10 percentage points). These are the intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the sample of premium class air travel have sampling errors of plus or minus 10 percentage points or less. In addition to percentage estimates, we also estimate the lower bound for the cost of unauthorized/unjustified premium class travel. 
This lower bound of $146 million is based on the one-sided 95 percent confidence interval for our sample estimate of $167 million spent on unauthorized premium class travel, unjustified premium class travel, or both. So, based on our sample, we are 95 percent confident that the actual amount is at least $146 million. We performed a limited assessment of the control environment over premium class travel by obtaining an understanding of the premium class travel authorization and ticketing process at selected agencies. We interviewed officials from the General Services Administration (GSA), Department of Defense (DOD), Department of State (State), Department of Agriculture, and Millennium Challenge Corporation. We also reviewed applicable policies and procedures and program guidance that they provided. We used as our primary criteria applicable laws and regulations that address governmentwide premium class travel, including the Office of Management and Budget’s (OMB) mandated controls as implemented by GSA’s FTR; DOD’s Joint Federal Travel Regulations and Joint Travel Regulations for uniformed members and civilian personnel, respectively; as well as State’s Foreign Affairs Manual and Foreign Affairs Handbook, which govern travel of U.S. members of the Foreign Service. We also used as criteria our Standards for Internal Control in the Federal Government and our Guide to Evaluating and Testing Controls over Sensitive Payments. Finally, we conducted “walk-throughs” of the travel process at selected agencies and federally related entities. We also interviewed GSA and OMB officials on their oversight of premium class travel. To determine the frequency with which agencies used premium class travel for flights exceeding 14 hours, we identified airport codes in Africa, the Middle East, and far eastern Europe that would necessitate flights of 14 hours or more if traveling from the United States. 
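The two interval estimates described above can be sketched with simple normal-approximation formulas. This is an unweighted sketch: GAO's actual estimates were computed with sample weights, and the implied standard error below is backed out from the report's rounded figures for illustration only.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Two-sided 95% confidence interval for a sample proportion
    (simple unweighted Wald interval)."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

def one_sided_lower_bound(estimate, std_error, z=1.645):
    """One-sided 95% lower confidence bound for an estimated total."""
    return estimate - z * std_error

# 67 percent improper in a sample of 96: the half-width is under
# 10 percentage points, consistent with the report's "plus or minus
# 10 percentage points or less" statement.
lo, hi = proportion_ci(0.67, 96)
print(f"+/- {100 * (hi - 0.67):.1f} points")  # -> +/- 9.4 points

# Backing out the implied standard error from the report's $167 million
# estimate and $146 million lower bound (illustrative only):
implied_se = (167e6 - 146e6) / 1.645
print(f"${one_sided_lower_bound(167e6, implied_se) / 1e6:.0f} million")  # -> $146 million
```

A one-sided bound uses z = 1.645 rather than 1.96 because all 5 percent of the error probability sits on one side, which is why the report can say it is "95 percent confident that the actual amount is at least $146 million."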
We analyzed the banks’ databases to extract flights involving locations in the United States with the selected airports. We then compared the premium class flights to these locations to all flights taken to these locations governmentwide and for selected agencies. In addition to our audit of a governmentwide statistical sample of transactions, we also selected other transactions identified by our data-mining efforts for audit. Our data mining identified additional examples of premium class travel by senior-level executives, individuals who frequently travel using premium class accommodations, and premium trips involving groups with four or more people. For this nonrepresentative data-mining selection, we also requested that the entities provide the travel authorization, travel voucher, travel itinerary, and other related supporting documentation demonstrating justification for premium travel arrangements. If the documentation was not provided, or if it indicated further issues related to the transactions, we obtained and reviewed additional documentation about these transactions. We assessed the reliability of the data provided by the four travel card banks by (1) performing various electronic testing of required data elements, such as transaction amounts and account numbers; (2) reviewing financial statements of the four banks for information about the data and systems that produced them; and (3) interviewing officials knowledgeable about the data at the four banks. In addition, we verified that totals from the databases agreed with the total travel card activity provided to and published in GSA data on travel, in totality and for selected agencies. We determined that data were sufficiently reliable for the purposes of our report. We conducted our audit work from July 2006 through August 2007 in accordance with U.S. 
generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Overall, government travel is managed as part of GSA’s SmartPay program. The SmartPay program began in 1998 as a way to streamline purchasing, as well as to provide an expeditious way to pay for travel expenses. Under this program, banks provide travel cards to government agencies and applicable employees for travel purposes. Travel cards provided directly to the agencies are known as the centrally billed accounts, and are typically used to purchase transportation services such as airline and train tickets, facilitate group travel, and pay for other travel-related expenses. The individually billed accounts, provided directly to individual travelers, are used for lodging, rental cars, and in many agencies for transportation services. Four banks provide travel cards under the SmartPay program: Bank of America, Citibank, JP Morgan Chase, and U.S. Bank. According to GSA data, Bank of America and Citibank handle over 94 percent of SmartPay travel card transactions. In the 12 months ending June 2006, GSA SmartPay travel card purchases totaled about $6.9 billion. Nearly $3.4 billion of the total travel card purchases were for airline travel. Premium class flights accounted for over $230 million, or 7 percent, of the total spent on airline travel. Subsequent to our selection of the statistical sample, the banks provided us with additional data related to premium class travel in the 3 months from July 1, 2006, through September 30, 2006. Our analysis of the additional bank data indicates that premium class travel usage stayed consistent among federal agencies. Table 6 provides information on the premium class travel of selected agencies from July 1, 2005, through June 30, 2006. 
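As a quick consistency check on the spending figures above, a few lines of arithmetic using the report's rounded totals confirm the stated shares:

```python
# Figures from the report (rounded): total SmartPay travel card purchases,
# the airline portion, and premium class flight spending, for the
# 12 months ending June 30, 2006.
total_travel = 6.9e9
airline = 3.4e9
premium = 230e6

print(f"{airline / total_travel:.0%} of travel card purchases were airline")  # -> 49%
print(f"{premium / airline:.0%} of airline spending was premium class")       # -> 7%
```

The 7 percent figure matches the report; note that it is a share of airline spending, not of total travel card purchases (of which premium class is roughly 3 percent).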
In addition to the contact above, Tuyet-Quan Thai, Assistant Director; Beverly Burke; Sunny Chang; Paul Desaulniers; Leslie Jones; John Kelly; Barbara Lewis; Mark Ramage; John Ryan; Lindsay Welter; and Scott Wrightson made key contributions to this report.
Previous GAO work on widespread improper premium class travel at the Department of Defense (DOD) and the Department of State (State) has led to concerns as to whether similar improper travel exists in the rest of the federal government. Consequently, GAO was asked to (1) determine the magnitude of premium class travel governmentwide and the extent to which such travel was improper, (2) identify internal control weaknesses that contributed to improper and abusive premium class travel, and (3) report on specific cases of improper and abusive premium class travel. GAO analyzed bank data and performed statistical sampling to quantify the extent to which premium class travel was improper. GAO also performed data mining, reviewed travel regulations, and interviewed agency officials. Breakdowns in internal controls and a weak control environment resulted in at least $146 million in improper first and business class travel governmentwide. The federal government spent over $230 million on about 53,000 premium class tickets from July 1, 2005, through June 30, 2006. Premium class tickets are costly--for example, a Department of Agriculture (USDA) executive flew business class from Washington, D.C., to Zurich, Switzerland, at a cost of $7,500 compared to $900 for a coach class ticket. Based on statistical sampling, GAO estimated that 67 percent of premium class travel was not properly authorized, justified, or both. While business class travel accounted for 96 percent of all premium class travel, many agencies informed us that they did not track, and thus did not know the extent of, business class travel. OMB and GSA also did not require reporting of business class travel. GAO found large differences in premium class guidance governmentwide, with some agencies issuing less restrictive guidance that was tailored for executive travel. For example, the FTR allows premium class travel for flights over 14 hours if properly authorized. 
However, executives at the Foreign Agricultural Service frequently used "mission critical" to justify flights to Western Europe that typically lasted less than 10 hours. Other agencies, such as State and the Millennium Challenge Corporation (MCC), automatically approved premium class travel for all flights over 14 hours. GAO's analysis of flights lasting 14 hours or more between the United States and destinations in Africa, the Middle East, and parts of Europe showed that 72 and 83 percent, respectively, of State's and MCC's flights involving these locations were in premium class. In contrast, 3 percent of all DOD's and the Department of Homeland Security's flights to the same locations were in premium class. GAO also identified specific cases of improper and abusive use of premium class travel, including at entities not subject to the Federal Travel Regulation whose policies resulted in the purchase of costly premium class travel.
DOD and OPM’s proposed NSPS regulations would establish a new human resources management system within DOD that governs basic pay, staffing, classification, performance management, labor relations, adverse actions, and employee appeals. We believe that many of the basic principles underlying the proposed DOD regulations are generally consistent with proven approaches to strategic human capital management. Today, I will provide our preliminary observations on selected elements of the proposed regulations in the areas of pay and performance management, staffing and employment, workforce shaping, adverse actions and appeals, and labor-management relations. In January 2004, we released a report on pay for performance for selected OPM personnel demonstration projects that shows the variety of approaches taken in these projects to design and implement pay-for-performance systems. Many of these personnel demonstration projects were conducted within DOD. The experiences of these demonstration projects provide insights into how some organizations in the federal government are implementing pay for performance, and thus can guide DOD as it develops and implements its own approach. These demonstration projects illustrate that understanding how to link pay to performance is very much a work in progress in the federal government and that additional work is needed to ensure that performance management systems are tools to help agencies manage on a day-to-day basis and achieve external results. When DOD first proposed its new civilian personnel reform, we strongly supported the need to expand pay for performance in the federal government. Establishing a clear link between individual pay and performance is essential for maximizing performance and ensuring the accountability of the federal government to the American people. 
As we have stated before, how pay for performance is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. DOD’s proposed regulations reflect a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and better link pay to individual and organizational performance. To this end, the DOD proposal takes another valuable step toward a modern performance management system as well as a market-based, results-oriented compensation system. My comments on specific provisions of pay and performance management follow. Under the proposed regulations, the DOD performance management system would, among other things, align individual performance expectations with the department’s overall mission and strategic goals, organizational program and policy objectives, annual performance plans, and other measures of performance. However, the proposed regulations do not detail how to achieve such an alignment, which is a vital issue that will need to be addressed as DOD’s efforts in designing and implementing a new personnel system move forward. Our work on public sector performance management efforts in the United States and abroad has underscored the importance of aligning daily operations and activities with organizational results. We have found that organizations often struggle with clearly understanding how what they do on a day-to-day basis contributes to overall organizational results, while high-performing organizations demonstrate their understanding of how the products and services they deliver contribute to results by aligning the performance expectations of top leadership with the organization’s goals and then cascading those expectations to lower levels. A performance management system is critical to successful organizational transformation. 
As an organization undergoing transformation, DOD can use its proposed performance management system as a vital tool for aligning the organization with desired results and creating a “line of sight” to show how team, unit, and individual performance can contribute to overall organizational results. To help federal agencies transform their culture to be more results oriented, customer focused, and collaborative in nature, we have reported on how a performance management system that defines responsibility and ensures accountability for change can be key to a successful merger and transformation. Under the proposed regulations, DOD would create pay bands for most of its civilian workforce that would replace the 15-grade General Schedule (GS) system now in place for most civil service employees. Specifically, DOD (in coordination with OPM) would establish broad occupational career groups by grouping occupations and positions that are similar in type of work, mission, developmental or career paths, and competencies. Within career groups, DOD would establish pay bands. The proposed regulations do not provide details on the number of career groups or the number of pay bands per career group. The regulations also do not provide details on the criteria that DOD will use to promote individuals from one band to another. These important issues will need to be addressed as DOD moves forward. Pay banding and movement to broader occupational career groups can both facilitate DOD’s movement to a pay-for-performance system and help DOD better define career groups, which in turn can improve the hiring process. In our prior work, we have reported that the current GS system, as defined in the Classification Act of 1949, is a key barrier to comprehensive human capital reform and that the creation of broader occupational job clusters and pay bands would aid other agencies as they seek to modernize their personnel systems. 
The standards and process of the current classification system are key problems in federal hiring efforts because they are outdated and thus not applicable to today’s occupations and work. Under the proposed regulations, DOD could not reduce employees’ basic rates of pay when converting to pay bands. In addition, the proposed regulations would allow DOD to establish a “control point” within a band that limits increases in the rate of basic pay and may require certain criteria to be met for increases above the control point. The use of control points to manage employees’ progression through the bands can help to ensure that their performance coincides with their salaries and that only the highest performers move into the upper half of the pay band, thereby controlling salary costs. The OPM personnel demonstration projects at China Lake and the Naval Sea Systems Command Warfare Center’s Dahlgren Division have incorporated checkpoints or “speed bumps” in their pay bands. For example, when an employee’s salary at China Lake reaches the midpoint of the pay band, the employee must receive a performance rating that is equivalent to exceeding expectations before he or she can receive additional salary increases. Under the proposed regulations, DOD’s performance management system would promote individual accountability by setting performance expectations and communicating them to employees, holding employees responsible for accomplishing them, and making supervisors and managers responsible for effectively managing the performance of employees under their supervision. While supervisors are supposed to involve employees, insofar as practicable, in setting performance expectations, the final decisions regarding performance expectations are within the sole and exclusive discretion of management. Under the proposed regulations, performance expectations may take several different forms. 
These include, among others, goals or objectives that set general or specific performance targets at the individual, team, or organizational level; a particular work assignment, including characteristics such as quality, quantity, accuracy, or timeliness; core competencies that an employee is expected to demonstrate on the job; or the contributions that an employee is expected to make. As DOD’s human resources management system design efforts move forward, DOD will need to define, in more detail than is currently provided, how performance expectations will be set, including the degree to which DOD components, managers, and supervisors will have flexibility in setting those expectations. The range of expectations that DOD would consider in setting individual employee performance expectations are generally consistent with those used by high-performing organizations. DOD appropriately recognizes that given the vast diversity of work done in the department, managers and employees need flexibility in crafting specific expectations. However, the experiences of high-performing organizations suggest that DOD should require the use of core competencies as a central feature of its performance management effort. Based on our review of other agency efforts and our own experience at GAO, we have found that core competencies can help reinforce employee behaviors and actions that support the department’s mission, goals, and values, and can provide a consistent message to employees about how they are expected to achieve results. By including such competencies as change management, cultural sensitivity, teamwork and collaboration, and information sharing, DOD would create a shared responsibility for organizational success and help ensure accountability for the transformation process. High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. 
These organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. DOD’s proposed regulations state that supervisors and managers would be held accountable for making meaningful distinctions among employees based on performance and contribution, fostering and rewarding excellent performance, and addressing poor performance. Under the proposed regulations, DOD is expected to have at least three rating levels for evaluating employee performance. We urge DOD to consider using at least four summary rating levels to allow for greater performance-rating and pay differentiation. This approach is in the spirit of the new governmentwide performance-based pay system for the Senior Executive Service (SES), which requires at least four rating levels to provide a clear and direct link between SES performance and pay as well as to make meaningful distinctions based on relative performance. Cascading this approach to other levels of employees can help DOD recognize and reward employee contributions and achieve the highest levels of individual performance. Although DOD’s proposed regulations provide for some safeguards to ensure fairness and guard against abuse, additional safeguards should be developed. For example, as required by the authorizing legislation, the proposed regulations indicate that DOD’s performance management system must comply with merit system principles and avoid prohibited personnel practices; provide a means for employee involvement in the design and implementation of the system; and, overall, be fair, credible, and transparent. 
However, the proposed regulations do not offer details on how DOD would (1) promote consistency and provide general oversight of the performance management system to help ensure it is administered in a fair, credible, and transparent manner, and (2) incorporate predecisional internal safeguards that are implemented to help achieve consistency and equity, and ensure nondiscrimination and nonpoliticization of the performance management process. Last month, during testimony, we stated that additional flexibility should have adequate safeguards, including a reasonable degree of transparency with regard to the results of key decisions, whether it be pay, promotions, or other types of actions, while protecting personal privacy. We also suggested that there should be both informal and formal appeal mechanisms within and outside of the organization if individuals feel that there has been abuse or a violation of the policies, procedures, or protected rights of the individual. Internal mechanisms could include independent human capital office and office of opportunity and inclusiveness reviews that provide reasonable assurances that there would be consistency and nondiscrimination. Furthermore, it is of critical importance that the external appeal process be independent, efficient, effective, and credible. In April 2003, when commenting on DOD civilian personnel reforms, we testified that Congress should consider establishing statutory standards that an agency must have in place before it can implement a more performance-based pay program, and we developed an initial list of possible safeguards to help ensure that pay-for-performance systems in the government are fair, effective, and credible. For example, we have noted that agencies need to ensure reasonable transparency and provide appropriate accountability mechanisms in connection with the results of the performance management process. 
This can be done by publishing the overall results of performance management and individual pay decisions while protecting individual confidentiality and by reporting periodically on internal assessments and employee survey results relating to the performance management system. DOD needs to commit itself to publishing the results of performance management decisions. By publishing the results in a manner that protects individual confidentiality, DOD could provide employees with the information they need to better understand their performance and the performance management system. Several of the demonstration projects have been publishing information about performance appraisal and pay decisions, such as the average performance rating, the average pay increase, and the average award for the organization and for each individual unit, on internal Web sites for use by employees. As DOD's human resources management system design efforts move forward, DOD will need to define, in more detail than is currently provided, how it plans to review such matters as the establishment and implementation of the performance appraisal system (and, subsequently, performance rating decisions, pay determinations, and promotion actions) before these actions are finalized, to ensure they are merit based. The authorizing legislation allows DOD to implement additional hiring flexibilities that would allow it to (1) determine that there is a severe shortage of candidates or a critical hiring need and (2) use direct-hire procedures for these positions. Under current law, OPM, rather than the agency, determines whether there is a severe shortage of candidates or a critical hiring need. Under DOD's authorizing legislation, the department need merely document the basis for the severe shortage or critical hiring need and then notify OPM of these direct-hire determinations. 
Direct-hire authority allows an agency to appoint people to positions without adherence to certain competitive examination requirements (such as applying veterans’ preference or numerically rating and ranking candidates based on their experience, training, and education) when there is a severe shortage of qualified candidates or a critical hiring need. In the section containing DOD’s proposed hiring flexibilities, the proposed regulations state that the department will adhere to veterans’ preference principles as well as comply with merit principles and the Title 5 provision dealing with prohibited personnel practices. While we strongly endorse providing agencies with additional tools and flexibilities to attract and retain needed talent, additional analysis may be needed to ensure that any new hiring authorities are consistent with a focus on the protection of employee rights, on merit principles—and on results. Hiring flexibilities alone will not enable federal agencies to bring on board the personnel that are needed to accomplish their missions. Agencies must first conduct gap analyses of the critical skills and competencies needed in their workforces now and in the future, or they may not be able to effectively design strategies to hire, develop, and retain the best possible workforces. The proposed regulations would allow DOD to reduce, realign, and reorganize the department’s workforce through revised RIF procedures. For example, employees would be placed on a retention list in the following order: tenure group (i.e., permanent or temporary appointment), veterans’ preference eligibility (disabled veterans will be given additional priority), level of performance, and length of service; under current regulations, length of service is considered ahead of performance. We have previously testified, prior to the enactment of NSPS, in support of revised RIF procedures that would require much greater consideration of an employee’s performance. 
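The proposed retention ordering is, in effect, a multi-key sort over the factors listed above: tenure group first, then veterans' preference (with disabled veterans given additional priority), then level of performance, and length of service last. As an illustrative sketch only, the logic can be expressed as a tuple-keyed sort; the `Employee` fields and numeric ratings here are hypothetical simplifications for illustration, not the actual RIF procedure:

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    permanent: bool          # tenure group: permanent before temporary appointment
    disabled_veteran: bool   # disabled veterans receive additional priority
    veteran: bool            # veterans' preference eligibility
    rating: int              # level of performance (higher is better)
    service_years: float     # length of service, considered last

def retention_order(employees):
    # Sort by the factors in the order the proposed regulations list them.
    # Python compares the key tuples element by element, so earlier
    # factors dominate later ones.
    return sorted(
        employees,
        key=lambda e: (e.permanent, e.disabled_veteran, e.veteran,
                       e.rating, e.service_years),
        reverse=True,
    )
```

Under the current regulations, by contrast, length of service would come ahead of the performance rating in the key tuple; moving performance ahead of service is the change the proposed regulations would make.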
Although we support greater consideration of an employee's performance in RIF procedures, agencies must have modern, effective, and credible performance management systems in place to properly implement such authorities. An agency's approach to workforce shaping should be oriented toward strategically reducing, realigning, and reorganizing the makeup of its workforce to ensure the orderly transfer of institutional knowledge and achieve mission results. DOD's proposed regulations include some changes that would allow the department to rightsize the workforce more carefully through greater precision in defining competitive areas, and by reducing the disruption associated with RIF orders as their impact ripples through an organization. For example, under the current regulations, the minimum RIF competitive area is broadly defined as an organization under separate administration in a local commuting area. Under the proposed regulations, DOD would be able to establish a minimum RIF competitive area on a more targeted basis, using one or more of the following factors: geographical location, line of business, product line, organizational unit, and funding line. The proposed regulations also provide DOD with the flexibility to develop additional competitive groupings on the basis of career group, occupational series or specialty, and pay band. At present, DOD can use competitive groups based on employees (1) in the excepted and competitive service, (2) under different excepted service appointment authorities, (3) with different work schedules, (4) under different pay schedules, or (5) in trainee status. These reforms could help DOD approach rightsizing more carefully; however, as I have stated, agencies first need to identify the critical skills and competencies needed in their workforce if they are to effectively implement their new human capital flexibilities. 
As with DHS’s final regulations, DOD’s proposed regulations are intended to streamline the rules and procedures for taking adverse actions, while ensuring that employees receive due process and fair treatment. The proposed regulations establish a single process for both performance-based and conduct-based actions, and shorten the adverse action process by removing the requirement for a performance improvement plan. In addition, the proposed regulations streamline the appeals process at the Merit Systems Protection Board (MSPB) by shortening the time for filing and processing appeals. Similar to DHS, DOD’s proposed regulations also adopt a higher standard of proof for adverse actions in DOD, requiring the department to meet a “preponderance of the evidence” standard in place of the current “substantial evidence” standard. For performance issues, while this higher standard of evidence means that DOD would face a greater burden of proof than most agencies to pursue these actions, DOD managers are not required to provide employees with performance improvement periods, as is the case for other federal employees. For conduct issues, DOD would face the same burden of proof as most agencies. DOD’s proposed regulations generally preserve the employee’s basic right to appeal decisions to an independent body—the MSPB. However, in contrast to DHS’s final regulations, DOD’s proposed regulations permit an internal DOD review of the initial decisions issued by MSPB adjudicating officials. Under this internal review, DOD can modify or reverse an initial decision or remand the matter back to the adjudicating official for further consideration. Unlike other criteria for review of initial decisions, DOD can modify or reverse an initial MSPB adjudicating official’s decision where the department determines that the decision has a direct and substantial adverse impact on the department’s national security mission. 
According to DOD, the department needs the authority to review initial MSPB decisions and correct such decisions as appropriate, to ensure that the MSPB interprets NSPS and the proposed regulations in a way that recognizes the critical mission of the department and to ensure that MSPB gives proper deference to such interpretation. However, the proposed regulations do not offer additional details on the department’s internal review process, such as how the review will be conducted and who will conduct it. An internal agency review process this important should be addressed in the regulations rather than in an implementing directive to ensure adequate transparency and employee confidence in the process. Similar to DHS’s final regulations, DOD’s proposed regulations would shorten the notification period before an adverse action can become effective and provide an accelerated MSPB adjudication process. In addition, MSPB would no longer be able to modify a penalty for an adverse action that is imposed on an employee by DOD unless such penalty is so disproportionate to the basis of the action as to be “wholly without justification.” In other words, MSPB has less latitude to modify agency-imposed penalties than under current practice. The DOD proposed regulations also stipulate that MSPB could no longer require that parties enter into settlement discussions, although either party may propose doing so. DOD, like DHS, expressed concerns that settlement should be a completely voluntary decision made by parties on their own initiative. However, settling cases has been an important tool in the past at MSPB, and promotion of settlement at this stage should be encouraged. Similar to DHS’s final regulations, DOD’s proposed regulations would permit the Secretary of Defense to identify specific offenses for which removal is mandatory. Employees alleged to have committed these offenses may receive a written notice only after the Secretary of Defense’s review and approval. 
These employees will have the same right to a review by an MSPB adjudicating official as is provided to other employees against whom appealable adverse actions are taken. DOD’s proposed regulations only indicate that its employees will be made aware of the mandatory removal offenses. In contrast, the final DHS regulations explicitly provide for publishing a list of the mandatory removal offenses in the Federal Register. We believe that the process for determining and communicating which types of offenses require mandatory removal should be explicit and transparent and involve relevant congressional stakeholders, employees, and employee representatives. Moreover, we suggest that DOD exercise caution when identifying specific removable offenses and the specific punishment. When developing these proposed regulations, DOD should learn from the experience of the Internal Revenue Service’s (IRS) implementation of its mandatory removal provisions. (IRS employees feared that they would be falsely accused by taxpayers and investigated, and had little confidence that they would not be disciplined for making an honest mistake.) We reported that IRS officials believed this provision had a negative impact on employee morale and effectiveness and had a “chilling” effect on IRS frontline enforcement employees, who were afraid to take certain appropriate enforcement actions. Careful drafting of each removable offense is critical to ensure that the provision does not have unintended consequences. DOD’s proposed regulations also would encourage the use of alternative dispute resolution and provide that this approach be subject to collective bargaining to the extent permitted by the proposed labor relations regulations. To resolve disputes in a more efficient, timely, and less adversarial manner, federal agencies have been expanding their human capital programs to include alternative dispute resolution approaches. 
These approaches include mediation, dispute resolution boards, and ombudsmen. Ombudsmen typically are used to provide an informal alternative to addressing conflicts. We previously reported on common approaches used in ombudsmen offices, including (1) broad responsibility and authority to address almost any workplace issue, (2) their ability to bring systemic issues to management’s attention, and (3) the manner in which they work with other agency offices in providing assistance to employees. The DOD proposed regulations recognize the right of employees to organize and bargain collectively. However, similar to DHS’s final regulations, the proposed regulations would reduce the scope of bargaining by (1) removing the requirement to bargain on matters traditionally referred to as “impact and implementation” (which include the processes used to deploy personnel, assign work, and use technology) and (2) narrowing the scope of issues subject to collective bargaining. A National Security Labor Relations Board would be created that would largely replace the Federal Labor Relations Authority. The proposed board would have at least three members selected by the Secretary of Defense, with one member selected from a list developed in consultation with the Director of OPM. The proposed board would be similar to the internal Homeland Security Labor Relations Board established by the DHS final regulations, except that the Secretary of Defense would not be required to consult with the employee representatives in selecting its members. The proposed board would be responsible for resolving matters related to negotiation disputes, to include the scope of bargaining and the obligation to bargain in good faith, resolving impasses, and questions regarding national consultation rights. Under the proposed regulations, the Secretary of Defense is authorized to appoint and remove individuals who serve on the board. 
Similar to DHS’s final regulations establishing the Homeland Security Labor Relations Board, DOD’s proposed regulations provide for board member qualification requirements, which emphasize integrity and impartiality. DOD’s proposed regulations, however, do not provide an avenue for any employee representative input into the appointment of board members. DHS regulations do so by requiring that for the appointment of two board members, the Secretary of Homeland Security must consider candidates submitted by labor organizations. Employee perception concerning the independence of this board is critical to the resolution of issues raised over labor relations policies and disputes. Our previous work on individual agencies’ human capital systems has not directly addressed the scope of specific issues that should or should not be subject to collective bargaining and negotiations. At a forum we co-hosted in April 2004 exploring the concept of a governmentwide framework for human capital reform, participants generally agreed that the ability to organize, bargain collectively, and participate in labor organizations is an important principle to be retained in any framework for reform. It also was suggested at the forum that unions must be both willing and able to actively collaborate and coordinate with management if unions are to be effective representatives of their members and real participants in any human capital reform. Once DOD issues its final regulations for its human resources management system, the department will face multiple implementation challenges that include establishing an overall communications strategy, providing adequate resources for the implementation of the new system, involving employees in designing the system, and evaluating DOD’s new human resources management system after it has been implemented. 
For information on related human capital issues that could potentially affect the implementation of NSPS, see the “Highlights” pages from previous GAO products on DOD civilian personnel issues in appendix I. A significant challenge for DOD is to ensure an effective and ongoing two-way communications strategy, given its size, geographically and culturally diverse audiences, and different command structures across DOD organizations. We have reported that a communications strategy that creates shared expectations about, and reports related progress on, the implementation of the new system is a key practice of a change management initiative. This communications strategy must involve a number of key players, including the Secretary of Defense, and a variety of communication means and mediums. DOD acknowledges that a comprehensive outreach and communications strategy is essential for designing and implementing its new human resources management system, but the proposed regulations do not identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. Because the NSPS design process and proposed regulations have received considerable attention, we believe one of the most relevant implementation steps is for DOD to enhance two-way communication between employees, employee representatives, and management. Communication is not only about “pushing the message out,” but also using two-way communication to build effective internal and external partnerships that are vital to the success of any organization. By providing employees with opportunities to communicate concerns and experiences about any change management initiative, management allows employees to feel that their input is acknowledged and important. As it makes plans for implementing NSPS, DOD should facilitate an honest two-way exchange with, and allow for feedback from, employees and other stakeholders. 
Once it receives this feedback, management needs to consider and use this solicited employee feedback to make any appropriate changes to its implementation. In addition, management needs to close the loop by providing employees with information on why key recommendations were not adopted. Experience has shown that additional resources are necessary to ensure sufficient planning, implementation, training, and evaluation for human capital reform. According to DOD, the implementation of NSPS will result in costs for, among other things, developing and delivering training, modifying automated human resources information systems, and starting up and sustaining the National Security Labor Relations Board. We have found that, based on the data provided by selected OPM personnel demonstration projects, the major cost drivers in implementing pay-for-performance systems are the direct costs associated with salaries and training. DOD estimates that the overall cost associated with implementing NSPS will be approximately $158 million through fiscal year 2008. According to DOD, it has not completed an implementation plan for NSPS, including an information technology plan and a training plan; thus, the full extent of the resources needed to implement NSPS may not be well understood at this time. According to OPM, the increased costs of implementing alternative personnel systems should be acknowledged and budgeted up front. Certain costs, such as those for initial training on the new system, are one-time in nature and should not be built into the base of DOD’s budget. Other costs, such as employees’ salaries, are recurring and thus would be built into the base of DOD’s budget for future years. Therefore, funding for NSPS will warrant close scrutiny by Congress as DOD’s implementation plan evolves. The proposed regulations do not identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. 
However, DOD’s proposed regulations do provide for continuing collaboration with employee representatives. According to DOD, almost two-thirds of its 700,000 civilian employees are represented by 41 different labor unions, including over 1,500 separate bargaining units. In contrast, according to OPM, just under one-third of DHS’s 110,000 federal employees are represented by 16 different labor unions, including 75 separate bargaining units. Similar to DHS’s final regulations, DOD’s proposed regulations about the collaboration process, among other things, would permit the Secretary of Defense to determine (1) the number of employee representatives allowed to engage in the collaboration process, and (2) the extent to which employee representatives are given an opportunity to discuss their views with and submit written comments to DOD officials. In addition, DOD’s proposed regulations indicate that nothing in the continuing collaboration process will affect the right of the Secretary of Defense to determine the content of implementing guidance and to make this guidance effective at any time. DOD’s proposed regulations also will give designated employee representatives an opportunity to be briefed and to comment on the design and results of the new system’s implementation. DHS’s final regulations, however, provide for more extensive involvement of employee representatives. For example, DHS’s final regulations provide for the involvement of employee representatives in identifying the scope, objectives, and methodology to be used in evaluating the new DHS system. The active involvement of employees and employee representatives will be critical to the success of NSPS. We have reported that the involvement of employees and employee representatives both directly and indirectly is crucial to the success of new initiatives, including implementing a pay-for-performance system. 
High-performing organizations have found that actively involving employees and stakeholders, such as unions or other employee associations, when developing results-oriented performance management systems helps improve employees’ confidence and belief in the fairness of the system and increases their understanding and ownership of organizational goals and objectives. This involvement must be early, active, and continuing if employees are to gain a sense of understanding and ownership of the changes that are being made. The 30-day public comment period on the proposed regulations ended March 16, 2005. DOD and OPM notified the Congress that they are preparing to begin the meet and confer process with employee representatives who provided comments on the proposed regulations. Last month, during testimony, we stated that DOD is at the beginning of a long road, and the meet and confer process has to be meaningful and is critically important because there are many details of the proposed regulations that have not been defined. These details do matter, and how they are defined can have a direct bearing on whether or not the ultimate new human resources management system is both reasoned and reasonable. Evaluating the impact of NSPS will be an ongoing challenge for DOD. This is especially important because DOD’s proposed regulations would give managers more authority and responsibility for managing the new human resources management system. High-performing organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the work environment. Collecting and analyzing data will be the fundamental building block for measuring the effectiveness of these approaches in support of the mission and goals of the department. DOD’s proposed regulations indicate that DOD will establish procedures for evaluating the regulations and their implementation. 
We believe that DOD should consider conducting evaluations that are broadly modeled on the evaluation requirements of the OPM demonstration projects. Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, cost and benefits, impacts on veterans and other equal employment opportunity groups, adherence to merit system principles, and the extent to which the lessons from the project can be applied governmentwide. A set of balanced measures addressing a range of results, and customer, employee, and external partner issues may also prove beneficial. An evaluation such as this would facilitate congressional oversight; allow for any midcourse corrections; assist DOD in benchmarking its progress with other efforts; and provide for documenting best practices and sharing lessons learned with employees, stakeholders, other federal agencies, and the public. We have work under way to assess DOD’s efforts to design its new human resources management system, including further details on some of the significant challenges, and we expect to issue a report on the results of our work sometime this summer. As we testified previously on the DOD and DHS civilian personnel reforms, an agency should have to demonstrate that it has a modern, effective, credible, and, as appropriate, validated performance management system in place with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure fairness and prevent politicization of the system and abuse of employees before any related flexibilities are operationalized. DOD’s proposed NSPS regulations take a valuable step toward a modern performance management system as well as a more market-based, results-oriented compensation system. 
DOD’s proposed performance management system is intended to align individual performance and pay with the department’s critical mission requirements; hold employees responsible for accomplishing performance expectations; and provide meaningful distinctions in performance. However, the experiences of high-performing organizations suggest that DOD should require core competencies in its performance management system. The core competencies can serve to reinforce employee behaviors and actions that support the DOD mission, goals, and values and to set expectations for individuals’ roles in DOD’s transformation, creating a shared responsibility for organizational success and ensuring accountability for change. DOD’s overall effort to design and implement a strategic human resources management system, along with the similar effort of DHS, can be particularly instructive for future human capital management, reorganization, and transformation efforts in other federal agencies. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information, please contact Derek B. Stewart, Director, Defense Capabilities and Management, at (202) 512-5559 or [email protected]. For further information on governmentwide human capital issues, please contact Eileen R. Larence, Director, Strategic Issues, at (202) 512-6512 or [email protected]. Major contributors to this testimony include Sandra F. Bell, Renee S. Brown, K. Scott Derrick, William J. Doherty, Clifton G. Douglas, Jr., Barbara L. Joyce, Julia C. Matta, Mark A. Pross, William J. Rigazio, John S. Townes, and Susan K. Woodward. Highlights of GAO-04-753, a report to the Ranking Minority Member, Subcommittee on Readiness, Committee on Armed Services, House of Representatives During its downsizing in the early 1990s, the Department of Defense (DOD) did not focus on strategically reshaping its civilian workforce. 
GAO was asked to address DOD’s efforts to strategically plan for its future civilian workforce at the Office of the Secretary of Defense (OSD), the military services’ headquarters, and the Defense Logistics Agency (DLA). Specifically, GAO determined: (1) the extent to which civilian strategic workforce plans have been developed and implemented to address future civilian workforce requirements, and (2) the major challenges affecting the development and implementation of these plans. OSD, the service headquarters, and DLA have recently taken steps to develop and implement civilian strategic workforce plans to address future civilian workforce needs, but these plans generally lack some key elements essential to successful workforce planning. As a result, OSD, the military services’ headquarters, and DLA—herein referred to as DOD and the components—do not have comprehensive strategic workforce plans to guide their human capital efforts. None of the plans included analyses of the gaps between critical skills and competencies (a set of behaviors that are critical to work accomplishment) currently needed by the workforce and those that will be needed in the future. Without including gap analyses, DOD and the components may not be able to effectively design strategies to hire, develop, and retain the best possible workforce. Furthermore, none of the plans contained results-oriented performance measures that could provide the data necessary to assess the outcomes of civilian human capital initiatives. GAO recommends that DOD and the components include certain key elements in their civilian strategic workforce plans to guide their human capital efforts. DOD concurred with one of our recommendations, and partially concurred with two others because it believes that the department has undertaken analyses of critical skills gaps and is using strategies and personnel flexibilities to fill identified skills gaps. 
We cannot verify DOD’s statement because DOD was unable to provide the gap analyses. In addition, we found that the strategies being used by the department have not been derived from analyses of gaps between the current and future critical skills and competencies needed by the workforce. The major challenge that DOD and most of the components face in their efforts to develop and implement strategic workforce plans is their need for information on current competencies and those that will likely be needed in the future. This problem results from DOD’s and the components’ not having developed tools to collect, store, and manage data on workforce competencies. Without this information, it is not clear whether they are designing and funding workforce strategies that will effectively shape their civilian workforces with the appropriate competencies needed to accomplish future DOD missions. Senior department and component officials all acknowledged this shortfall and told us that they are taking steps to address this challenge. Though these are steps in the right direction, the lack of information on current competencies and future needs is a continuing problem that several organizations, including GAO, have previously identified. www.gao.gov/cgi-bin/getrpt?GAO-04-753. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or [email protected]. Highlights of GAO-03-851T, testimony before the Committee on Governmental Affairs, United States Senate People are at the heart of an organization’s ability to perform its mission. Yet a key challenge for the Department of Defense (DOD), as for many federal agencies, is to strategically manage its human capital. DOD’s proposed National Security Personnel System would provide for wide-ranging changes in DOD’s civilian personnel pay and performance management and other human capital areas. 
Given the massive size of DOD, the proposal has important precedent- setting implications for federal human capital management. GAO strongly supports the need for government transformation and the concept of modernizing federal human capital policies both within DOD and for the federal government at large. The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of today’s rapidly changing and knowledge-based environment. The human capital authorities being considered for DOD have far-reaching implications for the way DOD is managed as well as significant precedent-setting implications for the rest of the federal government. GAO is pleased that as the Congress has reviewed DOD’s legislative proposal it has added a number of important safeguards, including many along the lines GAO has been suggesting, that will help DOD maximize its chances of success in addressing its human capital challenges and minimize the risk of failure. This testimony provides GAO’s observations on DOD human capital reform proposals and the need for governmentwide reform. More generally, GAO believes that agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to a particular agency (e.g., military personnel reforms for DOD). Several of the proposed DOD reforms meet this test. In GAO’s view, the relevant sections of the House’s version of the National Defense Authorization Act for Fiscal Year 2004 and the proposal that is being considered as part of this hearing contain a number of important improvements over the initial DOD legislative proposal. www.gao.gov/cgi-bin/getrpt?GAO-03-851T. To view the full testimony, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or [email protected]. 
Moving forward, GAO believes it would be preferable to employ a governmentwide approach to address human capital issues and the need for certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and the Office of Personnel Management, in particular. GAO believes that several of the reforms that DOD is proposing fall into this category (e.g., broad banding, pay for performance, re-employment and pension offset waivers). In these situations, GAO believes it would be both prudent and preferable for the Congress to provide such authorities governmentwide and ensure that appropriate performance management systems and safeguards are in place before the new authorities are implemented by the respective agency. Importantly, employing this approach is not intended to delay action on DOD’s or any other individual agency’s efforts, but rather to accelerate needed human capital reform throughout the federal government in a manner that ensures reasonable consistency on key principles within the overall civilian workforce. This approach also would help to maintain a level playing field among federal agencies in competing for talent and would help avoid further fragmentation within the civil service. People are at the heart of an organization’s ability to perform its mission. Yet, a key challenge for the Department of Defense (DOD), as for many federal agencies, is to strategically manage its human capital. With about 700,000 civilian employees on its payroll, DOD is the second largest federal employer of civilians in the nation. Although downsized 38 percent between fiscal years 1989 and 2002, this workforce has taken on greater roles as a result of DOD’s restructuring and transformation. 
DOD’s proposed National Security Personnel System (NSPS) would provide for wide-ranging changes in DOD’s civilian personnel pay and performance management, collective bargaining, rightsizing, and other human capital areas. The NSPS would enable DOD to develop and implement a consistent DOD-wide civilian personnel system. Given the massive size of DOD, the proposal has important precedent-setting implications for federal human capital management and OPM. DOD’s lack of attention to force shaping during its downsizing in the early 1990s has resulted in a workforce that is not balanced by age or experience and that puts at risk the orderly transfer of institutional knowledge. Human capital challenges are severe in certain areas. For example, DOD has downsized its acquisition workforce by almost half. More than 50 percent of the workforce will be eligible to retire by 2005. In addition, DOD faces major succession planning challenges at various levels within the department. Also, since 1987, the industrial workforce, such as depot maintenance, has been reduced by about 56 percent, with many of the remaining employees nearing retirement, calling into question the longer-term viability of the workforce. DOD is one of the agencies that has begun to address human capital challenges through strategic human capital planning. For example, in April 2002, DOD published a departmentwide strategic plan for civilians. Although a positive step toward fostering a more strategic approach to human capital management, the plan is not fully aligned with the overall mission of the department or results-oriented. In addition, it was not integrated with military and contractor personnel planning. We strongly support the concept of modernizing federal human capital policies within DOD and the federal government at large. Providing reasonable flexibility to management in this critical area is appropriate, provided adequate safeguards are in place to prevent abuse. 
We believe that Congress should consider both governmentwide changes and changes for selected agencies, including DOD, to address the pressing human capital issues confronting the federal government. In this regard, many of the basic principles underlying DOD’s civilian human capital proposals have merit and deserve serious consideration. At the same time, many are not unique to DOD and deserve broader consideration. This testimony provides GAO’s preliminary observations on aspects of DOD’s proposal to make changes to its civilian personnel system and discusses the implications of such changes for governmentwide human capital reform. Past reports have contained GAO’s views on what remains to be done to bring about lasting solutions for DOD to strategically manage its human capital. DOD has not always concurred with our recommendations. www.gao.gov/cgi-bin/getrpt?GAO-03-493T. Agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to a particular agency (e.g., military personnel reforms for DOD). Several of the proposed DOD reforms meet this test. At the same time, we believe that Congress should consider incorporating additional safeguards in connection with several of DOD’s proposed reforms. In our view, it would be preferable to employ a governmentwide approach to address certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and the Office of Personnel Management (OPM), in particular. We believe that several of the reforms that DOD is proposing fall into this category (e.g., broad banding, pay for performance, re-employment and pension offset waivers). 
In these situations, it may be prudent and preferable for the Congress to provide such authorities on a governmentwide basis and in a manner that assures that appropriate performance management systems and safeguards are in place before the new authorities are implemented by the respective agency. However, in all cases, whether from a governmentwide authority or agency-specific legislation, in our view such additional authorities should be implemented (or operationalized) only when an agency has the institutional infrastructure in place to make effective use of the new authorities. Based on our experience, while the DOD leadership has the intent and the ability to implement the needed infrastructure, that infrastructure is not consistently in place across the vast majority of DOD at the present time. DOD is in the midst of a major transformation effort, including a number of initiatives to transform its forces and improve its business operations. DOD’s legislative initiative would provide for major changes in civilian and military human capital management, make major adjustments in the DOD acquisition process, affect DOD’s organizational structure, and change DOD’s reporting requirements to Congress, among other things. Many of the basic principles underlying DOD’s civilian human capital proposal have merit and deserve serious consideration. The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of our current rapidly changing and knowledge-based environment. DOD’s proposal recognizes that, as GAO has stated and the experiences of leading public sector organizations here and abroad have found, strategic human capital management must be the centerpiece of any serious government transformation effort. 
DOD’s proposed National Security Personnel System (NSPS) would provide for wide-ranging changes in DOD’s civilian personnel pay and performance management, collective bargaining, rightsizing, and a variety of other human capital areas. The NSPS would enable DOD to develop and implement a consistent DOD-wide civilian personnel system. More generally, from a conceptual standpoint, GAO strongly supports the need to expand broad banding and pay-for-performance systems in the federal government. However, moving too quickly or prematurely, at DOD or elsewhere, can significantly raise the risk of doing it wrong. This could also severely set back the legitimate need to move to a more performance- and results-based system for the federal government as a whole. Thus, while it is imperative that we take steps to better link employee pay and other personnel decisions to performance across the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether or not we are successful. One key need is to modernize performance management systems in executive agencies so that they are capable of supporting more performance-based pay and other personnel decisions. Unfortunately, based on GAO’s past work, most existing federal performance appraisal systems, including the vast majority of DOD’s systems, are not currently designed to support a meaningful performance-based pay system. This testimony provides GAO’s preliminary observations on aspects of DOD’s legislative proposal to make changes to its civilian personnel system and discusses the implications of such changes for governmentwide human capital reform. This testimony summarizes many of the issues discussed in detail before the Subcommittee on Civil Service and Agency Organization, Committee on Government Reform, House of Representatives on April 29, 2003. 
The critical questions to consider are: should DOD and/or other agencies be granted broad-based exemptions from existing law, and if so, on what basis? Do DOD and other agencies have the institutional infrastructure in place to make effective use of any new authorities? This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and, importantly, a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and credible implementation of a new system. www.gao.gov/cgi-bin/getrpt?GAO-03-741T. In GAO’s view, as an alternative to DOD’s proposed approach, Congress should consider providing governmentwide broad banding and pay-for-performance authorities that DOD and other federal agencies can use provided they can demonstrate that they have a performance management system in place that meets certain statutory standards, as certified by a qualified and independent party, such as OPM, within prescribed timeframes. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funding to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse. This approach would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding further human capital policy fragmentation. Between 1987 and 2002, the Department of Defense (DOD) downsized the civilian workforce in 27 key industrial facilities by about 56 percent. 
Many of the remaining 72,000 workers are nearing retirement. In recent years GAO has identified shortcomings in DOD’s strategic planning and was asked to determine (1) whether DOD has implemented our prior recommendation to develop and implement a depot maintenance strategic plan, (2) the extent to which the services have developed and implemented comprehensive strategic workforce plans, and (3) what challenges adversely affect DOD’s workforce planning. DOD has not implemented our October 2001 recommendation to develop and implement a DOD depot strategic plan that would delineate workloads to be accomplished in each of the services’ depots. The DOD depot system has been a key part of the department’s plan to support military systems in the past, but the increased use of the private sector to perform this work has decreased the role of these activities. While Title 10 of the U.S. Code requires DOD to retain core capability and also requires that at least 50 percent of depot maintenance funds be spent for public-sector performance, questions remain about the future role of DOD depots. Absent a DOD depot strategic plan, the services have, in varying degrees, laid out a framework for strategic depot planning, but this planning is not comprehensive. Questions also remain about the future of arsenals and ammunition plants. GAO reviewed workforce planning efforts for 22 maintenance depots, 3 arsenals, and 2 ammunition plants, which employed about 72,000 civilian workers in fiscal year 2002. GAO recommends that DOD complete revisions to core policy, promulgate a schedule for completing core computations, and complete depot strategic planning; develop a plan for arsenals and ammunition plants; develop strategic workforce plans; and coordinate the implementation of initiatives to address various workforce challenges. 
DOD concurred with 7 of our 9 recommendations but did not concur with 2 because it believes the proposed National Security Personnel System, which was submitted to Congress as part of the DOD transformation legislation, will take care of these problems. We believe it is premature to assume this system will (1) be approved by Congress as proposed and (2) resolve these issues. The services have not developed and implemented strategic workforce plans to position the civilian workforce in DOD industrial activities to meet future requirements. While workforce planning is done for each of the industrial activities, generally it is short-term rather than strategic. Further, workforce planning is lacking in other areas that OPM guidance and high-performing organizations identify as key to successful workforce planning. Service workforce planning efforts (1) usually do not assess needed competencies, (2) do not develop comprehensive retention plans, and (3) sometimes do not develop performance measures and evaluate workforce plans. Several challenges adversely affect DOD’s workforce planning for the viability of its civilian depot workforce. First, given the aging depot workforce and the retirement eligibility of over 40 percent of the workforce over the next 5 to 7 years, the services may have difficulty maintaining the depots’ viability. Second, the services are having difficulty implementing multiskilling—an industry and government best practice for improving the flexibility and productivity of the workforce—even though this technique could help depot planners do more with fewer employees. Finally, increased training funding and innovation in the training program will be essential for revitalizing the aging depot workforce. [Figure: Staffing Levels, Age, and Retirement Eligibility of Civilian Personnel in Industrial Facilities, including percent eligible to retire by 2009.] www.gao.gov/cgi-bin/getrpt?GAO-03-472. 
Highlights of GAO-03-717T, testimony before the Subcommittee on Civil Service and Agency Organization, Committee on Government Reform, House of Representatives DOD is in the midst of a major transformation effort, including a number of initiatives to transform its forces and improve its business operations. DOD’s legislative initiative would provide for major changes in civilian and military human capital management, make major adjustments in the DOD acquisition process, affect DOD’s organizational structure, and change DOD’s reporting requirements to Congress, among other things. Many of the basic principles underlying DOD’s civilian human capital proposals have merit and deserve serious consideration. The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of our current rapidly changing and knowledge-based environment. DOD’s proposal recognizes that, as GAO has stated and the experiences of leading public sector organizations here and abroad have found, strategic human capital management must be the centerpiece of any serious government transformation effort. DOD’s proposed National Security Personnel System (NSPS) would provide for wide-ranging changes in DOD’s civilian personnel pay and performance management, collective bargaining, rightsizing, and a variety of other human capital areas. The NSPS would enable DOD to develop and implement a consistent DOD-wide civilian personnel system. More generally, from a conceptual standpoint, GAO strongly supports the need to expand broad banding and pay-for-performance systems in the federal government. However, moving too quickly or prematurely, at DOD or elsewhere, can significantly raise the risk of doing it wrong. 
This could also severely set back the legitimate need to move to a more performance- and results-based system for the federal government as a whole. Thus, while it is imperative that we take steps to better link employee pay and other personnel decisions to performance across the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether or not we are successful. In our view, one key need is to modernize performance management systems in executive agencies so that they are capable of supporting more performance-based pay and other personnel decisions. Unfortunately, based on GAO’s past work, most existing federal performance appraisal systems, including the vast majority of DOD’s systems, are not currently designed to support a meaningful performance-based pay system. This testimony provides GAO’s preliminary observations on aspects of DOD’s legislative proposal to make changes to its civilian personnel system and poses critical questions that need to be considered. The critical questions to consider are: should DOD and/or other agencies be granted broad-based exemptions from existing law, and if so, on what basis? Do they have the institutional infrastructure in place to make effective use of the new authorities? This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and, importantly, a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and credible implementation of a new system. www.gao.gov/cgi-bin/getrpt?GAO-03-717T. 
In our view, Congress should consider providing governmentwide broad banding and pay-for-performance authorities that DOD and other federal agencies can use provided they can demonstrate that they have a performance management system in place that meets certain statutory standards, as certified by a qualified and independent party, such as OPM, within prescribed timeframes. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funding to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse. This approach would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding fragmentation within the executive branch in the critical human capital area. The Department of Defense’s (DOD) civilian employees play key roles in such areas as defense policy, intelligence, finance, acquisitions, and weapon systems maintenance. Although downsized 38 percent between fiscal years 1989 and 2002, this workforce has taken on greater roles as a result of DOD’s restructuring and transformation. Responding to congressional concerns about the quality and quantity of, and the strategic planning for, the civilian workforce, GAO determined the following for DOD, the military services, and selected defense agencies: (1) the extent of top-level leadership involvement in civilian strategic planning; (2) whether elements in civilian strategic plans are aligned to the overall mission, focused on results, and based on current and future civilian workforce data; and (3) whether civilian and military personnel strategic plans or sourcing initiatives were integrated. Generally, civilian personnel issues appear to be an emerging priority among top leaders in DOD and the defense components. 
Although DOD began downsizing its civilian workforce more than a decade ago, it did not take action to strategically address challenges affecting the civilian workforce until it issued its civilian human capital strategic plan in April 2002. Top-level leaders in the Air Force, the Marine Corps, the Defense Contract Management Agency, and the Defense Finance and Accounting Service have initiated planning efforts and are working in partnership with their civilian human capital professionals to develop and implement civilian strategic plans; such leadership, however, was increasing in the Army and not as evident in the Navy. Also, DOD has not provided guidance on how to integrate the components’ plans with the department-level plan. High-level leadership is critical to directing reforms and obtaining resources for successful implementation. The human capital strategic plans GAO reviewed for the most part lacked key elements found in fully developed plans. Most of the civilian human capital goals, objectives, and initiatives were not explicitly aligned with the overarching missions of the organizations. Consequently, DOD and the components cannot be sure that strategic goals are properly focused on mission achievement. Also, none of the plans contained results-oriented performance measures to assess the impact of their civilian human capital initiatives (i.e., programs, policies, and processes). Thus, DOD and the components cannot gauge the extent to which their human capital initiatives contribute to achieving their organizations’ mission. Finally, the plans did not contain data on the skills and competencies needed to successfully accomplish future missions; therefore, DOD and the components risk not being able to put the right people in the right place at the right time, which can result in diminished accomplishment of the overall defense mission. 
GAO recommends DOD improve the departmentwide plan to be mission-aligned and results-oriented; provide guidance to align component- and department-level human capital strategic plans; develop data on future civilian workforce needs; and set milestones for integrating military and civilian workforce plans, taking contractors into consideration. DOD comments were too late to include in this report but are included in GAO-03-690R. Moreover, the civilian strategic plans did not address how the civilian workforce will be integrated with their military counterparts or sourcing initiatives. DOD’s three human capital strategic plans—two military and one civilian—were prepared separately and were not integrated to form a seamless and comprehensive strategy, and they did not address how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. The components’ civilian plans acknowledge a need to integrate planning for civilian and military personnel—taking into consideration contractors—but have not yet done so. Without an integrated strategy, DOD may not effectively and efficiently allocate its scarce resources for optimal readiness. www.gao.gov/cgi-bin/getrpt?GAO-03-475. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense's (DOD) new human resources management system—the National Security Personnel System (NSPS)—will have far-reaching implications for civil service reform across the federal government. The 2004 National Defense Authorization Act gave DOD significant flexibilities for managing more than 700,000 defense civilian employees. Given DOD's massive size, NSPS represents a huge undertaking. DOD's initial process to design NSPS was problematic; however, DOD adjusted its approach to a more deliberative process that involved more stakeholders. NSPS could, if designed and implemented properly, serve as a model for governmentwide transformation in human capital management. However, if not properly designed and implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. On February 14, 2005, DOD and the Office of Personnel Management (OPM) released for public comment the proposed NSPS regulations. This testimony provides GAO's preliminary observations on selected provisions of the proposed regulations. Many of the principles underlying the proposed NSPS regulations are generally consistent with proven approaches to strategic human capital management. For instance, the proposed regulations provide for (1) elements of a flexible and contemporary human resources management system—such as pay bands and pay for performance; (2) DOD to rightsize its workforce when implementing reduction-in-force orders by giving greater priority to employee performance in its retention decisions; and (3) continuing collaboration with employee representatives. The 30-day public comment period on the proposed regulations ended March 16, 2005. DOD and OPM have notified the Congress that they are preparing to begin the meet and confer process with employee representatives who provided comments on the proposed regulations. 
The meet and confer process is critically important because there are many details of the proposed regulations that have not been defined, especially in the areas of pay and performance management, adverse actions and appeals, and labor-management relations. (It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD's new labor relations system authorized as part of NSPS.) GAO has several areas of concern: the proposed regulations do not (1) define the details of the implementation of the system, including such issues as adequate safeguards to help ensure fairness and guard against abuse; (2) require, as GAO believes they should, the use of core competencies to communicate to employees what is expected of them on the job; and (3) identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. Also, GAO believes that DOD (1) would benefit if it develops a comprehensive communications strategy that provides for ongoing, meaningful two-way communication that creates shared expectations among employees, employee representatives, and stakeholders and (2) should complete a plan for implementing NSPS to include an information technology plan and a training plan. Until such a plan is completed, the full extent of the resources needed to implement NSPS may not be well understood.
DOD is a massive and complex organization. To illustrate, it reported that its fiscal year 2006 operations involved approximately $1.4 trillion in assets and $2.0 trillion in liabilities, more than 2.9 million military and civilian personnel, and $581 billion in net cost of operations. Organizationally, DOD includes the Office of the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that are responsible for either specific geographic regions or specific functions. Figure 1 provides a simplified depiction of DOD’s organizational structure. In support of its military operations, DOD performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the systems environment that supports these business functions is overly complex and error prone, and is characterized by (1) little standardization across DOD, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. The Department of the Navy is a major component of DOD, consisting of two uniformed services: the Navy and the Marine Corps. The department’s mission is to maintain, train, and equip combat-ready naval forces capable of winning wars, deterring aggression, and maintaining freedom of the seas. To support this mission, the department performs a variety of interrelated and interdependent business functions, such as logistics and financial management, relying extensively on IT to carry out its operations. 
In fiscal year 2006, the department’s budget for IT was $4.3 billion, of which $3.9 billion (90.3 percent) was allocated to operations and maintenance of existing systems and $424 million (9.7 percent) was allocated to systems in development and modernization. The department was appropriated about $4.2 billion in fiscal year 2007 and requested about $4 billion in fiscal year 2008 to operate, maintain, and modernize business systems and associated infrastructures. The Chief Information Officer (CIO) for the department is accountable for all IT business system investments for both the Navy and Marine Corps. The CIO’s office is organized to align and integrate information management and IT programs across the two services and focus departmentwide efforts in support of warfighter priorities. The CIO is supported by Deputy CIOs for the Navy and Marine Corps and a Deputy CIO for Policy and Integration, who directs the operations of the CIO functional teams. The functional teams are led by team leaders who are subject matter experts in their areas of responsibility and are responsible for implementing the goals and objectives outlined in the department’s information management and IT strategic plan, which includes, among other things, ensuring that investments are effectively selected, resourced, and acquired. Figure 2 outlines the department CIO organizational structure. A corporate approach to IT investment management is characteristic of successful public and private organizations. Recognizing this, Congress enacted the Clinger-Cohen Act of 1996, which requires the Office of Management and Budget (OMB) to establish processes to analyze, track, and evaluate the risks and results of major capital investments in IT systems made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB has developed policy and issued guidance for the planning, budgeting, acquisition, and management of federal capital assets. 
We have also issued guidance in this area that defines institutional structures, such as Investment Review Boards; processes for developing information on investments (such as costs and benefits); and practices to inform management decisions (such as whether a given investment is aligned with an enterprise architecture). IT investment management is a process for linking IT investment decisions to an organization’s strategic objectives and business plans. Consistent with this, the federal approach to IT investment management focuses on selecting, controlling, and evaluating investments in a manner that minimizes risks while maximizing the return on investment. During the selection phase, the organization (1) identifies and analyzes each project’s risks and returns before committing significant funds to any project and (2) selects those IT projects that will best support its mission needs. During the control phase, the organization ensures that projects, as they develop and investment expenditures continue, meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems arise, steps are quickly taken to address the deficiencies. During the evaluation phase, expected results are compared with actual results after a project has been fully implemented. This comparison is done to (1) assess the project’s impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. Our ITIM framework consists of five progressive stages of maturity for any given agency relative to selecting, controlling, and evaluating its investment management capabilities. (See fig. 3 for the five ITIM stages of maturity.) This framework is grounded in our research of IT investment management practices of leading private and public sector organizations. 
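The select/control/evaluate cycle described above can be sketched in code. The following is a minimal illustrative model, not a GAO or OMB tool; all class names, ranking formulas, and thresholds (such as the 10 percent cost tolerance) are hypothetical assumptions chosen only to make the three phases concrete.

```python
# Hypothetical sketch of the IT investment management cycle: select projects
# before committing funds, control them against cost expectations, and
# evaluate actual versus expected results after implementation.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    expected_cost: float      # planned cost of the investment
    expected_return: float    # projected mission benefit, same units as cost
    risk: float               # assumed scale: 0.0 (low) to 1.0 (high)
    actual_cost: float = 0.0
    actual_return: float = 0.0

def select(candidates, budget):
    """Selection phase: analyze each project's risks and returns, then fund
    the best risk-adjusted candidates the budget allows."""
    ranked = sorted(
        candidates,
        key=lambda p: (p.expected_return / p.expected_cost) * (1 - p.risk),
        reverse=True)
    funded, remaining = [], budget
    for p in ranked:
        if p.expected_cost <= remaining:
            funded.append(p)
            remaining -= p.expected_cost
    return funded

def control(project, cost_tolerance=0.10):
    """Control phase: flag a project whose spending exceeds expectations
    so deficiencies can be addressed quickly."""
    return project.actual_cost > project.expected_cost * (1 + cost_tolerance)

def evaluate(project):
    """Evaluation phase: compare expected with actual results after the
    project has been fully implemented."""
    return project.actual_return - project.expected_return
```

As a usage example, with a budget of 50, a project costing 40 with a risk-adjusted return of 2.0 would be funded ahead of one costing 30 with a risk-adjusted return of 1.0, leaving too little budget for the second.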
The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. We have used the framework in several of our evaluations, and a number of agencies have adopted it. ITIM’s five maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds on the lower stages; the successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. It is not unusual for an organization to be performing key practices from more than one maturity stage at the same time. However, our research has shown that agency efforts to improve investment management capabilities should focus on implementing all lower stage practices before addressing the higher stage practices. In the ITIM framework, Stage 2 critical processes lay the foundation for sound IT investment management by helping the agency to attain successful, predictable, and repeatable investment management processes at the project level. Specifically, Stage 2 encompasses building a sound investment management foundation by establishing basic capabilities for selecting new IT projects. 
This stage also involves developing the capability to control projects so that they finish predictably within established cost and schedule expectations and developing the capability to identify potential exposures to risk and put in place strategies to mitigate that risk. Further, it involves evaluating completed projects to ensure they meet business needs and collecting lessons learned to improve the IT investment management process. The basic management processes established in Stage 2 lay the foundation for more mature management capabilities in Stage 3, which represents a major step forward in maturity, in which the agency moves from project-centric processes to a portfolio approach, evaluating potential investments by how well they support the agency’s missions, strategies, and goals. Stage 3 requires that an organization continually assess both proposed and ongoing projects as parts of a complete investment portfolio—an integrated and competing set of investment options. It focuses on establishing a consistent, well-defined perspective on the IT investment portfolio and maintaining mature, integrated selection (and reselection), control, and post-implementation evaluation processes. This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than focusing exclusively on the balance between the costs and benefits of individual investments. Organizations that have implemented Stages 2 and 3 practices have capabilities in place that assist in establishing selection, control, and evaluation structures, policies, procedures, and practices that are required by the investment management provisions of the Clinger-Cohen Act.
Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and the investment processes in order to better achieve strategic outcomes. At Stage 4, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough information technologies that will enable it to change and improve its business performance. DOD’s major system investments (i.e., weapons and business systems) are governed by three management systems that focus on defining needs, budgeting for, and acquiring investments to support the mission—the Joint Capabilities Integration and Development System (JCIDS); the Planning, Programming, Budgeting, and Execution (PPBE) system; and the Defense Acquisition System (DAS). In addition, DOD’s business systems are subject to a fourth management system, which, for purposes of this report, we refer to as the Business Investment Management System. For each of these systems, DOD relies on its components to execute the underlying policies and procedures. According to DOD, the four management systems, collectively, are the means by which DOD—and its components—selects, controls, and evaluates its business systems investments. JCIDS is a needs-driven, capabilities-based approach to identify mission needs and meet future joint forces challenges. It is intended to identify future capabilities for DOD; address capability gaps and mission needs recognized by the Joint Chiefs of Staff or derived from strategic guidance, such as the National Security Strategy Report or Quadrennial Defense Review; and identify alternative solutions by considering a range of doctrine, organization, training, materiel, leadership and education, personnel, and facilities solutions. 
According to DOD, the Joint Chiefs of Staff—through the Joint Requirements Oversight Council—has primary responsibility for defining and implementing JCIDS. All JCIDS documents are submitted to the Joint Chiefs of Staff, which determines whether the proposed system has joint implications or is component-unique. If a system is designated as having joint interest, the Joint Requirements Oversight Council is responsible for validating and approving the documents. If it is not, the sponsoring component is responsible for validation and approval. PPBE is a calendar-driven approach that is composed of four phases that occur over a moving 2-year cycle. The four phases—planning, programming, budgeting, and execution—define how budgets for each component and DOD as a whole are created, vetted, and executed. As we recently reported, the components start programming and budgeting for addressing a JCIDS-identified capability gap or mission need several years before actual product development begins and before the Office of the Secretary of Defense formally reviews the components’ programming and budgeting proposals (i.e., Program Objective Memorandums). Once reviewed and approved, the financial details in the Program Objective Memorandums become part of the President’s budget request to Congress. During budget execution, components may submit program change proposals or budget change proposals, or both (e.g., to address program cost increases or schedule delays). According to DOD, the Under Secretary of Defense (Policy), the Director for Program Analysis and Evaluation, and the Under Secretary of Defense (Comptroller) have primary responsibility for defining and implementing the PPBE system. DAS is a framework-based approach that is intended to translate mission needs and requirements into stable, affordable, and well-managed acquisition programs, and it consists of five key program life-cycle phases.
These five phases are as follows:

Concept Refinement: Intended to refine the initial JCIDS-validated system solution (concept) and create a strategy for acquiring the investment solution. A decision is made at the end of this phase (Milestone A decision) regarding whether to move to the next phase (Technology Development).

Technology Development: Intended to determine the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of various technologies while simultaneously refining user requirements. Once the technology has been demonstrated in a relevant environment, a decision is made (Milestone B decision) regarding whether to move to the next phase (System Development and Demonstration).

System Development and Demonstration: Intended to develop a system or a system increment and demonstrate through developer testing that the system or system increment can function in its target environment. A decision is made at the end of this phase (Milestone C decision) regarding whether to move to the next phase (Production and Deployment).

Production and Deployment: Intended to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and ensures that the system is implemented at all applicable locations.

Operations and Support: Intended to operationally sustain the system in the most effective manner over its life cycle.

A key principle of DAS is that investments are assigned a category, where programs of increasing dollar value and management interest are subject to more stringent oversight. For example, Major Defense Acquisition Programs and Major Automated Information Systems are large, expensive programs subject to the most extensive statutory and regulatory reporting requirements and, unless delegated, are reviewed by acquisition boards at the DOD level.
Smaller and less risky acquisitions are generally reviewed at the component executive or lower levels. Another key principle is that DAS requires acquisition management under the direction of a Milestone Decision Authority. The Milestone Decision Authority—with support from the Program Manager and advisory boards, such as the Defense Acquisition Board and the IT Acquisition Board—determines the project’s baseline cost, schedule, and performance commitments. The Under Secretary of Defense for Acquisition, Technology, and Logistics has primary responsibility for defining and implementing DAS. DOD relies on its components to execute these investment management policies and procedures. To implement DOD’s JCIDS process, the Department of the Navy has developed service-level processes—the Naval Capabilities Development Process and the Marine Corps Expeditionary Force Development System—to support the requirements generation process of JCIDS. To implement the PPBE process, department officials stated that they use their budget guidance manual. Finally, to implement the DAS process, the department has developed guidance that outlines a systematic acquisition framework that mirrors the framework defined by DOD and includes the same three event-based milestones and associated five program life-cycle phases. The Business Investment Management System is a calendar-driven approach that is described in terms of governance entities, tiered accountability, and certification reviews and approvals. This system was initiated in 2005, when DOD reassigned responsibility for providing executive leadership for the direction, oversight, and execution of its business systems modernization efforts to several entities. These entities and their responsibilities include the following:

The Defense Business Systems Management Committee serves as the highest-ranking governance body for business systems modernization activities.
The Principal Staff Assistants serve as the certification authorities for business system modernizations in their respective core business missions.

The Investment Review Boards are chartered by the Principal Staff Assistants and are the review and decision-making bodies for business system investments in their respective areas of responsibility. The boards are also responsible for recommending certification for all business system investments costing more than $1 million.

The component precertification authority is accountable for the component’s business system investments and acts as the component’s principal point of contact for communication with the Investment Review Boards. The Department of the Navy has designated its CIO to be the Precertification Authority.

The Business Transformation Agency is responsible for leading and coordinating business transformation efforts across DOD. The agency is organized into seven directorates, one of which is the Defense Business Systems Acquisition Executive—the component acquisition executive for DOD-wide business systems and initiatives. This directorate is responsible for developing, coordinating, and integrating enterprise-level projects, programs, systems, and initiatives—including managing resources such as fiscal, personnel, and contracts for assigned systems and programs.

Figure 4 provides a simplified illustration of the relationships among these entities. According to DOD, in 2005 it also adopted a tiered accountability approach to business transformation.
Under this approach, responsibility and accountability for business system investment management are allocated among DOD (i.e., Office of the Secretary of Defense) and the components, based on the amount of development/modernization funding involved and the investment’s “tier.” DOD is responsible for ensuring that all business systems with a development/modernization investment in excess of $1 million are reviewed by the Investment Review Boards for compliance with the business enterprise architecture, certified by the Principal Staff Assistants, and approved by the Defense Business Systems Management Committee. Components are responsible for certifying development/modernization investments with total costs of $1 million or less. All DOD development and modernization efforts are assigned a tier on the basis of the acquisition category or the size of the financial investment, or both. According to DOD, a system is given a tier designation when it passes through the certification process. Table 1 describes the investment tiers and identifies the associated reviewing and approving entities for DOD and the Department of the Navy. DOD’s Business Investment Management System includes two types of reviews for business systems: certification and annual reviews. Certification reviews apply to new modernization projects with total costs over $1 million. These reviews focus on program alignment with the business enterprise architecture and must be completed before components obligate funds for programs. The annual reviews apply to all business programs and are intended to determine whether the system development effort is meeting its milestones and addressing its Investment Review Board certification conditions.

Certification reviews and approvals: Tier 1 through 3 business system investments in development and modernization are certified at two levels—components precertify and DOD certifies and approves these system investments.
At the component level, program managers prepare, enter, maintain, and update information about their investments in their data repository, such as regulatory compliance reporting, an architectural profile, and requirements for investment certification and annual reviews. The component precertification authority validates that the system information is complete and accessible on the repository, reviews system compliance with the business enterprise architecture and enterprise transition plan, and verifies the economic viability analysis. This information is then transferred to DOD’s IT Portfolio Repository. The precertification authority asserts the status and validity of the investment information by submitting a component precertification letter to the appropriate Investment Review Board for its review.

Annual reviews: Tier 1 through 4 business system investments are annually reviewed at the component and DOD levels. At the component level, program managers annually review and update information on all tiers of system investments that are identified in their data repository. For Tier 1 through 3 systems that are in development or being modernized, information is updated on cost, milestones, and risk variances and actions or issues related to certification conditions. The precertification authority then verifies and submits the information for these business system investments for the DOD Investment Review Board’s review in an annual review assertion letter. The letter addresses system compliance with the DOD business enterprise architecture and the enterprise transition plan and includes investment cost, schedule, and performance information. At the DOD level, the Investment Review Boards annually review investments for certified Tier 1 through 3 business systems that are in development or modernization.
These reviews focus on program compliance with the business enterprise architecture, program cost and performance milestones, and progress in meeting certification conditions. The Investment Review Boards can revoke an investment’s certification when the system has significantly failed to achieve performance commitments (i.e., capabilities and costs). When this occurs, the component must address the Investment Review Board’s concerns and resubmit the investment for certification. As stated earlier, DOD relies on its components to execute investment management policies and procedures. The Department of the Navy has developed a precertification process for its business systems, which is intended to ensure that new or existing systems that are being modernized undergo proper scrutiny prior to being precertified by the department’s Precertification Authority. The precertification process is initiated by the Program Manager, who is responsible for completing all data elements required for a specific tier, including entering data and attachments into the department’s repository and entering funding information into the DOD budgeting database. After the precertification package has been completed by the Program Manager, it is to be reviewed by both Functional Area Managers and the Deputy CIOs for the Navy and Marine Corps. The Functional Area Managers’ primary responsibilities are to functionally review data for each defense business system for which they are the lead or stakeholder and ensure that IT and business processes are aligned. The primary responsibilities of the Deputy CIOs are to technically review each defense business system within their service and verify that the system’s architecture complies with the department’s enterprise architecture and the DOD business enterprise architecture. 
The final task of the Deputy CIO and the Functional Area Managers is to provide a recommendation to the department Precertification Authority as to whether or not the business system should be certified. The reviews of the Deputy CIOs and Functional Area Managers may occur concurrently. Following the Functional Area Manager and Deputy CIO reviews, a business system is to be sent to the department’s CIO for final approval. The CIO is responsible for reviewing Tier 1 through 4 submissions, precertifying Tier 1 through 3 defense business system investments, and certifying Tier 4 investments. The CIO is also responsible for monitoring the activities of the Functional Area Managers and the Deputy CIOs, and for ensuring that functional area manager coordination is effective and sufficient for identifying redundant investments. Once a Tier 1 through 3 investment has been precertified, the CIO is to complete, among other things, a precertification letter and send the certification package to DOD for review by the applicable DOD Investment Review Board and Defense Business Systems Management Committee. Table 2 lists decision-making personnel involved in the department’s investment management process and provides a description of their key responsibilities. Figure 5 shows a simplified overview of the process flow of precertification reviews and approvals for the Department of the Navy. Although DOD relies on its components to execute investment management policies and procedures, the Department of the Navy has not yet established the management structures needed to effectively manage its business system investments or fully developed many of the related policies and procedures outlined in our ITIM framework. Relative to its business system investments, the department has implemented two of the nine key practices that call for project-level management structures, policies, and procedures and none of the five key practices that call for portfolio-level policies and procedures. 
Department officials stated that they are currently working on guidance to address these weaknesses. For example, the officials stated that they are drafting new portfolio-level policies and procedures and are developing guidance that is intended to assign IT management roles and responsibilities to new or existing boards. The new policies and procedures and guidance are expected to be approved by March 2008. According to our ITIM framework, adequately documenting both the policies and the associated procedures that govern how an organization manages IT projects and investment portfolios is important because doing so provides the basis for having rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization. Until the department establishes the necessary management structure and fully defines policies and procedures for both individual projects and the portfolios of projects, it risks not being able to select and control these business system investments in a consistent and complete manner, which in turn reduces the chances that these investments will meet mission needs in the most effective manner. At ITIM Stage 2, an organization has attained a repeatable and successful IT project-level investment control process and basic selection processes. Through these processes, the organization can identify project expectation gaps early and take the appropriate steps to address them. ITIM Stage 2 critical processes include (1) defining investment board operations, (2) identifying the business needs for each investment, (3) developing a basic process for selecting new proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 3 describes the purpose of each of these Stage 2 critical processes. 
Within these five critical processes are nine key practices that call for policies and procedures associated with effective project-level management. The department has fully defined the policies and procedures for two of these nine key practices. Specifically, it has policies and procedures for capturing investment information by submitting, updating, and maintaining investment information in its repository and loading information to the DOD repository. Further, the department has assigned its CIO the responsibility of ensuring that information contained in its repository is accurate and complete. However, the management structures and policies and procedures associated with the remaining seven project-level management practices are missing critical elements needed to effectively carry out essential investment management activities. For example:

The department has not yet established an Investment Review Board, composed of senior executives from its IT and business units, to define and implement the organization’s IT investment governance process. Without an Investment Review Board, the department’s ability to ensure that investment decisions are consistent and reflect the needs of the organization is limited.

The department does not have a documented IT investment management process that completely explains the agency’s selection, control, and evaluation of IT investments. Without such an investment management process, the department may not make consistent decisions regarding its IT investments.

The department’s policies and procedures do not explain how ongoing IT investments are periodically reviewed and verified relative to meeting the business needs of its organization and users. Without documenting how officials are to ensure that IT business system investments maintain alignment with the organization’s strategic plans and business goals and objectives, the department cannot ensure a consistent selection of investments that best meet its needs and priorities.
The department’s procedures for selecting new investments do not specify how the full range of cost, schedule, and benefit data are used by department officials (CIO, Deputy CIOs, and Functional Area Managers) in making selection decisions. Without documenting how these officials are to consider factors such as cost, schedule, and benefits when making selection decisions, the department cannot ensure that it can consistently and objectively select system investments to best meet its needs and priorities.

Policies and procedures do not specify how reselection decisions (i.e., annual review decisions) consider investments that are in operations and maintenance. Without such policies and procedures, the department’s ability to make informed and consistent reselection and termination decisions is limited.

Policies and procedures do not specify how funding decisions are integrated into the process of selecting an investment. Without considering its budget constraints and opportunities, the department risks making investment decisions that do not effectively consider the relative merits of various projects and systems when funding limitations exist.

Policies and procedures for providing oversight into the department’s investment management activities do not specify the processes for decision making during project oversight and do not describe how corrective actions should be taken when the project deviates or varies from the project management plan. Without such policies and procedures, the department risks investing in systems that are duplicative, stovepiped, nonintegrated, and unnecessarily costly to manage, maintain, and operate.

Table 4 summarizes our findings relative to the department’s execution of the nine key practices for policies and procedures needed to manage IT investments at the project level.
According to department officials, they are aware of the absence of documented policies and procedures in certain areas of project-level management, and plan to issue new policies and procedures addressing these areas by March 2008. However, until the department has documented IT investment management policies and procedures that include fully defined Stage 2 activities, specify the linkages between the various related processes, and describe how investments are to be governed in the operations and maintenance phase, it risks not being able to carry out investment management activities in a consistent and disciplined manner. Moreover, the department risks selecting investments that will not effectively meet its mission needs. At Stage 3, an organization has defined the critical processes for managing its investment as a portfolio or set of portfolios. Portfolio management is a conscious, continuous, and proactive approach to allocating limited resources among competing initiatives in light of the investments’ relative benefits. Taking an agencywide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. Managing IT investments as portfolios also allows an organization to determine its priorities and make decisions about which projects to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. Although investments may initially be organized into subordinate portfolios—based on, for example, business lines or life-cycle stages—and managed by subordinate Investment Review Boards, they should ultimately be aggregated into enterprise-level portfolios. 
According to ITIM, Stage 3 involves four critical processes: (1) defining the portfolio criteria; (2) creating the portfolio; (3) evaluating (i.e., overseeing) the portfolio; and (4) conducting post-implementation reviews. Within these critical processes are five key practices that call for policies and procedures to ensure effective portfolio management. Table 5 summarizes the purpose of each of these critical processes. The department has not fully defined the policies and procedures needed to effectively execute the five portfolio management practices. Specifically, it does not have policies and procedures for defining the portfolio criteria or assigning responsibility for managing the portfolio criteria. In addition, the department does not have policies and procedures for creating and evaluating the portfolio. Further, it does not have component-level policies and procedures for conducting post-implementation reviews. Table 6 summarizes the rating for each critical process required to manage IT investments as a portfolio and summarizes the evidence that supports these ratings. Department officials agreed that portfolio management is primarily a component responsibility and are aware that they are required to develop and implement a portfolio management capability. Currently, they are developing policy and associated procedures that are intended to address these areas and plan to complete them by March 2008. In the absence of policies and procedures for managing business system investment portfolios, the department is at risk of not consistently selecting the mix of investments that best supports the mission needs and not being able to ensure that investment-related lessons learned are shared and applied departmentwide.
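To make the contrast with project-by-project selection concrete, the Stage 3 portfolio perspective can be sketched as ranking competing investments against shared criteria under a funding constraint. The weights, scores, dollar figures, and greedy selection below are invented for illustration; they are not DOD's or the department's actual portfolio criteria.

```python
# Minimal sketch of portfolio-level selection: investments compete as a set
# under a funding constraint rather than being judged one at a time.
# All weights, scores, and costs are hypothetical illustrations.

def portfolio_score(mission_alignment, expected_benefit, risk,
                    w_align=0.4, w_benefit=0.4, w_risk=0.2):
    """Composite score against shared portfolio criteria; higher is better."""
    return w_align * mission_alignment + w_benefit * expected_benefit - w_risk * risk

def build_portfolio(candidates, budget):
    """Greedy sketch: fund the highest-scoring investments the budget allows."""
    ranked = sorted(candidates, key=lambda c: portfolio_score(*c["factors"]), reverse=True)
    selected, remaining = [], budget
    for c in ranked:
        if c["cost"] <= remaining:
            selected.append(c["name"])
            remaining -= c["cost"]
    return selected

candidates = [  # cost in $ millions; factors = (alignment, benefit, risk)
    {"name": "logistics system", "cost": 4.0, "factors": (0.9, 0.8, 0.2)},
    {"name": "HR portal",        "cost": 2.0, "factors": (0.6, 0.5, 0.1)},
    {"name": "data warehouse",   "cost": 5.0, "factors": (0.7, 0.9, 0.6)},
]
print(build_portfolio(candidates, budget=6.0))  # prints ['logistics system', 'HR portal']
```

A real portfolio process would also revisit these rankings during annual reviews, but even this toy version shows why documented criteria matter: without agreed weights and a stated budget rule, two reviewers could assemble different portfolios from the same candidates.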
Given the importance of business systems modernization to the Department of the Navy’s mission, performance, and outcomes, it is vital for the department to adopt and employ an effective institutional approach to managing business system investments. However, although department officials acknowledged shortcomings and the importance of addressing them, the department has not yet established the management structures needed to effectively manage its business system investments. The department is also missing other important elements, such as specific policies and procedures that are needed for project-level and portfolio-level investment management. In the absence of these essential elements, the department lacks an institutional capability to ensure that it is investing in business systems that best support its strategic needs and that ongoing projects meet cost, schedule, and performance expectations. Until the department develops this capability, it will be impaired in its ability to optimize business mission area performance and accountability. To strengthen the Department of the Navy’s business system investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that well-defined and disciplined business system investment management policies and procedures are developed and issued. At a minimum, this should include instituting project- and portfolio-level policies and procedures that address seven key practices:

Establishing an enterprisewide IT Investment Review Board composed of senior executives from IT and business units, including assigning the investment board responsibility, authority, and accountability for programs throughout the investment life cycle.

Documenting an investment management process that includes how it is coordinated with JCIDS, PPBE, DAS, and the precertification process.
Ensuring that systems in operations and maintenance are aligned with ongoing and future business needs.

Selecting new investments, including specifying how cost, schedule, and benefit data are to be used in making decisions and specifying the criteria and steps for prioritizing and selecting these investments.

Documenting an annual review process that includes the reselection of ongoing IT investments.

Integrating funding with the process of selecting an investment, including specifying how department officials are using funding information in carrying out decisions.

Overseeing IT projects and systems, including specifying the processes for the investment boards’ operations and decision making during project oversight.

These well-defined and disciplined business system investment management policies and procedures should also include portfolio-level management policies and procedures that address the following five areas:

Creating and modifying IT portfolio selection criteria for business system investments.

Defining the roles and responsibilities for managing the development and modification of the IT portfolio selection criteria.

Analyzing, selecting, and maintaining business system investment portfolios.

Reviewing, evaluating, and improving the performance of its portfolios by using project indicators, such as cost, schedule, and risk.

Conducting post-implementation reviews for all investment tiers and specifying how conclusions, lessons learned, and recommended management actions are to be shared with executives and others.

In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, DOD partially concurred with our recommendations. It stated that the Department of the Navy has drafted Instruction 8115.02, Information Technology Portfolio Management Implementation, which, when finalized, will address our recommendations.
According to DOD, the instruction is scheduled to be signed in March 2008. DOD added that it would provide assistance, where appropriate, to the Navy to ensure alignment with enterprise-level portfolio management policies and procedures as they are matured. However, DOD also stated that, based on this pending document from the Department of the Navy, it is the department’s position that a Secretary of Defense directive on the matter will not be required. Our recommendations did not state that DOD should develop a directive; rather, we emphasized the need for the Department of the Navy to develop policies and procedures.

We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Secretary of the Navy; the Department of the Navy Chief Information Officer; the Commandant of the Marine Corps; and the Under Secretary of Defense for Acquisition, Technology, and Logistics. Copies of this report will be made available to other interested parties on request. This report will also be made available at no charge on our Web site at http://www.gao.gov. Should you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objective was to determine whether the investment management approach of the Department of the Navy (a major Department of Defense (DOD) component) is consistent with leading investment management best practices.
Our analysis was based on the best practices contained in GAO’s Information Technology Investment Management (ITIM) framework and the framework’s associated evaluation methodology, and focused on the department’s establishment of policies and procedures for business system investments needed to assist organizations in complying with the Clinger-Cohen Act of 1996 (Stages 2 and 3). To address our objective, we asked the department to complete a self-assessment of its investment management process and provide the supporting documentation. We then reviewed the results of the department’s self-assessment of Stages 2 and 3 organizational commitment practices—those practices related to structures, policies, and procedures—and compared them against our ITIM framework. We focused on Stages 2 and 3 because these stages represent the processes needed to meet the standards of the Clinger-Cohen Act, and they establish the foundation for effective acquisition management. We also validated and updated the results of the self-assessment through document reviews and interviews with officials, such as the Director of the Investment Management Team and other staff in the department Chief Information Officer’s office. In doing so, we reviewed written policies, procedures, and guidance and other documentation providing evidence of executed practices, including the Department of the Navy’s Business Information Technology System Precertification Workflow Guidance, Secretary of Navy Instruction 5000.2C, and the Budget Guidance Manual. We compared the evidence collected from our document reviews and interviews with the key practices in ITIM. We rated the key practices as “executed” on the basis of whether the agency demonstrated (by providing evidence of performance) that it had met all of the criteria of the key practice.
A key practice was rated as “not executed” when we did not find sufficient evidence of all elements of a practice being fully performed or when we determined that there were significant weaknesses in the department’s execution of the key practice. In addition, we provided the agency the opportunity to produce evidence for the key practices rated as “not executed.” We conducted our work at Department of the Navy offices in Arlington, Virginia, from February 2007 through September 2007 in accordance with generally accepted government auditing standards. In addition to the contact person named above, key contributors to this report were Tonia Johnson, Assistant Director; Jacqueline Bauer; Elena Epps; Nancy Glover; and Jeanne Sung.
In 1995, GAO first designated the Department of Defense's (DOD) business systems modernization program as "high-risk," and continues to do so today. In 2004, Congress passed legislation reflecting prior GAO recommendations that DOD adopt a corporate approach to information technology (IT) business systems investment management, including tiered accountability for business systems at the department and component levels. To support GAO's legislative mandate to review DOD's efforts, GAO assessed whether the investment management approach of one of DOD's components--the Department of the Navy--is consistent with leading investment management best practices. In doing so, GAO applied its IT Investment Management (ITIM) framework and associated methodology, focusing on the stages related to the investment management provisions of the Clinger-Cohen Act of 1996. The Department of the Navy has yet to establish the management structures needed to effectively manage its business systems investments or to fully develop many of the related policies and procedures outlined in GAO's ITIM framework. The department has implemented two of the nine key practices that call for project-level management structures, policies, and procedures, and none of the five practices that call for portfolio-level policies and procedures. Specifically, it has developed procedures for identifying and collecting information about its business systems to support investment selection and control, and assigned responsibility for ensuring that the information collected during project identification meets the needs of the investment management process. However, the department has not established the management structures needed to support effective investment oversight. 
It also has not fully documented business system investment policies and procedures for directing Investment Review Board operations, selecting new investments, reselecting ongoing investments, integrating the investment funding and investment selection processes, and developing and maintaining complete business system investment portfolio(s). Department officials stated that they are aware of the lack of an Investment Review Board and the absence of documented policies and procedures in certain areas of project- and portfolio-level management, and are currently working on new guidance to address these areas. According to these officials, the new policies and procedures are expected to be approved by March 2008. However, until the department assigns responsibility for overseeing project-level management and portfolio management to a departmentwide review board and fully defines policies and procedures for both individual projects and portfolios of projects, it risks selecting and controlling these business system investments in a way that is inconsistent, incomplete, and ad hoc, which in turn reduces the chances that these investments will meet mission needs in the most effective manner.
In the 2001 tax filing season, IRS received more than 70.7 million calls on its three toll-free assistance numbers and answered over 50.5 million calls—assistors answered 22.7 million calls and automated systems answered 27.8 million calls. As in previous years, IRS had three toll-free telephone numbers that taxpayers could call with questions about tax law, taxpayer accounts, and refunds. IRS has about 10,000 assistors, located at 26 call sites, who help taxpayers with a variety of questions ranging from the applicability of tax laws to the status of their accounts. IRS’ call sites are supervised by 10 field directors, each of whom oversees two to three sites.

IRS has four measures to evaluate the extent to which taxpayers are provided with accessible telephone assistance, and four to evaluate the extent to which taxpayers are provided with accurate telephone assistance. (For more information on IRS’ telephone assistance access and accuracy measures see app. I.) IRS’ measures of access are based on actual counts of calls using data collected by IRS’ telephone system. IRS’ measures of the accuracy of assistance, the quality and correct response measures, are estimates based on representative samples of nationwide calls that quality assurance staff monitor and score for accuracy. IRS began collecting data on correct responses in June 2000, so there are no data for the 2000 tax filing season to compare with 2001.

Over the years, IRS has studied its telephone performance and made changes designed to improve it. For example, in 1999, IRS extended its hours of service to 24 hours a day, 7 days a week. By providing around-the-clock service, IRS expected to distribute demand more evenly and thus improve taxpayers’ access to service. With the increased use of call-routing technology in 1999, IRS began to manage its telephone operations centrally at the Joint Operations Center in Atlanta.
Routing calls to the first available assistor who had the necessary skills to answer the taxpayer’s question was expected to improve taxpayers’ access to service and lessen the disparity in the level of service across sites. However, the level of service declined in 1999, and the quality of service was mixed in the 2000 tax filing season and below IRS’ long-term goal of providing world-class customer service. According to IRS, some of the key factors that affected performance in the 2000 tax filing season were the demand for assistance, staffing levels, assistor productivity, assistor skills, and IRS’ guidance for assistors. As we discussed in a previous report, IRS’ analyses did not cover all key management decisions or other key factors that could have affected telephone performance. Additionally, determining how each factor affected performance was made even more difficult because many of the factors are interrelated; changes in one can affect another. The IRS Commissioner has recognized the complex interrelationships within the telephone-operating environment and has stated that years of sustained effort will be required for IRS to achieve its goal of providing world-class telephone service. To address our objectives, we interviewed IRS officials involved in managing toll-free telephone operations and obtained and analyzed supporting documentation as follows: To assess IRS’ performance in responding to calls on the three main telephone assistance toll-free numbers, we compared the 2001 tax filing season performance for accessibility and accuracy measures with IRS’ performance in the 2000 tax filing season and its 2001 performance targets. To assess IRS’ efforts to determine the factors that affected performance in the 2001 tax filing season, including actions it took to improve performance, we used as criteria GPRA and IRS’ own guidance on analyzing performance data. 
We interviewed IRS officials in the Wage and Investment and Small Business and Self-Employed Divisions, and the Joint Operations Center. We also analyzed various documents, including reports on IRS’ efforts to determine the factors that affect telephone performance and the results of actions to improve performance. In addition, we used a questionnaire to obtain information from the 10 field directors about their efforts to identify the factors that affected performance and assess the effectiveness of actions taken to improve performance. While we did not independently assess the accuracy of IRS’ performance data, we verified that IRS had procedures in place intended to ensure data reliability. We did our work from February 2001 through October 2001 in accordance with generally accepted government auditing standards.

IRS made limited progress in the 2001 tax filing season toward its long-term goal of providing world-class telephone service. When compared with the 2000 tax filing season, access and accuracy performance improved by 2 percentage points or less in three of the six comparable measures. The quality of responses to account inquiries increased 10 percentage points, and there was a 4 percentage point decline in callers who hung up while waiting to speak with an assistor; however, taxpayers waited 15 percent longer to speak with an assistor. When compared with 2001 performance targets, IRS did not meet any of its accessibility goals. These targets were intended to move IRS toward its goal of providing world-class service. Although it met or exceeded the 2001 quality targets, IRS did not meet the current year’s higher targets for providing taxpayers with correct responses. Table 1 compares IRS’ actual 2000 performance levels with its 2001 performance levels and targets. (See app. I for more information on the measures.) According to IRS officials, the access measures are similar to those commonly used by world-class customer service organizations.
They are designed to focus efforts on enhancing taxpayers’ experience in getting access to assistance. For example, the “assistor level of service” measure is intended to show IRS’ effectiveness in providing callers with access to an assistor. The “assistor response level” is to measure the percentage of taxpayers that waited 30 seconds or less to speak with an assistor. The “abandon rate” measure is to show the percentage of taxpayers who hang up while waiting to speak with an assistor, while the “average speed of answer” measure is to show the average number of seconds taxpayers wait to speak to an assistor. IRS’ accuracy measures are designed to gauge the taxpayers’ experience in getting accurate assistance. The “quality” measures are to show, for a representative sample of calls, the percentage for which assistors followed all procedures, such as properly identifying themselves at the beginning of calls, doing appropriate research on taxpayers’ accounts, and providing accurate information to taxpayers. The new “correct response rate” measures are intended to show the percentage of calls for which IRS assistors provided correct responses to inquiries without taking into account procedural errors that would not affect the accuracy of the information given the taxpayer. IRS began collecting these data in June 2000, so there were no data for the 2000 filing season to compare with 2001. IRS also measures its performance in answering calls through the use of automation. However, we did not consider this measure—“automated service completion rate”—in assessing IRS’ performance because it assumes that callers who get through to TeleTax are served. The TeleTax system does not have data on how many callers hung up before completing an automated service.
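To make these definitions concrete, the measures amount to simple arithmetic over call records. The sketch below uses hypothetical data, not IRS figures, and makes simplifying assumptions about each rate’s denominator; the sampled-accuracy estimate at the end likewise uses an invented sample size:

```python
import math

# Hypothetical call records: (wait_seconds, outcome) -- not IRS data.
calls = [
    (12, "answered"), (45, "answered"), (80, "abandoned"),
    (25, "answered"), (200, "abandoned"), (8, "answered"),
]

answered_waits = [w for w, o in calls if o == "answered"]

# Access measures (denominators are simplifying assumptions):
level_of_service = len(answered_waits) / len(calls)       # reached an assistor
response_level = sum(w <= 30 for w in answered_waits) / len(answered_waits)
abandon_rate = 1 - level_of_service          # every call here answered or abandoned
avg_speed_of_answer = sum(answered_waits) / len(answered_waits)  # seconds

# Accuracy measures are estimated from a monitored sample, so they carry
# sampling error; e.g., if reviewers score 1,800 of 2,000 sampled calls
# correct, the point estimate has roughly a +/- 1.3-point margin at 95
# percent confidence.
p = 1800 / 2000
margin = 1.96 * math.sqrt(p * (1 - p) / 2000)

print(level_of_service, response_level, abandon_rate, avg_speed_of_answer)
print(f"correct response rate: {p:.1%} +/- {margin:.1%}")
```

This is only a sketch of the logic; the published measures apply more detailed counting rules than the denominators assumed here.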
Although IRS officials recognize that the measure had limitations, according to them, routing refund status calls to TeleTax allowed IRS to answer about 11.3 million more calls made to its three main toll-free assistance numbers as compared with the 2000 tax filing season. We are continuing to assess many of IRS’ new performance measures, including those used to evaluate telephone assistance.

IRS missed some opportunities to better understand the factors that affected performance and to plan evaluations of the actions it took to improve performance. GPRA and IRS guidance outline the benefits of first gathering and then analyzing data to help managers understand the reasons for performance. To this end, IRS collected performance data on access and conducted some analyses. Even so, IRS missed opportunities to do other analyses of the factors affecting performance, including the actions taken to improve it. Contributing to the missed opportunities was a lack of planning for the evaluation of those actions.

IRS has a variety of systems in place to make data on the access and accuracy of telephone assistance available to managers. Two of these systems are its Joint Operations Center in Atlanta and its Centralized Quality Review System (CQRS) in Philadelphia. Managers at call sites also collect data on factors affecting access and accuracy performance. IRS’ Joint Operations Center in Atlanta manages the activities of the 26 call sites, including monitoring access data and routing calls to the next available assistor anywhere in the country. The Center collects data on various accessibility measures and makes those data available daily to IRS managers through an internal Web site. According to IRS officials, IRS improved the collection of performance data in the 2001 tax filing season. For example, IRS implemented the Enterprise Telephone Database to provide a central call information database.
The database was designed to provide IRS analysts and management with the most accurate information for analysis and program decision-making by centralizing data collection and producing a standard set of management reports. IRS’ CQRS staff in Philadelphia are responsible for collecting data on the accuracy of telephone assistance. CQRS provides call-site officials with daily access through its internal Web site to the results of the sample of calls answered by their sites. It also provides weekly and monthly reports on the quality of sites’ responses to taxpayers’ questions about tax law or about their accounts—two of the accuracy performance measures. These data show the call sites what errors assistors are making so site managers can quickly take action to reduce these errors. IRS officials told us that they made better use of the data in the 2001 filing season. They said that Wage and Investment Division and site officials developed strategies to reduce assistor errors based on CQRS reports.

IRS call sites collect data on factors affecting access and accuracy in various ways. For example, supervisors use real-time data and historical reports available at the call sites on how assistors spent their time, including average handle time—the time an assistor spends talking with the taxpayer, keeping the taxpayer on hold, and finishing the call and indicating readiness to receive another call. Also, local staff monitor calls to provide more detailed information on what errors assistors are making and in what units the errors are being made.

IRS, both at the national and call-site levels, conducted some analyses of performance data intended to determine the factors affecting performance. GPRA and IRS guidance stress that analysis is a key part of understanding performance and identifying improvement options. Analysis of performance data is intended to help managers understand changes in performance, determine root causes, and identify improvement options.
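As a rough illustration of the root-cause arithmetic involved, a change in average handle time translates directly into answering capacity. The workforce and demand figures below are illustrative assumptions, not IRS data, and the model deliberately ignores queuing effects and scheduling:

```python
# Hypothetical workforce and demand figures (assumed, not IRS data).
ASSISTORS = 10_000
PHONE_HOURS_PER_DAY = 6          # assumed productive phone time per assistor
CALLS_OFFERED_PER_DAY = 700_000  # assumed daily calls seeking an assistor

def assistor_level_of_service(avg_handle_time_sec: float) -> float:
    """Share of offered calls the workforce can answer, ignoring queuing."""
    capacity = ASSISTORS * PHONE_HOURS_PER_DAY * 3600 / avg_handle_time_sec
    return min(1.0, capacity / CALLS_OFFERED_PER_DAY)

# A 10 percent increase in average handle time erodes capacity directly.
print(assistor_level_of_service(300))  # baseline: 5-minute average calls
print(assistor_level_of_service(330))  # same demand, 10 percent longer calls
```

Under these assumed numbers, a 10 percent rise in handle time cuts the answerable share of calls by roughly 6 percentage points, which is why analyses of declining productivity focused so heavily on average handle time.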
We identified several examples of analytical efforts to determine the factors affecting performance at both the national and call-site levels. In one example with regard to access, IRS officials analyzed data provided by the Joint Operations Center to determine the reasons for the lower-than-expected level of service in the first 3 months of fiscal year 2001. They concluded that declining assistors’ productivity, as measured by average handle time, was the major reason for the decline in access. Officials from the operating divisions and the call sites conducted a series of assessments to determine the underlying reasons for the increase in average handle time. The assessments included focus groups with managers and employees to solicit their views on productivity and monitoring of telephone calls to determine how assistors use the time between calls. The assessments identified three major categories of factors that had negatively affected average handle time: management practices, work processes, and computer systems. According to IRS officials, some management practices adversely affected the level of service because managers did not take actions to improve assistors’ use of time between calls, the primary factor that increased average handle time. IRS officials said that they took immediate corrective actions, such as briefing assistors and supervisors and eliminating unnecessary data entry and taxpayer notification requirements. They also organized teams to further evaluate and resolve the more complicated work process and computer systems issues. According to IRS officials, IRS improved the analysis of Joint Operations Center performance data in the 2001 tax filing season. For example, analysts studied the factors that affected the demand for live assistance regarding refunds, including the impact of increased electronic filing.
Center analysts also began developing quantitative models of the time taxpayers and IRS spend on telephone questions, with the intent to better match IRS’ resources with taxpayer needs. Regarding accuracy, CQRS staff analyzed assistor errors and made nationwide and individual site suggestions for addressing the causes of the errors. The suggestions included changes to the assistors’ training and guidance. Also, IRS field directors conducted some analyses to determine the factors that affected access and accuracy. For example, one director said analysis staff at one of her sites was doing a study to determine if the site’s extensive use of faxing negatively affected access. She said she believed the site’s average handle time was longer than others owing to the site’s policy to keep the taxpayer on the line until the accounts issue was resolved, even while the taxpayer faxed documents. Another director said he monitored assistors to determine whether the computer-based research tools assistors used to answer taxpayers’ questions met assistors’ needs. Although IRS conducted some analyses of performance data, it missed opportunities to do other analyses at the field level that could have provided a better understanding of the factors affecting telephone assistance performance, including the actions it took to improve performance. Identifying the key factors that most affect performance is important, yet difficult because those factors that can affect telephone access and accuracy are often numerous and interrelated. IRS guidance recognizes that there are a variety of approaches to conducting analyses, such as hypothesis testing, which involves forming a tentative conclusion that is tested using the data. We recognize that some analyses can be costly, but as already noted, GPRA and IRS guidance stress that analysis is a key part of identifying improvement options. 
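A low-cost version of the hypothesis testing described above is a simple two-group comparison of monitored calls. The sketch below uses invented monitored-call samples; the group labels and every figure in them are illustrative assumptions only:

```python
from statistics import mean

# Hypothetical monitored-call samples: (handle_time_seconds, error_flag).
experienced = [(240, 0), (300, 0), (280, 1), (260, 0), (310, 0)]
new_hires = [(360, 1), (420, 0), (400, 1), (390, 0), (450, 1)]

def summarize(calls):
    """Return (average handle time in seconds, error rate) for a sample."""
    times, errors = zip(*calls)
    return mean(times), sum(errors) / len(errors)

for label, group in [("experienced", experienced), ("new hires", new_hires)]:
    aht, error_rate = summarize(group)
    print(f"{label}: average handle time {aht:.0f}s, error rate {error_rate:.0%}")
```

A comparison like this would not by itself establish causation, but it would give managers evidence for or against a tentative conclusion, such as attrition being the key factor, before prioritizing improvement actions.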
Field directors sometimes reached conclusions about the factors affecting access and accuracy without conducting analyses to test their conclusions. Seven of 10 directors said that the relative inexperience of assistors, caused primarily by higher-than-usual attrition, was a key factor affecting performance in the 2001 tax filing season. They said many experienced seasonal assistors had taken permanent positions in other parts of IRS, and the new hires who replaced them tended to take longer on calls and make more errors. The second most common factor they cited was problems with the computer-based research tools that assistors used to answer taxpayers’ questions. Five of 10 directors cited such problems, including difficulties in using the Servicewide Electronic Research Project to search the Internal Revenue Manual, the assistors’ primary guidance for handling calls regarding taxpayers’ accounts. Some directors said the computer systems were cumbersome and difficult to navigate, causing assistors to take longer on calls and make errors, and some said computer systems often failed and thus hampered assistors’ ability to research questions. Although directors cited high attrition and computer problems as key factors affecting performance, only two directors identified a specific analysis to support their conclusions. These directors said that focused monitoring was done at their sites that confirmed the limitations of a computer system assistors used to answer taxpayers’ questions. When we asked other directors whether they or their staff had conducted analyses to confirm or refute their conclusions about the factors that affected performance, they acknowledged that they had not. We identified several opportunities to conduct analyses of performance. 
One way directors could have analyzed the impact of high attrition on access and accuracy would have been to monitor a sample of calls handled by experienced and inexperienced assistors to compare error rates and average handle time. One director acknowledged that her analysis staff could have done more to learn about how accepting additional calls from businesses affected performance, such as comparing the handle time for business taxpayer calls with individual taxpayer calls. In another example, the program manager for the new Accounts Resolution Guide, a computer-based, step-by-step guide on how to resolve an account-related telephone call, agreed that more could have been done to evaluate the guide’s effectiveness. For example, local managers could have observed and compared assistors who used and did not use the guide. According to IRS officials, additional analyses such as these could have been done at relatively low cost. Analyzing performance data can be important for several reasons. First, there can be disagreement about which actions improve performance. For example, some directors cited the Accounts Resolution Guide as a reason for the significant improvement in their accounts quality rate. However, another director said that the guide actually had a negative effect on accounts quality, saying that because the guide was new to some assistors, the “learning curve” to become proficient in using the guide caused assistors to make errors. Second, when multiple factors affect performance, knowing the extent to which each factor has an impact can help managers decide where to focus scarce managerial attention. For example, the solutions for addressing high attrition and computer problems are likely to be different. Understanding the relative importance of high attrition and computer problems could help prioritize improvement actions. 
In addition, in the case of multiple factors, IRS’ use of performance measures to determine the effect of one factor without controlling for other factors can be misleading. In one case, a field director noted the risks of using average handle time as an indicator of the effectiveness of actions taken to improve the productivity of telephone assistors. The director noted that other factors, such as the complexity of calls handled, could also affect average handle time. Third, the interrelationship among factors makes it difficult to determine which factors most affect performance. For example, as we reported last year, the quality of guidance assistors use can affect not only the accuracy but also the accessibility of telephone assistance. Although step-by-step guidance on how to respond to questions would likely improve accuracy, it could also cause assistors to take more time answering calls, thereby negatively affecting taxpayers’ access to service. As we previously reported, conducting systematic analyses of program performance is important for determining the factors affecting performance and identifying opportunities for improvement. IRS guidance states that analysis to understand the underlying factors influencing the performance reflected in the balanced measures is necessary to determine how to improve performance and warns that managers should not “jump to conclusions” about the causes of performance problems. As we said in a report on management reform, “an organization cannot improve performance and customer satisfaction if it does not know what it does that causes current levels of performance and customer satisfaction.” Because the factors affecting telephone performance are numerous and are often interrelated, conducting analyses is essential to determining the factors that have the most effect on performance so that corrective actions can be targeted toward those factors. We recognize that some analysis can be costly. 
Consequently, the costs need to be balanced against the benefits. Considering that IRS devotes significant resources (about 10,000 assistors) to telephone assistance, the benefits of analysis—identifying ways to more effectively use resources and improve service—could be substantial.

IRS missed opportunities to plan evaluations to determine the effectiveness of actions it took to improve the access and accuracy of its telephone assistance. IRS guidance presents a seven-step process designed to guide data collection and analysis to identify ways to improve performance. The last step states that managers should establish a plan that tracks the effectiveness of actions taken to improve performance. Without such a plan, IRS may not collect the data needed to judge the action’s effectiveness. Additionally, planning to collect the data before the improvement action is implemented may be less costly than developing the data and evaluating the action later.

IRS field directors cited several different actions they took to address factors that negatively affected access and accuracy in the prior filing season:

- Assistor skill gaps (the difference between the skills assistors had and the skills needed by IRS). To address skill gaps, field directors most frequently cited training as the action taken, with all 10 directors referring to training as the primary, and most often only, action taken. Although training was designed at the division level, field directors and managers were responsible for implementing it in the field, such as selecting the trainers and determining which assistors need to be trained.
- Errors caused by flaws in the guidance assistors used to respond to taxpayers’ account questions. To address the flaws in assistors’ guidance for answering taxpayer calls, 5 of the 10 field directors cited the implementation and use of computer-based tools to improve guidance, including the Accounts Resolution Guide.
- Declining assistor productivity.
All 10 field directors said that the primary actions taken to address assistor productivity declines were nationwide managers’ training and employee briefings. Although field directors said all three of these actions were key to improving telephone assistance performance this year, none of the field directors cited specific evaluations or plans for assessing the effectiveness of these actions. Instead, field directors based their assessment of actions on performance trends, not taking into account the multiple factors or the interrelatedness of factors that can affect a performance measure. One example of a missed opportunity to plan an evaluation of an improvement action on the national level is the lack of a systematic plan to assess the impact the Accounts Resolution Guide had on access and accuracy. The program manager for the guide said that IRS did not develop such a plan because assistors were not required to use the guide and IRS’ remote monitoring system was unable to determine when the guide was used. The program manager agreed that local managers could have done more to evaluate its effectiveness because they could have observed and compared the results of assistors who did or did not use the guide. Having an evaluation plan when the new Accounts Resolution Guide was distributed to the field would have provided local managers with guidance on the type of data to collect. Another example of a missed opportunity is the lack of evaluation plans in filing season readiness plans. IRS field directors complete a standard plan each year, adding any items unique to their sites, to ensure that the sites have taken all the necessary steps to provide phone assistance in the tax filing season. Such steps include providing appropriate training and having the equipment and guidance assistors need to respond to taxpayer calls. 
The readiness plans we reviewed, however, did not include steps to ensure that sites collect and analyze data to evaluate the effectiveness of any improvement actions. IRS officials noted that since some of the improvement actions were national in scope, field directors would not have been individually responsible for evaluating the effectiveness of the actions. We recognize that evaluations of national improvement actions, such as the Accounts Resolution Guide and managers’ training and employee briefings, to address productivity may involve the higher levels in IRS that are responsible for the action. Accordingly, as noted above, we discussed the Guide with its national program manager and were told that no systematic evaluation was done. We also discussed the actions to address productivity with division-level officials. Similar to the field directors, division officials evaluated the actions by monitoring trends in average handle time and comparing average handle time with previous performance.

IRS made limited progress in the 2001 tax filing season toward its long-term goal of providing world-class customer service. To speed progress toward its long-term goal, IRS managers need to identify the causes for performance, plan strategies to improve performance, and evaluate how well those strategies worked. Unfortunately, IRS sometimes missed opportunities to conduct analysis to help managers understand the reasons for performance and to evaluate actions taken to improve performance. The decision on the type of analysis to be done and who will do it should consider the costs and benefits of the analysis and which organizational levels are most responsible for the factor or improvement action being analyzed. We recognize that some analyses can be costly; however, some of the missed opportunities were low-cost and some involved key factors affecting actions taken to improve performance.
In addition, there are costs in not using the 10,000 assistors as effectively as possible. Considering IRS’ limited progress, it cannot afford to miss opportunities without determining the most effective use of its resources to improve performance. We recommend that the Commissioner ensure that managers follow IRS guidance on analyzing the factors that affect performance and evaluating improvement actions. Specifically, we recommend that (1) field directors be required to develop and follow written plans to collect and analyze data to test their conclusions about the key local factors affecting performance and, when appropriate, evaluate local improvement actions, such as actions involving training; (2) field directors include in filing season readiness plans a step to ensure that site managers have plans to evaluate the effectiveness of any local improvement actions; and (3) program managers and other appropriate national officials be required to develop and follow written plans to evaluate the effectiveness of key national improvement actions, such as the Accounts Resolution Guide. The Commissioner of Internal Revenue provided written comments on a draft of this report in a December 3, 2001, letter, which is reprinted in appendix II. The Commissioner stated that while the report categorized IRS’ progress toward providing world-class telephone assistance as limited, he is confident that IRS is moving in the right direction. He noted that IRS had initiated a number of strategies to improve telephone assistance and agreed with our recommendation. 
Specifically, he agreed that IRS needs “better testing, documentation, and analytical activities to determine the factors that affect performance and assess the results of our improvement actions.” In his comments, the Commissioner noted that the report focuses on accessibility to telephone assistors and stated that IRS also assists taxpayers through automated telephone services and other means, such as IRS’ Internet Web site and walk-in Tax Assistance Centers. As noted in the report, we did not assess IRS’ automated telephone services because IRS’ method of measuring its performance in providing automated services had limitations. The measure assumed that all callers that go through one of its automated systems—TeleTax—were served because the TeleTax system does not have data on how many taxpayers hung up before completing an automated service. Other taxpayer services, such as walk-in assistance, are to be addressed in our upcoming report on various aspects of the 2001 tax filing season. As agreed with your staff, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means and the Ranking Minority Member of the Subcommittee. We will also send copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. If you have any questions or would like additional information, please call me at (202) 512-9110 or Carl Harris at (404) 679-1900. Key contributors to this report are Ronald W. Jones and Ronald J. Heisterkamp.
Congress has long been concerned about the quality of service that taxpayers receive when calling the Internal Revenue Service (IRS) for help in understanding and meeting their tax obligations. IRS has taken steps to improve its responsiveness to the tens of millions of telephone calls it receives each year, from expanding the hours of service to increasing the use of automation. In the 2000 tax filing season, the quality of telephone assistance was mixed and below IRS' long-term goal of providing world-class service. Overall, IRS made limited progress toward its goal of providing world-class telephone service. When compared with the 2000 tax filing season, access and accuracy in 2001 improved in two of six comparable measures, declined in one, and changed two percentage points or less in the others. IRS fell considerably short of its target to reduce the time that taxpayers spend waiting to speak with an assistor. Although assistors exceeded quality-of-service targets when responding to taxpayer questions, they did not meet higher targets for providing correct answers and account adjustments. IRS officials missed opportunities to analyze data to better understand the factors affecting telephone performance, including the actions it took to improve performance. IRS managers sometimes reached conclusions about key factors without conducting analyses to test their conclusions. IRS officials also missed opportunities to plan evaluations to determine the effectiveness of the actions taken to improve access and accuracy.
In light of the federal government’s long-term fiscal challenges, it is critical that agencies can justify the needed resources and develop effective, efficient strategies to achieve their mission. We testified in January 2008 that, while FDA officials had acknowledged that implementing the Food Protection Plan would require additional resources, FDA had not provided specific information on the resources it anticipates the agency will need to implement this plan to improve its oversight of food safety. For example, the Food Protection Plan proposes to enhance FDA’s information technology systems related to both domestic and imported foods, which the Science Board report suggests could cost hundreds of millions of dollars. At that time, FDA officials stated they would provide specific information on how much additional funding would be necessary to implement the Food Protection Plan when the President’s budget was publicly released in the coming weeks. In its fiscal year 2008 budget, FDA received approximately $620 million for food protection, an increase of about $56 million over fiscal year 2007, and directed $48 million of that amount toward implementing the Food Protection Plan, according to FDA. FDA requested approximately $662 million for food safety for fiscal year 2009, an increase of about $42 million over fiscal year 2008. According to the Department of Health and Human Services’ budget justification, FDA plans to direct the $42 million to strategic actions described in its Food Protection Plan. As shown in table 1, the plan outlines spending on all three core elements of the Food Protection Plan––a total of about $21 million for prevention, about $34 million for intervention, and about $23 million for response for fiscal years 2008 and 2009. FDA also reported that, in fiscal year 2008, the agency intends to hire nearly 1,500 full time equivalents (FTE), including approximately 730 to fill vacant positions. 
Of these, 161 will be new FTEs funded by congressional increases dedicated to food safety activities. In addition, in fiscal year 2009, FDA plans to hire 94 new FTEs for food safety activities. Furthermore, in May 2008, FDA’s Commissioner of Food and Drugs provided his professional judgment in response to a congressional request of FDA’s immediate resource needs to implement key initiatives across the core elements of the Food Protection Plan. The Commissioner called for an additional $125 million for food protection in fiscal year 2008 beyond the $48 million that FDA had already allocated for implementing the Food Protection Plan in this fiscal year. According to the Commissioner, this increase will allow FDA to address some of the plan’s strategic actions, such as identifying and targeting the greatest threats from intentional and unintentional contamination and conducting more risk-based inspections. The Commissioner’s assessment also calls for 250 additional FTEs to accomplish the goals of the Food Protection Plan. After the Commissioner provided his assessment of FDA’s resource needs, the Senate passed an Iraq War Supplemental that included an additional $119 million for food safety to be available through fiscal year 2009. In addition, on June 9, 2008, the Department of Health and Human Services announced that the Administration is amending its fiscal year 2009 budget request to include, in part, a $125 million increase for food safety. This amount would add to the $42 million increase originally proposed in the fiscal year 2009 budget justification (see table 1) and appears to be consistent with the Commissioner’s professional judgment response. To accompany this amendment, FDA has posted information on steps it is taking to invest in its transformation in areas such as domestic medical products, import products, and domestic food safety. 
For example, under transforming domestic food safety, FDA reports that it issued final fresh-cut produce guidance to limit contamination of fresh-cut fruits and vegetables. In addition, FDA conducted inspections and took action against processors of low-acid canned foods that were deviating from required standards. Separately, in January 2008, we testified that the Food Protection Plan does not discuss the strategies it needs in the upcoming years to implement this plan. When we asked FDA for more specificity on the strategies for implementing the plan, FDA officials told us that they have internal plans for implementing the Food Protection Plan that detail timelines, staff actions, and specific deliverables. More recently, a senior-level FDA official provided us with an estimate of 5 years for fully implementing the plan. However, FDA has not provided us with timelines for the various strategies described in the plan. For example, under the plan’s strategic action 2.3—to improve the detection of food system “signals” that indicate contamination (see table 1)—FDA has recently identified three additional action steps with deliverables that will be needed to identify, develop, and deploy new screening tools and methods to identify pathogens and other contaminants. However, FDA could not provide us with an estimate of how long it would take to implement these steps or the overall strategic action. Without this type of information, we are not able to assess whether FDA’s estimated 5-year time frame is feasible. Similarly, while FDA’s Food Protection Plan recognizes the need to partner with Congress to obtain 10 additional statutory authorities to transform the safety of the nation’s food supply, FDA’s congressional outreach strategy is general. When we asked FDA officials if they had a congressional outreach strategy, FDA officials told us that they had met with various congressional committees to discuss the Food Protection Plan. 
When asked if they had provided draft language to congressional committees on the various authorities, FDA officials explained that they only provided technical assistance, such as commenting on draft bills, to congressional staff when asked. FDA appears to be refining its implementation plan over time. Most recently, in June 2008, FDA provided us with a draft work plan that it characterizes as a dynamic document that changes on a daily basis to implement the Food Protection Plan. While this draft work plan provides more information on the action steps and deliverables to achieve the core elements, we continue to have concerns about FDA’s lack of specificity on the necessary resources and strategies to fully implement the plan. For example, as part of the plan’s strategic action 1.1—to promote increased corporate responsibility to prevent foodborne illnesses (see table 1)—FDA has identified a goal of analyzing food import trend data and focusing inspections based on risk, and the draft work plan shows six deliverables, such as analysis of import data sets and an import risk ranking, associated with this goal. However, the timelines for these deliverables are unclear. In addition, the agency plans to dedicate a total of $673,000 to this goal in fiscal years 2008 and 2009, and FDA officials told us that the agency considers this funding to be a down payment toward achieving this goal. However, it is unclear what the total cost will be to meet this goal. While the work plan provides some basic information, more specific information, such as estimated resources needed to implement the various strategies—the core elements, goals, and deliverables—as well as the overall plan and time frames for implementing the strategies, is needed to assess FDA’s progress in implementing the plan or in acquiring the resources and authorities it needs. 
Anticipating the cost of the overall plan is important because, while some activities, such as meeting with industry experts to discuss corporate responsibility, may be accomplished within one budget cycle, others, such as the establishment of an FDA field office in China, will likely require a long-term commitment of agency resources. From the information we have obtained on the Food Protection Plan, it is unclear what FDA’s overall resource need is for implementing the plan. The overall resource need could be significant. For example, if FDA were to inspect each of the approximately 65,500 domestic food firms regulated by FDA, at the Commissioner’s May 2008 estimate of $8,000 for a domestic food safety inspection, it would cost approximately $524 million to inspect all of these facilities once. Similarly, if FDA were to inspect each of the 189,000 registered foreign facilities (which include facilities that manufacture, process, pack, or hold foods consumed by Americans) at the Commissioner’s estimated cost of $16,700 per inspection, it would cost FDA approximately $3.16 billion to inspect all of these facilities once. These figures underscore the need for FDA to focus safety inspections based on risk. Ultimately, a results-oriented organization needs to take a long-term view of the goals it wants to accomplish and describe them in a strategic plan. To facilitate congressional oversight, strategic plans should discuss (1) long-term goals and objectives for all major functions; (2) approaches to achieve the goals and objectives, and in particular the required resources including human capital and information technology; (3) a relationship between the long-term goals and the annual performance goals; and (4) an identification of key factors that could significantly affect achievement of the strategic goals. Such discussions in the Food Protection Plan could help clarify FDA’s organizational priorities to the Congress, other stakeholders, and the public. 
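These per-inspection figures lend themselves to a quick arithmetic check. The sketch below simply multiplies the firm counts by the Commissioner's per-inspection cost estimates cited above; no data beyond those figures are assumed:

```python
# Back-of-the-envelope check of the inspection cost estimates cited above.
domestic_firms = 65_500       # approximate FDA-regulated domestic food firms
cost_domestic = 8_000         # Commissioner's May 2008 estimate per domestic inspection ($)
foreign_facilities = 189_000  # registered foreign facilities
cost_foreign = 16_700         # Commissioner's estimate per foreign inspection ($)

domestic_total = domestic_firms * cost_domestic
foreign_total = foreign_facilities * cost_foreign

print(f"One round of domestic inspections: ${domestic_total:,}")  # $524,000,000
print(f"One round of foreign inspections:  ${foreign_total:,}")   # $3,156,300,000
```

The products match the approximately $524 million and $3.16 billion figures cited in the testimony.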
Lastly, when we testified before this subcommittee in January, we reported that FDA planned to keep the public informed of its progress on implementing the Food Protection Plan. In addition, in March 2008, FDA officials indicated that a progress report on actions taken to implement the Food Protection Plan would be issued in April 2008. In May, FDA officials told us that they had prepared a draft progress report, but as of June 4, 2008, FDA had not made this report public. FDA officials told us that the progress report is still being cleared by the Department of Health and Human Services, and they could not provide us with the report until it was cleared by the department. Instead, FDA officials provided us with a broad overview of FDA’s actions and, subsequently, provided us with a list of accomplishments drawn from numerous public documents. For example, FDA issued a Federal Register Notice to solicit stakeholder comments on the implementation of the Food Protection Plan as part of a broad outreach plan. We have noted that public reporting is the means through which the federal government communicates the results of its work to the Congress and the American people. Such reporting is in the public interest and promotes transparency in government operations. While it is important to show what progress has been made, having such information in a consolidated document at a readily accessible location reassures Congress and the public that actions have been taken. The Food Protection Plan identifies the need to focus safety inspections based on risk, which is particularly important as the number of food firms has increased while inspections have decreased. In its Food Protection Plan, FDA has identified some actions to better identify food vulnerabilities and assess risks. 
For example, FDA plans to use enhanced modeling capability, scientific data, and technical expertise to evaluate and prioritize the relative risks of specific food and animal feed agents that may be harmful. According to FDA officials, the agency has assigned a risk-based steering committee to identify models for ranking and prioritizing risk. Conducting inspections based on risk has the potential to be an efficient and effective approach for FDA to target scarce resources, particularly when the number of inspections has not kept pace with the growth in firms between 2001 and 2007. Specifically, while the number of domestic firms under FDA’s jurisdiction increased from about 51,000 to more than 65,500, the number of firms inspected declined slightly, from 14,721 to 14,566. FDA also reported declines in the number of inspections at overseas firms between 2001 and 2007—even as the United States has imported hundreds of thousands of different food products from tens of thousands of foreign food firms in more than 150 countries. Appendix I has information on the number of FDA inspections of food firms in foreign countries from fiscal years 2001 through 2007. FDA has implemented few of our past recommendations to improve food safety oversight. Our recommendations are designed to correct identified problems and improve programs and operations. We have made 34 food safety-related recommendations to FDA since 2004 and, as of May 2008, FDA has implemented 7. FDA has not fully implemented the remaining recommendations, although in some cases it has taken some steps. As shown in table 2, these recommendations fall into two broad categories: improving monitoring and enforcement processes and leveraging resources. The planned activities in the Food Protection Plan could help address several of these recommendations. 
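The firm and inspection counts cited above imply a decline in inspection coverage. A minimal sketch of that calculation follows; the rates are approximate and derived only from the figures in the testimony:

```python
# Share of domestic firms inspected, implied by the counts cited above.
firms_2001, inspected_2001 = 51_000, 14_721
firms_2007, inspected_2007 = 65_500, 14_566

coverage_2001 = inspected_2001 / firms_2001
coverage_2007 = inspected_2007 / firms_2007

print(f"2001 coverage: {coverage_2001:.1%}")  # about 28.9%
print(f"2007 coverage: {coverage_2007:.1%}")  # about 22.2%
```

In other words, even though the absolute number of inspections fell only slightly, the share of firms inspected in a given year dropped by roughly a quarter as the population of regulated firms grew.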
In light of the federal government’s long-term fiscal challenges, agencies, including FDA, need to seek out opportunities to better leverage their resources. We have made 13 recommendations to help FDA better leverage its resources since 2004, and FDA has implemented 4 of them. In a January 2004 report regarding seafood safety, we recommended that, among other things, FDA make it a priority to establish equivalence agreements with other countries. We found that such agreements would shift some of FDA’s oversight burden to foreign governments. FDA did not concur with this recommendation, and, as of May 2008, has not yet established equivalence agreements with any foreign countries. In the same report, we recommended that FDA give priority to taking enforcement actions when violations that pose the most serious health risk occur; consider the costs and benefits of implementing an accreditation program for private laboratories; and explore the potential of implementing a certification program for third-party inspectors. Although FDA concurred with these recommendations and has taken some limited action, such as requesting public comments on the use of third-party certification programs, it has not fully implemented any of them. The Food Protection Plan requests that Congress allow the agency to enter into agreements with exporting countries to certify that foreign producers’ shipments of high-risk products comply with FDA standards. Since 2004, we have made 21 recommendations to FDA to improve monitoring and enforcement processes, and FDA has implemented 3 of them. For example, in October 2004, we recommended that FDA develop a sound methodology for district staff to verify that companies have quickly and effectively carried out recalls. At the time of our review, we found that FDA was not calculating the recovery rate for recalls. 
As a result, the agency did not know how much food was actually recovered, although the agency told us recovery was an important indicator of a successful recall. FDA initially commented that we had not demonstrated that weaknesses in FDA’s recall process resulted in little recovery of food, but as of May 2008, the agency is in the process of conducting a quality management system review of its recall activities and, once the review is completed, it will include recommendations for verifying that a company’s recall was effective, according to FDA. To conclude, FDA’s release of the Food Protection Plan is a positive first step toward modernizing FDA’s approach to food safety to better meet the challenges of an increasingly global food supply and respond to shifting demographics and consumption patterns. Given that FDA’s resources have not kept pace with its increasing responsibilities, FDA’s plan to take a risk-based approach to inspections could help FDA make the most effective and efficient use of its limited resources. However, FDA’s Food Protection Plan can only be as effective as its implementation, and without specificity on the resources and strategies needed to fully implement the plan—and in the absence of public reporting—neither Congress nor the public can gauge the plan’s progress or assess its likelihood of success in achieving its intended results. In addition, no one is better positioned than FDA to identify the resources and authorities needed to implement the plan; its failure to provide such information therefore calls its capacity into question. Meanwhile, as foodborne illness outbreaks continue, FDA is missing valuable opportunities to reassure Congress and the public that it is doing all it can to protect the nation’s food supply. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Director, Natural Resources and Environment at (202) 512-3841 or [email protected]. Key contributors to this statement were José Alfredo Gómez, Assistant Director; Kevin Bray; Candace Carpenter; Alison Gerry Grantham; Thomas McCabe; Alison O’Neill; and Barbara Patterson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Food and Drug Administration (FDA) is responsible for ensuring the safety of roughly 80 percent of the U.S. food supply, including $417 billion worth of domestic food and $49 billion in imported food annually. Changing demographics and consumption patterns, along with an increase in imports, have presented challenges to FDA. At the same time, recent outbreaks, such as E. coli from spinach and Salmonella from tomatoes, have undermined consumer confidence in the safety of the food supply. In November 2007, FDA released its Food Protection Plan, which articulates a framework for improving food safety oversight. In January 2008, GAO expressed concerns about FDA's capacity to implement the Food Protection Plan and noted that more specific information about the strategies and resources needed to implement the plan would facilitate congressional oversight. This testimony focuses on (1) FDA's progress in implementing the Food Protection Plan, (2) FDA's proposal to focus inspections based on risk, and (3) FDA's implementation of previously issued GAO recommendations intended to improve food safety oversight. To address these issues, GAO reviewed FDA documents, such as FDA's operations plan, and FDA data related to the plan. GAO also interviewed FDA officials regarding the progress made, analyzed FDA data on domestic and foreign food firm inspections, and reviewed the status of past recommendations. Since FDA's Food Protection Plan was first released in November 2007, FDA has added few details on the resources and strategies required to implement the plan. FDA plans to spend about $90 million over fiscal years 2008 and 2009 to implement several key actions, such as identifying food vulnerabilities and risk. From the information GAO has obtained on the Food Protection Plan, however, it is unclear what FDA's overall resource need is for implementing the plan, which could be significant. 
For example, based on FDA estimates, if FDA were to inspect each of the approximately 65,500 domestic food firms regulated by FDA once, the total cost would be approximately $524 million. In addition, timelines for implementing the various strategies in the plan are unclear, although a senior-level FDA official estimated that the overall plan will take 5 years to complete. Importantly, GAO has noted that public reporting is the means through which the federal government communicates the results of its work to the Congress and the American people. FDA officials told GAO that they had prepared a draft report on progress made in implementing the Food Protection Plan, but as of June 4, 2008, FDA told GAO that the Department of Health and Human Services had not cleared the report for release. The Food Protection Plan identifies the need to focus safety inspections based on risk, which is particularly important as the number of food firms has increased while inspections have decreased. For example, between 2001 and 2007, the number of domestic firms under FDA's jurisdiction increased from about 51,000 to more than 65,500, while the number of firms inspected declined slightly, from 14,721 to 14,566. Thus, conducting safety inspections based on risk has the potential to be an efficient and effective approach for FDA to target scarce resources based on relative vulnerability and risk. FDA has implemented few of GAO's past recommendations to leverage its resources and improve food safety oversight. Since 2004, GAO has made a total of 34 food safety-related recommendations to FDA, and as of May 2008, FDA has implemented 7 of these recommendations. FDA has not fully implemented the remaining recommendations, although in some cases it has taken some steps. However, the planned activities in the Food Protection Plan could help address several of the recommendations that FDA has not implemented. 
For example, in January 2004, GAO recommended that FDA make it a priority to establish equivalence agreements with other countries. GAO found that such agreements would shift some of FDA's oversight burden to foreign governments. As of May 2008, FDA has not yet established equivalence agreements with any foreign countries. The Food Protection Plan requests that Congress allow the agency to enter into agreements with exporting countries to certify that foreign producers' shipments of designated high-risk products comply with FDA standards.
TSA is responsible for administering background checks—known as security threat assessments—for maritime, surface, and aviation transportation security programs that have vetted approximately 15 million applicants since 2003, according to TSA officials. Security threat assessments are designed to ensure that only eligible individuals are granted TSA-related credentials, such as a TWIC. Specifically, security threat assessments focus on identifying threats posed by individuals seeking to obtain an endorsement, credential, access, and/or privilege for, among other purposes, unescorted access to secure or restricted areas of transportation facilities at maritime ports and TSA-regulated airports, and for commercial drivers transporting hazardous materials. Implementing these programs is a shared responsibility among multiple TSA offices, including the OIA Program Management Division, which manages the programs, and the Adjudication Center within OLE/FAMS, which serves as the primary operational component for conducting security threat assessments for 12 of TSA’s 17 aviation, maritime, and surface transportation credentialing programs—with the TWIC, HME, and Aviation Worker programs accounting for a reported 95 percent of the Adjudication Center’s workload. (See appendix I for a TSA organization chart showing TSA offices responsible for implementing transportation security threat assessment programs.) The security threat assessment process includes reviewing information to determine if applicants are disqualified from possessing a credential based on criminal offenses, immigration status, or a link to terrorism. The security threat assessment involves two key components. Automated watchlist and related vetting: The initial automated vetting process is conducted to determine whether any derogatory information is associated with the name and fingerprints submitted by an applicant during the enrollment process. 
Among the checks conducted by TSA, one is against criminal history records maintained by or available through the Federal Bureau of Investigation (FBI). These records contain information from federal, state, and local sources in the FBI’s National Crime Information Center database and the FBI’s Integrated Automated Fingerprint Identification System/Interstate Identification Index, which maintain criminal records and related fingerprint submissions. A check is also conducted against the Terrorist Screening Database, which is the federal government’s consolidated terrorist watchlist and from which the Selectee and No-Fly lists, among others, are compiled. To determine an applicant’s immigration/citizenship status, applicant information is checked against the Systematic Alien Verification for Entitlements system. If the applicant is a U.S.-born citizen with no related derogatory information, the system can approve the individual’s application for a credential with no further review of the applicant or human intervention. Adjudication Center review: A manual, second-level review is conducted as part of an individual’s security threat assessment if (1) the automated vetting uncovers any derogatory information, such as a criminal offense, or (2) the applicant has identified himself or herself to be a non-U.S.-born citizen or national. As such, not all applicants will be subjected to a second-level review. The Adjudication Center plays an integral role in the security threat assessment process by adjudicating cases for which an initial automated check finds potential links to criminal history or immigration eligibility issues. Adjudication Center staff review the program applicant’s enrollment file to determine if derogatory or other information may be potentially disqualifying. The applicant’s files are processed from credentialing program enrollment centers through two web-enabled case management systems, called the Screening Gateway and Consolidated Screening Gateway. 
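The two-tier process described above reduces to a simple routing rule: automated vetting alone can approve a U.S.-born applicant with no derogatory hits, and anything else goes to a manual second-level review. A minimal sketch follows; the function and field names are hypothetical, not TSA's:

```python
def route_application(derogatory_info_found: bool, us_born_citizen: bool) -> str:
    """Hypothetical sketch of the two-tier security threat assessment routing."""
    if not derogatory_info_found and us_born_citizen:
        # Automated vetting alone suffices; no human intervention needed.
        return "auto-approved"
    # Any derogatory hit, or a non-U.S.-born applicant, triggers manual
    # second-level review by the Adjudication Center.
    return "adjudication center review"

print(route_application(derogatory_info_found=False, us_born_citizen=True))   # auto-approved
print(route_application(derogatory_info_found=True, us_born_citizen=True))    # adjudication center review
print(route_application(derogatory_info_found=False, us_born_citizen=False))  # adjudication center review
```

The sketch makes the asymmetry explicit: only one of the four input combinations bypasses human review, which is why not all applicants are subjected to a second-level review.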
Adjudication Center staff use the Screening Gateways as their tool for gathering, viewing, and synthesizing the information needed to conduct security threat assessments. Since its establishment in 2005, the Adjudication Center has relied primarily upon contractor staff to complete its security threat assessment workload, and a smaller number of federal government staff to conduct oversight and other functions. Contractor staff performs initial adjudication of cases, and may either approve applications if they determine an applicant is eligible to obtain a credential or refer the application to a federal (that is, TSA) adjudicator for further review if they determine the applicant to be ineligible. Federal staff review cases of potential ineligibility, issue Preliminary Determination of Ineligibility letters to applicants, and conduct redress actions, among other things. As of May 2013, TSA reported that about two-thirds (37 of 55) of Adjudication Center staff were contractors. Figure 1 shows the TSA credentialing process for the TWIC, HME, and Aviation Worker programs from enrollment through credential issuance, and the functions of the Adjudication Center’s TSA and contract staff in the security threat assessment process. Federal agencies face a complicated set of decisions in finding the right mix of government and contractor personnel to conduct their missions. While contractors, when properly used, can play an important role in helping agencies accomplish their missions, our prior work has shown that agencies face challenges with increased reliance on contractors to perform core agency missions. 
Consistent with Office of Management and Budget procurement policy, agencies should provide a greater degree of scrutiny when contracting for professional and management support, program evaluation, and other services that can affect the government’s decision-making authority—functions that may be considered closely associated with inherently governmental functions. Contractors can provide services that closely support inherently governmental functions, but agencies must provide greater scrutiny and enhanced management oversight to ensure that the contractors’ work does not limit the authority, accountability, and responsibilities of government employees. The DHS BWS refers to the department’s effort to identify the appropriate balance of federal and contractor employees required to support critical agency functions. Consistent with our recommendations and in accordance with the Omnibus Appropriations Act, 2009, DHS adopted the BWS in August 2010 to undertake risk analyses that are to enable the department to achieve the appropriate mix of federal employees and contractors to accomplish its mission while minimizing the mission risk that may result from an over-reliance on contractors. DHS uses an automated tool to help components—such as TSA—perform the necessary analysis to categorize work as appropriate for use of a contractor, inherently governmental, or closely associated with an inherently governmental function. The assessment tool is intended to facilitate an assessment of mission risk, the level of contractor oversight needed, risk mitigation strategies, and cost analysis. Based on component responses, the tool is to provide a recommended sourcing decision on whether the work is appropriate for federal or contractor performance, or both.
For example, should the BWS assessment find that a function is inherently governmental, the component would recommend the function be insourced to government employees, whereas a determination that the function was closely associated with an inherently governmental function would require the agency to either insource the function (also known as federalizing) or strengthen oversight of the contractor workforce. In December 2011, we reported that the Adjudication Center had faced recurring challenges in meeting its security threat assessment workload requirements and largely attributed these challenges to its reliance on a contractor workforce. Specifically, the Adjudication Center had experienced recurring backlogs in completing its caseload, and Adjudication Center officials attributed these backlogs to staffing limitations caused by contractor turnover. Officials reported at the time that the Adjudication Center had used three different contractors since its establishment in 2005, and on each occasion contract adjudicator turnover had led to backlogs as new adjudicators were hired and trained. TSA reported that it did not consider the risks of acquiring contractor support services to provide adjudication services before awarding its first contract in 2005. Rather, TSA reported that it chose to use contract adjudicators when the Adjudication Center was created because, at the time, it considered them the most readily available workforce and an effective way to augment federal staff with skilled resources. TSA reported that the agency had initiated an assessment in March 2011 through the DHS BWS process to determine whether the adjudication functions were appropriate to be performed by a contractor workforce, whether the work was inherently governmental, and whether there would be cost savings resulting from conversion of the contract positions to government positions.
We recommended that TSA develop a workforce staffing plan with timelines articulating how the Adjudication Center will effectively and efficiently meet its current and emerging workload requirements, and incorporate the results of TSA’s study examining the appropriateness, costs, and benefits of using contractors. TSA concurred with our recommendation and reported that it had begun taking steps to implement it. TSA has evaluated the Adjudication Center largely based on contractor performance in meeting established metrics, and the data show mixed performance since 2011; however, the Adjudication Center’s performance measures and practices are limited. We found that the Adjudication Center contractor met two of its three performance measures—for timeliness and accuracy—but did not do so for its caseload size measure. Further, these measures and practices were limited. For example, the Adjudication Center’s methodology for calculating contract adjudicator accuracy was limited because it did not include key information. Moreover, the Adjudication Center has not documented key elements of its performance measurement practices. TSA has used performance data for three primary metrics to measure the performance of the Adjudication Center in conducting security threat assessments for the TWIC, HME, and Aviation Worker programs. The three metrics are timeliness in completing initial adjudication, caseload size, and adjudication accuracy. According to TSA Adjudication Center officials, these performance measures were established to evaluate the performance-based contract for adjudication services at the Adjudication Center. Timeliness. The Adjudication Center contractor met timeliness standards for completing initial adjudication of its TWIC, HME, and Aviation Worker caseloads (see figure 1 for a description of this process).
TSA requires that its contract adjudicator workforce complete initial adjudication of 95 percent of cases within 7 calendar days of the case entering TSA’s Screening Gateway case management systems for TWIC, HME, and Aviation Worker cases. According to TSA data, from August 2011 to January 2013, the adjudicator workforce met this standard for TWIC, HME, and Aviation Worker cases. While the Adjudication Center’s timeliness measure shows that the Adjudication Center’s contractor met TSA’s standard for completing initial adjudication, the measure does not show the extent to which the agency has communicated its adjudication decision to the applicant in a timely manner—a key statutory and TSA policy requirement for its credentialing programs—and TSA officials reported they did not maintain such documentation. For example, as specified in statute, TSA shall review an initial TWIC application and provide a response to the applicant, as appropriate, within 30 days of receiving the initial application. Moreover, officials with the OIA Program Management Division and the Adjudication Center reported that TSA had established internal requirements for the agency to meet 30-day and 14-day applicant response times for HME and Aviation Worker applicants. Officials from the OIA Program Management Division reported tracking this measure through weekly Adjudication Center performance reports and identifying and addressing those cases that do not meet applicant response time standards. However, officials reported that they did not maintain documentation showing the extent to which TSA had responded to applicants within the required time frames. Officials reported that maintaining such performance data would be of use, but noted it was rare that they did not meet their initial adjudication standards and respond to applicants within established applicant response times.
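The distinction between the two timeliness questions above can be illustrated with a small calculation on invented figures: a workforce can meet the 7-day initial adjudication standard on every case while compliance with the separate 30-day applicant response window goes unmeasured. All numbers below are hypothetical.

```python
# Hypothetical figures illustrating the two distinct timeliness questions
# discussed above: the contract standard (95 percent of cases initially
# adjudicated within 7 calendar days) versus the separate applicant response
# window (e.g., 30 days for TWIC), which the contract measure does not capture.

cases = [
    # (days to initial adjudication, days until applicant response)
    (2, 12), (5, 25), (6, 33), (7, 28), (3, 45), (1, 9),
]

adjudicated_on_time = sum(1 for adj, _ in cases if adj <= 7) / len(cases)
responded_on_time = sum(1 for _, resp in cases if resp <= 30) / len(cases)

print(f"initial adjudication within 7 days: {adjudicated_on_time:.0%}")
print(f"applicant response within 30 days: {responded_on_time:.0%}")
```

In this invented example the workforce meets the 7-day adjudication standard for every case, while only 4 of 6 applicants were notified within 30 days, which is precisely the kind of gap the current measure would not reveal.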
Officials noted that functional limitations in TSA’s Screening Gateway reporting system limit their ability to efficiently run reports showing the extent to which TSA responds to applicants within required time frames. A senior OIA Program Management Division official reported it was her understanding that the division would have to obtain the capability to automatically run applicant response time reports from TSA’s Technology Infrastructure Modernization program, known as TIM. However, we reviewed TIM program documentation and did not find this data management capability requirement in TIM planning documents. We raised this issue with TSA TIM program officials, and in response to our inquiry, in May 2013, the TIM program added documentation of this requirement to its plans and reported that the capability would be available to the Adjudication Center beginning in March 2014 for TWIC program cases and by 2016 for surface and aviation program cases. Caseload size. The Adjudication Center generally did not meet its contract caseload performance standards and experienced backlogs for its TWIC and HME program caseloads the majority of the time between October 2010 and January 2013. According to TSA contractor evaluation and performance reports, the Adjudication Center requires its contract workforce to maintain a total number of new TWIC, HME, and Aviation Worker cases at or below 1,500 cases—and Adjudication Center officials told us that a caseload above this threshold was considered a backlog. Adjudication Center data we reviewed for the period of October 2010 through January 2013 showed that the Adjudication Center had a backlog of HME cases approximately 60 percent of the time and TWIC cases approximately 61 percent of the time. In addition, the Adjudication Center had a backlog of Aviation Worker cases approximately 15 percent of the time from October 2010 through March 2012.
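How such backlog percentages can be derived from weekly caseload data is sketched below, using the 1,500-case contract threshold described above. The weekly figures themselves are invented for illustration.

```python
# Hypothetical weekly caseload figures illustrating how backlog percentages
# of the kind cited above can be derived: a week counts as backlogged when
# the pending caseload exceeds the 1,500-case contract threshold.

BACKLOG_THRESHOLD = 1500

weekly_caseloads = [900, 1200, 1600, 2400, 4100, 3800, 1450, 1700, 5200, 1300]

backlogged_weeks = [c for c in weekly_caseloads if c > BACKLOG_THRESHOLD]
share_backlogged = len(backlogged_weeks) / len(weekly_caseloads)

print(f"weeks backlogged: {len(backlogged_weeks)} of {len(weekly_caseloads)} "
      f"({share_backlogged:.0%})")
```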
Moreover, many of these backlogs were far higher than the Adjudication Center’s 1,500-case standard. For example, the Adjudication Center had a backlog of more than 4,000 HME cases roughly 16 percent of the time (20 of 122 weeks) during this period. Figure 2 shows Adjudication Center caseload levels for TWIC, HME, and Aviation Worker cases from October 2010 through January 2013. According to Adjudication Center officials and TSA documentation we reviewed, technical issues and a lack of sufficiently trained contract adjudicators contributed to the workload backlogs at the Adjudication Center. First, the Adjudication Center operations manager reported that technical problems with its case reporting systems had contributed both to challenges in assessing workload backlogs and, in some cases, to growth in the backlog itself. For example, the Screening Gateway systems, which the Adjudication Center relies on for processing applicant cases and communicating results to TSA enrollment centers, have experienced periodic technical errors that have delayed the Adjudication Center’s ability to process new cases. According to TSA evaluation and performance reports we reviewed, between February 2012 and August 2012, TSA was unable to evaluate contractor performance in meeting its workload on several occasions, including one period of approximately 3 months, because of technical problems with its case management systems. Adjudication Center officials reported that TSA’s Office of Technology was pursuing a fix for the technical errors, expected by May 2012; however, as of May 2013, the errors had not been corrected. They also reported that TSA plans to replace this system with a more functional system through its TIM program, but as noted earlier, according to TSA’s schedule for the program and TSA officials, this system is not scheduled to be fully operational until 2016.
In addition, since April 2012 TSA has experienced technical problems related to the Designated Aviation Channeler program that TSA uses to process Aviation Worker program cases into the Screening Gateway systems. According to TSA officials, technical problems with one of its vendors were delaying the processing of cases and returning previously adjudicated cases to the Adjudication Center’s new caseload queue without distinguishing between the two sets of cases. As a result, processing was delayed and Adjudication Center management was unable to determine the true extent of its new caseload. TSA officials responsible for managing the Designated Aviation Channeler program reported that they had been in discussions with the vendor since April 2012 to address the technical processing issues, and as of May 2013, the vendor was in the process of implementing corrective actions. Another factor contributing to growth in the workload backlog, according to TSA Adjudication Center management officials and a contractor performance report we reviewed, has been the lack of trained adjudicators provided by the contractor. According to a senior Adjudication Center official, the contractor lacked a sufficient number of staff who had been certified as self-approvers, and this had required the Center’s limited federal staff to assume additional responsibilities and reduced the Center’s progress in meeting its caseload. Adjudication Center officials reported that they were working with the contractor to address this issue. We discuss the Adjudication Center’s contractor-related staffing issues, and actions to address them, in more detail later in this report. Accuracy. For its third key performance measure, TSA requires its contract adjudicators to maintain an average accuracy rate of at least 95 percent. According to Adjudication Center data, from August 2011 to December 2012, the Adjudication Center’s contract workforce met TSA’s accuracy standard for the TWIC, HME, and Aviation Worker programs.
However, the accuracy rate is not a complete representation of contract adjudicator accuracy because it does not include evaluation of a key population of cases. According to Adjudication Center officials, the Adjudication Center’s average accuracy rate is generally based on error rates identified from a daily review of all cases in which adjudicators found an applicant was disqualified but reviewers found the applicant should not have been (i.e., incorrectly disqualified). However, according to officials, this calculation generally does not include those cases in which adjudicators had approved applicants but reviewers found they should have been disqualified (i.e., incorrectly approved). For example, according to our analysis of TSA data, approvals comprised over 90 percent of the Adjudication Center’s TWIC and HME caseload from August 2011 to January 2013—and TSA reviewed roughly 7 percent of these approvals. In this way, the average accuracy rate TSA uses to evaluate the performance of its contractor is incomplete and limited because it does not include the extent to which contract adjudicators incorrectly approved applicants. The Adjudication Center official responsible for reporting the accuracy rate told us that the accuracy rate of the contract workforce includes only those cases that were incorrectly found to have disqualifying factors because that is how the contract evaluation standards were established. The official noted that the Adjudication Center’s processes included a review of all trainee adjudicators’ approved cases and a separate quality assurance review process to spot-check approved cases to identify errors among all adjudicators who are certified to approve cases without further review.
However, the official reported that these performance measurement practices were not documented and that a lack of staffing capacity had limited the extent to which the Adjudication Center conducted the quality assurance spot checks—with the Center meeting only about two-thirds of its 10 percent goal for the number of cases selected for spot checking. Nonetheless, the results of this quality assurance review are not factored into the rate TSA uses to measure contractor accuracy performance and award funds to its contractor. Standards for Internal Control in the Federal Government specifies the need to comprehensively identify risks and consider all significant interactions. Once risks have been identified, they should be analyzed for possible effect. Moreover, internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for review. The overall accuracy rate calculated by the Adjudication Center is generally limited to incorrectly disqualified cases and does not include incorrectly approved cases. As a result, TSA does not have a representative assessment of the Adjudication Center’s average accuracy rate. If error rates for approved cases were included in the evaluation, the Adjudication Center’s average accuracy rate may ultimately prove higher or lower than currently reported—but this will remain unclear until the Adjudication Center captures this information in its accuracy rate. Determining the performance of the workforce in adjudicating security threat assessments for this population is important for overseeing adjudicator performance and identifying cases where the Adjudication Center is incorrectly approving applicants.
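The gap described above can be made concrete with a small calculation comparing the disqualification-only rate to a rate that also counts incorrectly approved cases identified in review. All case counts below are invented for illustration.

```python
# Hypothetical case counts comparing the disqualification-only accuracy rate
# described above with a rate that also counts incorrectly approved cases
# identified in review; all figures are invented for illustration.

reviewed_disqualifications = 100  # all disqualified cases receive daily review
wrongly_disqualified = 3
spot_checked_approvals = 700      # only a sample of approvals is spot-checked
wrongly_approved = 35

# Rate as currently calculated: disqualification errors only.
current_rate = 1 - wrongly_disqualified / reviewed_disqualifications

# A more complete rate over all reviewed cases of both kinds.
total_reviewed = reviewed_disqualifications + spot_checked_approvals
total_errors = wrongly_disqualified + wrongly_approved
complete_rate = 1 - total_errors / total_reviewed

print(f"disqualification-only accuracy: {current_rate:.1%}")
print(f"accuracy including reviewed approvals: {complete_rate:.1%}")
```

In this invented example, counting approval errors pulls the measured rate below the 95 percent standard; with real data the complete rate could move in either direction, as the report notes.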
By developing and documenting an accuracy rate measure that includes data on both types of incorrectly adjudicated cases (approved and disqualified), the Adjudication Center can determine an accuracy rate that comprehensively captures accuracy performance and enables Adjudication Center management to more effectively identify and address performance issues among its workforce. Adjudication Center management officials told us that, since the Center began operations in 2005, they have used a complex, manual process to track the performance of its contract adjudicator workforce. In particular, because of functional limitations of TSA’s Screening Gateway systems, officials reported that the Adjudication Center lacks an automated process for tracking adjudicator performance on the estimated 7,500 to 10,000 security threat assessment cases that adjudicators process each week. As a result, Adjudication Center management has used a cumbersome, manual process to track the case production and performance of its contract adjudicator workforce. For example, each week, one Adjudication Center official is responsible for reviewing contractor-reported caseload information, compiling spreadsheets summarizing contractor performance, verifying and reconciling the information with the contractor, and preparing weekly summary reports for distribution to TSA credentialing program stakeholders. Adjudication Center management told us that it has used these reports to measure Adjudication Center performance and support oversight of its contract adjudicator workforce. The manual process exists because, according to Adjudication Center officials, TSA’s Screening Gateway case management systems were not designed to meet the Adjudication Center’s functional requirements for tracking contractor operational performance, and TSA has been unsuccessful to date in developing a technical solution to do so.
TSA officials recognized that the system did not meet the needs of the Adjudication Center and reported that the agency’s TIM program would replace the Screening Gateway systems and enable the Adjudication Center to automate its case tracking and performance reporting requirements. However, as noted earlier, TSA officials reported that this new system is not scheduled to be fully operational until 2016. In the meantime, Adjudication Center management officials reported that they had not documented the manual process currently in use. Adjudication Center management officials told us that they had placed some information on an internal web sharing system in the past, but that this information was neither thorough nor updated to reflect the case management reporting system that the Adjudication Center has used since 2010—when TSA began its most recent contract for Adjudication Center staff. According to Adjudication Center officials, time constraints in meeting the Adjudication Center’s workload of security threat assessments had prevented the Operations Manager from updating or developing new documentation of the procedures in recent years. Further, given the complexity of the process and the fact that two officials were familiar with it, a senior Adjudication Center management official said that documenting this process would be of value should the two officials be unavailable. Standards for Internal Control in the Federal Government specifies the need for appropriate documentation of transactions and internal control. Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for review. The documentation should be included in directives, policies, or manuals to help ensure operations are carried out as intended.
Documenting the Adjudication Center’s case reporting performance measurement practices is important to allow someone unfamiliar with this process to assume responsibilities in the event of attrition among the Adjudication Center managers. This is particularly important considering the complexity of the Adjudication Center’s case performance reporting process and TSA’s need to ensure effective performance and operational continuity in its security threat assessments. Implementing credentialing-related programs is a shared responsibility between the Program Management Division in TSA’s OIA and the Adjudication Center in OLE/FAMS. Officials from these offices reported taking various actions to ensure that their offices coordinate information related to security threat assessment adjudication workload planning and performance. These include:

Sharing weekly Adjudication Center performance reports: These reports include information for the TWIC, Aviation Worker, and HME programs, such as the number of cases the Adjudication Center received for each of these programs during the prior week, the number of cases ready for adjudication, and the number of applicants who have sought redress based on initial determinations of ineligibility. The three program managers for TSA’s maritime, aviation, and surface credentialing programs reported that they rely on these reports to ensure the program offices are meeting workload demands for the various credentialing programs TSA supports, and to identify and develop strategies to address performance challenges.

Convening monthly program management review meetings: These meetings are used to share information relating to changes that may affect the Adjudication Center’s workload. As part of these meetings, the Adjudication Center contractor provides a monthly report detailing contractor staffing levels and changes, training status, contractor accuracy rates, and challenges in need of resolution.
Developing spend plans: Adjudication Center and OIA Program Management Division officials meet to develop a spend plan to support the credentialing programs’ annual budgets and discuss population projections for the programs that would affect the Adjudication Center’s workload. For example, the Aviation Worker program manager reported that the workload had increased by about 5 percent annually, and that this information was used to inform the Adjudication Center’s spend plan.

Notwithstanding these actions, opportunities exist for the Adjudication Center and the OIA Program Management Division to strengthen their coordination. While officials with the two offices coordinate on a routine basis to share information on workload completion, they do not have a process in place to ensure that information in the Adjudication Center’s staffing plan—such as caseload projections and associated staffing needs—reflects the mutual understanding of both Adjudication Center and credentialing program management officials. For instance, Adjudication Center management officials have periodically updated a staffing plan that they use to guide Adjudication Center workforce planning. However, an Adjudication Center program management official reported that while the staffing plan had been shared with credentialing program managers in the past, it had not been shared in recent years. He said that a prior version of the plan had been shared with the OIA Program Management Division to communicate staffing needs, and that sharing updated versions with the division may be valuable for guiding workforce planning decisions. OIA Program Management Division officials reported that they were unfamiliar with the Adjudication Center’s staffing plan and questioned workload projections in the Adjudication Center’s current staffing plan.
For example, the current Adjudication Center staffing plan cites an anticipated regulation addressing the security threat assessment process that, according to the plan, would double the Adjudication Center’s security threat assessment workload from 500,000 to 1 million cases per year by the end of fiscal year 2014 and triple the workload by the end of fiscal year 2015. In October 2012, we shared this staffing plan with the OIA Program Management Division manager responsible for aviation programs, and that official questioned the accuracy of the Aviation Worker workload increase projections in the staffing plan. The official said that TSA had yet to issue this regulation and that the timeline for doing so would take longer than officials had initially planned. Thus, the projected workload increases in the Adjudication Center’s staffing plan would not be realized, and the plan would need to be revised. However, as of March 2013, the Adjudication Center’s staffing plan had not been updated. According to key collaboration practices that we have identified, federal agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. Reporting on these activities can help key decision makers within the agencies, as well as clients and stakeholders, obtain feedback for improving both policy and operational effectiveness. Such reporting mechanisms can then be used to modify plans. Moreover, a focus on results, as envisioned by the Government Performance and Results Act, as amended, implies that federal programs contributing to the same or similar results should collaborate to ensure that goals are consistent and, as appropriate, that program efforts are mutually reinforcing.
Establishing a mechanism for OIA Program Management Division and Adjudication Center officials to share and reconcile information included in the Adjudication Center’s staffing plan updates, such as timelines for anticipated workload growth, would thus help ensure that both components have access to each other’s workforce planning documents and that TSA is using accurate workload projections to guide the Adjudication Center’s workforce planning.

Between January 2011 and September 2011, TSA conducted and completed its DHS-required BWS assessment for the Adjudication Center contract and determined that the adjudicator position represented work that is “closely associated with inherently governmental functions” and that allowing contractors to make security credential approvals without sufficient federal oversight poses excessive risk. According to the assessment, the adjudicator functions performed by the contractor are critical to TSA’s accomplishment of the security threat assessment process to ensure terrorist and other security threats are identified and prevented from gaining credentialed access to critical U.S. transportation system infrastructure. The assessment found that TSA was reliant upon contractors for making decisions regarding criminal history and immigration status for a majority of applicants and that, if contractors were to continue performing the adjudicator function, the government would need to provide continuous and substantive oversight of them to ensure successful performance. However, the assessment found that the Adjudication Center did not have an effective oversight process in place to do so—noting that federal government staffing at the Adjudication Center is not sufficient to adequately oversee contractor case processing for quality control, as contract staff have independent decision-making ability on the majority of cases.
Further, the assessment noted that the Adjudication Center’s use of a mixed contractor and government workforce was inefficient. For example, according to the assessment, for every contractor work hour, a federal government employee must check that work, and this had led federal government staff to work more than 2,500 overtime and compensation hours over the preceding year—an inefficient and duplicative process that would not be necessary if the workforce were all federal government officers. In light of these factors, in October 2011, TSA’s BWS Departmental Working Group determined that the adjudicator function was closely associated with inherently governmental functions and recommended that TSA end the Adjudication Center’s reliance on a contract workforce and convert to an all-federal-employee workforce. The working group reported that doing so was designed to improve the Adjudication Center’s security threat assessment processing by providing a better oversight process, streamlining overall operations, reducing training requirements, and better managing resources. TSA has been delayed in implementing the proposed workforce conversion at the Adjudication Center. According to TSA’s May 2012 Adjudication Center conversion plan, TSA offices were to take several actions before the Adjudication Center could begin implementing its conversion plan and hiring a new federal employee workforce; however, these steps have generally not been implemented. For instance, as of May 2013, responsible stakeholders in TSA’s BWS effort, including the Office of Human Capital and OLE/FAMS, reportedly had not approved the plan—necessary steps before the plan can proceed to TSA leadership and, ultimately, DHS for review. According to the conversion plan, TSA proposed to convert to a government adjudicator workforce by hiring TSA employees during fiscal years 2013 and 2014—with the hiring to be completed by the end of calendar year 2013.
However, as of May 2013, TSA had not begun hiring its new federal workforce, and TSA officials reported that the agency had not determined new timelines for doing so. TSA officials attributed the agency’s delays in implementing the Adjudication Center conversion plan to its prioritization of agency reorganization efforts. According to a senior TSA official, implementing the BWS assessment was delayed because TSA was undergoing a large reorganization and agency resources were prioritized to that effort. With this reorganization completed in January 2013, the official reported that implementing the conversion plan would become a greater priority. TSA’s delay in acting on the recommendation to convert the Adjudication Center workforce has rendered the implementation timelines and key hiring-level and cost information in its May 2012 conversion plan outdated or unclear, and TSA has not updated the plan to reflect these changes. In particular, TSA’s plan to convert to an all-federal Adjudication Center workforce has not been updated even though information in the plan, such as the timeline for hiring federal employees and cost information, is no longer valid or is unclear. For example: The implementation schedule in TSA’s plan is no longer valid. TSA officials responsible for managing the conversion effort acknowledged that the timelines for implementing the plan had slipped and that TSA would not complete its workforce conversion by the end of calendar year 2013 as proposed in the plan. TSA officials reported that determining a revised schedule for the Adjudication Center conversion was dependent on various factors, such as when responsible TSA offices completed their respective reviews and when OLE/FAMS approved the plan and sent it to TSA leadership for review. TSA officials reported that TSA did not have timelines for when this would occur.
Implementing the insourcing plan may present a cost-saving opportunity for TSA, but TSA is unclear on the extent of those savings. According to an August 2012 OLE/FAMS memorandum, converting from contractor to federal employees at the Adjudication Center would save the federal government over $5.4 million in fiscal years 2013 and 2014. However, TSA’s May 2012 conversion plan reports that the conversion would result in approximately $1 million in savings, rather than the $5.4 million cited in the August 2012 OLE/FAMS memorandum. According to the May 2012 plan, TSA used the DHS Modular Cost Table to determine the potential cost savings from converting to a federal employee workforce at the Adjudication Center. TSA budget officials reported that they could not determine why the cost savings estimates varied between the May 2012 conversion plan and the August 2012 memorandum. The officials reported that the cost savings estimate was still speculative and that TSA would need to revisit its calculations. As of May 2013, TSA had provided no further information. TSA officials involved in the BWS Adjudication Center conversion noted that the delays in implementing the plan may pose challenges for TSA. For example, TSA’s contract for the Adjudication Center is a performance-based contract, with 1 option year remaining that would begin in February 2014 and run to January 2015. An official reported that continuing the contract would delay TSA from realizing potential cost savings, while the cost of the contract increases 3 percent per option year. Officials reported that if they did not begin hiring new federal employees by August 2013, they would need to begin the process of recompeting the Adjudication Center staffing contract to ensure continuity of operations in the event that TSA does not implement its conversion plan.
According to DHS BWS guidance, the more important the function, the more important it is to have internal capability to maintain control of the department's mission and operations. TSA's BWS assessment found that (1) TSA lacked sufficient internal capacity to control its use of contractors in Adjudication Center mission and operations, (2) TSA's reliance on a contractor workforce carried excessive risk, (3) the adjudicator functions were closely associated with inherently governmental functions, and (4) the positions should be insourced. This assessment was made almost 2 years ago. While senior TSA Adjudication Center management officials support implementation of the plan, collectively, TSA has not mitigated the risks and operational inefficiencies identified in the DHS BWS assessment. Moreover, TSA has not completed its internal review of the conversion plan, including determining a revised implementation schedule as well as hiring target levels and cost information. Completing this review, determining this information, and updating the conversion plan to ensure that the plan reflects current conditions and an estimation of cost savings will help TSA and DHS decision makers by providing a roadmap for moving forward. Finally, implementing TSA's Adjudication Center workforce conversion will be important to ensure TSA has sufficient and appropriate adjudication personnel to make the decisions that may deny or allow individuals unescorted access to the nation's critical transportation infrastructure. TSA's Adjudication Center plays a critical role by conducting security threat assessments to ensure individuals posing security threats are identified and are not granted TSA-related credentials for, among other things, unescorted access to secure areas of the nation's transportation systems. However, the Adjudication Center has faced challenges in fulfilling this role. 
First, while the Adjudication Center uses three key measures to evaluate the performance of the Adjudication Center's contract workforce, it has not documented its methods, and two of its measures are limited. For example, TSA's timeliness measure does not capture the extent to which the agency has communicated its adjudication decision to the applicant in a timely manner—a key TSA requirement for its credentialing programs—and TSA officials reported they did not maintain such documentation. Ensuring that the TIM program provides the capability for Adjudication Center officials to efficiently prepare and document applicant response time reports would help ensure that TSA meets standards and that decision makers identify and address performance challenges. In addition, the Adjudication Center's accuracy rate generally does not include cases in which contract adjudicators incorrectly approved an applicant—and these constitute roughly 90 percent of the Adjudication Center's caseload. Developing, documenting, and implementing an accuracy rate that includes this information will provide TSA with a more complete assessment of the performance of its workforce—regardless of whether the members of that workforce are contractors or TSA employees. Second, because of functional limitations in its case reporting systems, Adjudication Center management uses an undocumented, manual process to track adjudicator performance. Documenting the Adjudication Center's case reporting performance measurement practices is important to ensure continuity of operations in the event of attrition by the two Adjudication Center officials familiar with this process. Third, the Adjudication Center relies on its staffing plan to guide its workload planning decisions but has not shared updated versions of this plan with the credentialing program offices that it serves. 
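The more complete accuracy measure described above—one that counts incorrect approvals as well as incorrect disqualifications—could be computed along the following lines. This is a minimal sketch: the record fields and sample data are hypothetical, not drawn from TSA's actual quality-review process.

```python
# Minimal sketch of a combined accuracy rate that pools quality-review
# results for both approved and disqualified cases, so that incorrectly
# approved applicants also count against accuracy. Record fields and
# sample data are hypothetical illustrations, not TSA's actual format.

def combined_accuracy(reviews: list[dict]) -> float:
    """Share of reviewed adjudications (approvals and disqualifications
    alike) that the quality review found to be correct."""
    if not reviews:
        raise ValueError("no reviewed cases")
    correct = sum(1 for r in reviews if r["review_correct"])
    return correct / len(reviews)

sample = [
    {"decision": "approved", "review_correct": True},
    {"decision": "approved", "review_correct": False},  # incorrect approval now counted
    {"decision": "disqualified", "review_correct": True},
    {"decision": "disqualified", "review_correct": True},
]
print(f"Combined accuracy: {combined_accuracy(sample):.0%}")
```

A measure limited to disqualified cases would score the sample above at 100 percent, while the combined measure surfaces the incorrect approval—illustrating why pooling both decision types gives a more complete view of adjudicator performance.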
Establishing a mechanism for the Adjudication Center to share and reconcile information included in the staffing plan updates, such as timelines for anticipated workload growth, would help improve internal coordination and support the Adjudication Center's workload planning efforts. Fourth, TSA's 2011 BWS assessment for the Adjudication Center found that the adjudicator function is closely associated with inherently governmental functions and recommended that TSA insource its Adjudication Center workforce to mitigate the risks that contractors were making security credential approvals without sufficient federal oversight. Taking additional steps to end its use of contract adjudicators and convert to an all-federal employee adjudicator workforce would help TSA mitigate such risks, but it has been delayed in doing so. Completing its internal review and updating and documenting the conversion plan to ensure that the plan reflects current conditions, including timelines for hiring, planned hiring numbers, and cost information, would help TSA and DHS decision makers by providing a roadmap for moving forward. Finally, providing this plan to TSA and DHS leadership for review is an important step to help ensure TSA addresses the risks identified in the 2011 BWS assessment and has an appropriate workforce to make the decisions that may ultimately deny or allow individuals credentials for unescorted access to the nation's critical transportation infrastructure. We recommend that the Secretary of Homeland Security direct the TSA Administrator to take the following five actions: To ensure that the Adjudication Center accuracy rate effectively captures the center's accuracy in completing security threat assessments, the Adjudication Center should develop an accuracy rate measure that includes accuracy data for cases where adjudicators both approved and disqualified applicants, document this methodology, and implement the process. 
To ensure continuity of case reporting, the Adjudication Center should document its case reporting performance management processes. To ensure workforce planning is based on accurate workload projections, establish a mechanism for TSA's OIA Program Management Division and OLE/FAMS Adjudication Center to share and reconcile information included in the Adjudication Center's staffing plan updates, such as timelines for anticipated workload growth. To advance efforts to address risks identified in the 2011 BWS assessment, update and document the Adjudication Center insourcing conversion plan to reflect revised schedule timeframes and cost and hiring level information. Finally, review the updated Adjudication Center insourcing conversion plan and provide it to TSA and DHS leadership for review and implementation approval. We provided a draft of this report to DHS for review and comment. DHS, in written comments received July 2, 2013, concurred with all five of the recommendations in the report and identified actions taken, planned, or under way to implement the recommendations. Written comments are summarized below, and official DHS comments are reproduced in appendix II. In addition, DHS provided written technical comments, which we incorporated into the report, as appropriate. In commenting on the draft report, DHS described efforts underway or planned to address our recommendations. DHS also noted that the Adjudication Center's caseload performance measure of keeping backlogs below 1,500 cases is a self-imposed standard that TSA established to provide the best possible customer service to applicants. We agree that the Adjudication Center's caseload performance measure was developed by TSA. Regardless of its source, however, TSA's caseload standard is a contractual requirement, and our analysis of TSA data found that the Adjudication Center contractor generally did not meet this requirement between October 2010 and January 2013. 
In addressing our recommendations, DHS concurred with our first recommendation that TSA should develop an accuracy rate that includes accuracy data for both cases where an applicant is approved and cases where an applicant is disqualified, document this methodology, and implement the process. DHS stated that TSA OIA will modify its current quality control process to include both approved and disqualified cases that will more accurately reflect the adjudications performed. Furthermore, DHS reported that it will develop, document, and formalize an accuracy rate measure that includes review of approved and disqualified cases. Such actions will ensure that the Adjudication Center’s accuracy rate measure provides a more comprehensive assessment of adjudicator performance. DHS also concurred with our second recommendation that TSA should document the Adjudication Center’s case reporting performance management processes. DHS stated that while TSA anticipates that the current manual process will be phased out and replaced by an automated process as the TIM program is implemented, TSA OIA will document the current manual performance management process. DHS stated that documenting the process will confirm the Adjudication Center’s performance is accurately tracked and will also ensure continuity in the event of personnel turnover. These actions, if implemented effectively, should address the intent of our recommendation. Regarding our third recommendation that OIA’s Program Management Division and the OLE/FAMS Adjudication Center should establish a mechanism to share and reconcile information included in the Adjudication Center’s staffing plan updates, such as timelines for anticipated workload growth, DHS concurred. DHS reported that the OIA Program Management Division and the OLE/FAMS Adjudication Center were already working to resolve the issues and had begun coordination to ensure security threat assessment workload estimates and the staffing plan are updated. 
DHS stated that TSA will formalize a quarterly review process between the Program Management Division and the Adjudication Center to meet and discuss these issues. DHS concurred with our fourth recommendation to update and document the Adjudication Center’s insourcing conversion plan to reflect revised schedule timeframes and cost and hiring level information. In its comments, DHS stated that OIA is working with the DHS Office of the Chief Human Capital Officer to address any potential issues posed by using a mix of government employees and contractors. Furthermore, DHS reported that TSA will update its insourcing conversion plan to reflect current timelines, costs, and hiring levels. Such actions should improve TSA’s ongoing insourcing efforts. Lastly, DHS concurred with our fifth recommendation that TSA review the updated Adjudication Center insourcing conversion plan and provide it to TSA and DHS leadership for review and implementation approval. DHS stated that OIA has already begun updating the insourcing conversion plan and intends to provide it for review and approval. We will continue to monitor DHS’s efforts. As arranged with your office, unless you publicly announce its contents earlier, we plan on no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Assistant Secretary for the Transportation Security Administration, and appropriate congressional committees. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix III. 
Figure 3 shows that responsibility for carrying out programs related to issuing credentials for transportation workers is divided among multiple Transportation Security Administration (TSA) offices. In particular, the TSA Office of Intelligence and Analysis manages transportation security credentialing programs—including the three largest programs: the Transportation Worker Identification Credential (TWIC) program for maritime workers; the Hazardous Materials Endorsement (HME) program for truckers seeking a commercial driver’s license endorsement to carry hazardous materials; and the Aviation Worker program. Within the Office of Law Enforcement/Federal Air Marshal Service, the Adjudication Center is responsible for providing security threat assessment adjudication services to meet the workload needs of TSA programs. Figure 3 TSA Organization Chart Showing Key Offices Responsible for Managing and Implementing Transportation Security Threat Assessment Programs, as of May 2013. In addition to the contact names above, David Bruno (Assistant Director), Jason Berman (Analyst-in-Charge), Carl Barden, Melissa Bogar, Jennifer Dougherty, Eric Hauswirth, Richard Hung, Thomas Lombardi, Stephen M. Lord, Steve Morris, Jessica Orr, Minette Richardson, Katherine Trimble, and Jonathan Tumin made key contributions to this report.
TSA implements programs that, for example, ensure individuals with unescorted access to secure areas of the nation's critical transportation infrastructure do not pose a security threat. Key to these programs are security threat assessments that screen individuals for links to terrorism, criminal history, and immigration status. TSA's Adjudication Center serves as the primary operational component in this process. GAO was asked to examine the performance and staffing strategy of the center. This report addresses the extent to which 1) TSA has measured performance for the center and what the data show; 2) TSA offices have coordinated to meet security threat assessment workload; and 3) TSA addressed potential risks posed by using a mix of government employees and contractors to adjudicate security threat assessments. GAO analyzed TSA data describing the center's performance since October 2010; reviewed documentation, including staffing plans; and interviewed TSA officials about data measurement and staffing practices. The Transportation Security Administration's (TSA) Adjudication Center performance data show mixed results, and the center's performance measurement practices have limitations. The Adjudication Center relies on contractors to adjudicate security threat assessments and uses three primary measures to evaluate their performance--timeliness for completing adjudication, adjudication accuracy, and caseload status. GAO found that the Adjudication Center contractor met its timeliness and accuracy measures, but faced challenges in meeting its caseload measure. The Adjudication Center's timeliness and accuracy measures did not capture key data. According to TSA officials, the Adjudication Center's accuracy rate is based on a review of all cases where adjudicators had disqualified an applicant. 
However, this calculation generally does not include the accuracy rate for those applicants adjudicators had approved--which account for roughly 90 percent of the Adjudication Center's caseload. In this way, the accuracy rate provides a limited assessment of adjudicator performance. By developing an accuracy rate that includes data on both incorrectly disqualified and incorrectly approved applicants, TSA can better identify and address performance issues among its workforce. Two TSA offices that share responsibility for implementing security threat assessments--the Program Management Division in the Office of Intelligence and Analysis and the Adjudication Center in the Office of Law Enforcement/Federal Air Marshal Service--can improve coordination on workforce planning. While the offices share information on workload completion, they do not have a process in place to ensure that information in the Adjudication Center's staffing plan--which the Adjudication Center periodically updates to reflect caseload projections and associated staffing needs--reflects the mutual understanding of both offices. For example, program managers in the Office of Intelligence and Analysis reported to GAO that they were unfamiliar with the staffing plan and that they disagreed with workload projections in the plan. Establishing a mechanism for the offices to share and reconcile information in the plan can help better support the Adjudication Center's workforce planning. TSA has been delayed in addressing risks posed by using contractors to adjudicate security threat assessments. In October 2011, TSA's Balanced Workforce Strategy Working Group completed its assessment for the Adjudication Center and determined that an excessive risk exists by allowing contractors to make security threat assessment approvals without sufficient federal oversight. The Working Group recommended that TSA convert to an all-government workforce. 
According to a May 2012 implementation plan, TSA planned to convert this workforce by the end of calendar year 2013. However, delays have rendered the timelines and cost information in its plan outdated and TSA has not updated the plan or determined a revised implementation schedule. Completing this review and updating the plan would help TSA and Department of Homeland Security (DHS) decision makers by providing a roadmap for moving forward. Finally, providing this plan to DHS for review will be important to help ensure TSA can begin its conversion and mitigate identified risks of using contract adjudicators to conduct security threat assessments. GAO recommends that TSA, among other things: direct the Adjudication Center to calculate an accuracy rate that includes adjudicator performance for cases where applicants were both approved and disqualified; share adjudicator staffing plans among key program offices; and update its Adjudication Center workforce conversion plan and provide it to DHS for review and approval. DHS concurred with our recommendations.